From: Samuel Thibault <samuel.thibault(a)ens-lyon.org>
mainline inclusion
from mainline-v5.10
commit d4122754442799187d5d537a9c039a49a67e57f1
category: bugfix
bugzilla: NA
CVE: CVE-2020-28941
--------------------------------
Speakup has only one speakup_tty variable to store the tty it is managing. This
makes sense since its codebase currently assumes that there is only one user who
controls the screen reading.
That however means that we have to forbid using the line discipline several
times, otherwise the second closure would try to free a NULL ldisc_data, leading to
general protection fault: 0000 [#1] SMP KASAN PTI
RIP: 0010:spk_ttyio_ldisc_close+0x2c/0x60
Call Trace:
tty_ldisc_release+0xa2/0x340
tty_release_struct+0x17/0xd0
tty_release+0x9d9/0xcc0
__fput+0x231/0x740
task_work_run+0x12c/0x1a0
do_exit+0x9b5/0x2230
? release_task+0x1240/0x1240
? __do_page_fault+0x562/0xa30
do_group_exit+0xd5/0x2a0
__x64_sys_exit_group+0x35/0x40
do_syscall_64+0x89/0x2b0
? page_fault+0x8/0x30
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Cc: stable(a)vger.kernel.org
Reported-by: 秦世松 <qinshisong1205(a)gmail.com>
Signed-off-by: Samuel Thibault <samuel.thibault(a)ens-lyon.org>
Tested-by: Shisong Qin <qinshisong1205(a)gmail.com>
Link: https://lore.kernel.org/r/20201110183541.fzgnlwhjpgqzjeth@function
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
drivers/accessibility/speakup/spk_ttyio.c
[yyl: spk_ttyio.c is in drivers/staging/ in kernel-4.19]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/staging/speakup/spk_ttyio.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/speakup/spk_ttyio.c b/drivers/staging/speakup/spk_ttyio.c
index 93742dbdee77..6c754ddf1257 100644
--- a/drivers/staging/speakup/spk_ttyio.c
+++ b/drivers/staging/speakup/spk_ttyio.c
@@ -49,15 +49,25 @@ static int spk_ttyio_ldisc_open(struct tty_struct *tty)
if (tty->ops->write == NULL)
return -EOPNOTSUPP;
+
+ mutex_lock(&speakup_tty_mutex);
+ if (speakup_tty) {
+ mutex_unlock(&speakup_tty_mutex);
+ return -EBUSY;
+ }
speakup_tty = tty;
ldisc_data = kmalloc(sizeof(struct spk_ldisc_data), GFP_KERNEL);
- if (!ldisc_data)
+ if (!ldisc_data) {
+ speakup_tty = NULL;
+ mutex_unlock(&speakup_tty_mutex);
return -ENOMEM;
+ }
sema_init(&ldisc_data->sem, 0);
ldisc_data->buf_free = true;
speakup_tty->disc_data = ldisc_data;
+ mutex_unlock(&speakup_tty_mutex);
return 0;
}
--
2.25.1
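The fix above applies a common single-owner open pattern: take a lock, refuse a second open with -EBUSY, and roll the global pointer back on allocation failure so a later close never frees uninitialised state. A minimal user-space sketch of that pattern, with hypothetical names (this is not the Speakup driver code):

#include <errno.h>
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t owner_lock = PTHREAD_MUTEX_INITIALIZER;
static void *owner_tty;                   /* plays the role of speakup_tty */
static void *owner_data;                  /* plays the role of ldisc_data */

int single_owner_open(void *tty)
{
        void *data;

        pthread_mutex_lock(&owner_lock);
        if (owner_tty) {                  /* a second open is refused */
                pthread_mutex_unlock(&owner_lock);
                return -EBUSY;
        }
        owner_tty = tty;

        data = malloc(64);                /* per-open state */
        if (!data) {
                owner_tty = NULL;         /* roll back so a later close never frees NULL */
                pthread_mutex_unlock(&owner_lock);
                return -ENOMEM;
        }
        owner_data = data;
        pthread_mutex_unlock(&owner_lock);
        return 0;
}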
Al Viro (1):
don't dump the threads that had been already exiting when zapped.
Alexander Usyskin (1):
mei: protect mei_cl_mtu from null dereference
Anand Jain (1):
btrfs: dev-replace: fail mount if we don't have replace item with
target device
Anand K Mistry (1):
x86/speculation: Allow IBPB to be conditionally enabled on CPUs with
always-on STIBP
Andrew Jeffery (1):
ARM: 9019/1: kprobes: Avoid fortify_panic() when copying optprobe
template
Andy Shevchenko (1):
pinctrl: intel: Set default bias in case no particular value given
Ard Biesheuvel (1):
crypto: arm64/aes-modes - get rid of literal load of addend vector
Arnaldo Carvalho de Melo (1):
perf scripting python: Avoid declaring function pointers with a
visibility attribute
Arnaud de Turckheim (3):
gpio: pcie-idio-24: Fix irq mask when masking
gpio: pcie-idio-24: Fix IRQ Enable Register value
gpio: pcie-idio-24: Enable PEX8311 interrupts
Baolin Wang (1):
mfd: sprd: Add wakeup capability for PMIC IRQ
Billy Tsai (1):
pinctrl: aspeed: Fix GPI only function problem.
Bob Peterson (3):
gfs2: Free rd_bits later in gfs2_clear_rgrpd to fix use-after-free
gfs2: Add missing truncate_inode_pages_final for sd_aspace
gfs2: check for live vs. read-only file system in gfs2_fitrim
Boris Protopopov (1):
Convert trailing spaces and periods in path components
Brian Foster (1):
xfs: flush new eof page on truncate to avoid post-eof corruption
Chen Zhou (1):
selinux: Fix error return code in sel_ib_pkey_sid_slow()
Chris Brandt (1):
usb: cdc-acm: Add DISABLE_ECHO for Renesas USB Download mode
Christoph Hellwig (2):
nbd: fix a block_device refcount leak in nbd_release
xfs: fix a missing unlock on error in xfs_fs_map_blocks
Chunyan Zhang (1):
tick/common: Touch watchdog in tick_unfreeze() on all CPUs
Coiby Xu (2):
pinctrl: amd: use higher precision for 512 RtcClk
pinctrl: amd: fix incorrect way to disable debounce filter
Dan Carpenter (3):
ALSA: hda: prevent undefined shift in snd_hdac_ext_bus_get_link()
can: peak_usb: add range checking in decode operations
futex: Don't enable IRQs unconditionally in put_pi_state()
Darrick J. Wong (6):
xfs: set xefi_discard when creating a deferred agfl free log intent
item
xfs: fix scrub flagging rtinherit even if there is no rt device
xfs: fix flags argument to rmap lookup when converting shared file
rmaps
xfs: set the unwritten bit in rmap lookup flags in
xchk_bmap_get_rmapextents
xfs: fix rmap key and record comparison functions
xfs: fix brainos in the refcount scrubber's rmap fragment processor
Dinghao Liu (1):
btrfs: ref-verify: fix memory leak in btrfs_ref_tree_mod
Evan Nimmo (1):
of/address: Fix of_node memory leak in of_dma_is_coherent
Evan Quan (3):
drm/amdgpu: perform srbm soft reset always on SDMA resume
drm/amd/pm: perform SMC reset on suspend/hibernation
drm/amd/pm: do not use ixFEATURE_STATUS for checking smc running
Evgeny Novikov (1):
usb: gadget: goku_udc: fix potential crashes in probe
Filipe Manana (1):
Btrfs: fix missing error return if writeback for extent buffer never
started
Gao Xiang (1):
erofs: derive atime instead of leaving it empty
George Spelvin (1):
random32: make prandom_u32() output unpredictable
Greg Kroah-Hartman (1):
Linux 4.19.158
Hannes Reinecke (1):
scsi: scsi_dh_alua: Avoid crash during alua_bus_detach()
Heiner Kallweit (1):
r8169: fix potential skb double free in an error path
Jason A. Donenfeld (1):
netfilter: use actual socket sk rather than skb sk when routing harder
Jerry Snitselaar (1):
tpm_tis: Disable interrupts on ThinkPad T490s
Jing Xiangfeng (1):
thunderbolt: Add the missed ida_simple_remove() in ring_request_msix()
Jiri Olsa (1):
perf tools: Add missing swap for ino_generation
Joakim Zhang (1):
can: flexcan: remove FLEXCAN_QUIRK_DISABLE_MECR quirk for LS1021A
Johannes Berg (1):
mac80211: fix use of skb payload instead of header
Johannes Thumshirn (1):
btrfs: reschedule when cloning lots of extents
Josef Bacik (1):
btrfs: sysfs: init devices outside of the chunk_mutex
Joseph Qi (1):
ext4: unlock xattr_sem properly in ext4_inline_data_truncate()
Kaixu Xia (1):
ext4: correctly report "not supported" for {usr,grp}jquota when
!CONFIG_QUOTA
Keita Suzuki (1):
scsi: hpsa: Fix memory leak in hpsa_init_one()
Mao Wenan (1):
net: Update window_clamp if SOCK_RCVBUF is set
Marc Kleine-Budde (1):
can: rx-offload: don't call kfree_skb() from IRQ context
Marc Zyngier (1):
genirq: Let GENERIC_IRQ_IPI select IRQ_DOMAIN_HIERARCHY
Martin Schiller (1):
net/x25: Fix null-ptr-deref in x25_connect
Martin Willi (1):
vrf: Fix fast path output packet handling with async Netfilter rules
Masashi Honma (1):
ath9k_htc: Use appropriate rs_datalen type
Matteo Croce (2):
Revert "kernel/reboot.c: convert simple_strtoul to kstrtoint"
reboot: fix overflow parsing reboot cpu number
Matthew Wilcox (Oracle) (1):
btrfs: fix potential overflow in cluster_pages_for_defrag on 32bit
arch
Michał Mirosław (1):
regulator: defer probe when trying to get voltage from unresolved
supply
Mika Westerberg (1):
thunderbolt: Fix memory leak if ida_simple_get() fails in
enumerate_services()
Olaf Hering (1):
hv_balloon: disable warning when floor reached
Oleksij Rempel (1):
can: can_create_echo_skb(): fix echo skb generation: always use
skb_clone()
Oliver Hartkopp (1):
can: dev: __can_get_echo_skb(): fix real payload length return value
for RTR frames
Oliver Herms (1):
IPv6: Set SIT tunnel hard_header_len to zero
Peter Zijlstra (1):
perf: Fix get_recursion_context()
Qian Cai (1):
s390/smp: move rcu_cpu_starting() earlier
Shin'ichiro Kawasaki (1):
uio: Fix use-after-free in uio_unregister_device()
Stefano Brivio (1):
netfilter: ipset: Update byte and packet counters regardless of
whether they match
Stefano Stabellini (1):
swiotlb: fix "x86: Don't panic if can not alloc buffer for swiotlb"
Stephane Grosjean (2):
can: peak_usb: peak_usb_get_ts_time(): fix timestamp wrapping
can: peak_canfd: pucan_handle_can_rx(): fix echo management when
loopback is on
Suravee Suthikulpanit (1):
iommu/amd: Increase interrupt remapping table limit to 512 entries
Sven Van Asbroeck (1):
lan743x: fix "BUG: invalid wait context" when setting rx mode
Thinh Nguyen (2):
usb: dwc3: gadget: Continue to process pending requests
usb: dwc3: gadget: Reclaim extra TRBs after request completion
Thomas Zimmermann (1):
drm/gma500: Fix out-of-bounds access to struct drm_device.vblank[]
Tommi Rantala (1):
selftests: proc: fix warning: _GNU_SOURCE redefined
Tyler Hicks (1):
tpm: efi: Don't create binary_bios_measurements file for an empty log
Ursula Braun (1):
net/af_iucv: fix null pointer dereference on shutdown
Vincent Mailhol (1):
can: dev: can_get_echo_skb(): prevent call to kfree_skb() in hard IRQ
context
Wang Hai (2):
cosa: Add missing kfree in error path of cosa_write
tipc: fix memory leak in tipc_topsrv_start()
Wengang Wang (1):
ocfs2: initialize ip_next_orphan
Ye Bin (1):
cfg80211: regulatory: Fix inconsistent format argument
Yoshihiro Shimoda (1):
mmc: renesas_sdhi_core: Add missing tmio_mmc_host_free() at remove
Yunsheng Lin (1):
net: sch_generic: fix the missing new qdisc assignment bug
Zeng Tao (1):
time: Prevent undefined behaviour in timespec64_to_ns()
Zhang Qilong (2):
vfio: platform: fix reference leak in vfio_platform_open
xhci: hisilicon: fix refercence leak in xhci_histb_probe
zhuoliang zhang (1):
net: xfrm: fix a race condition during allocing spi
Makefile | 2 +-
arch/arm/include/asm/kprobes.h | 22 +-
arch/arm/probes/kprobes/opt-arm.c | 18 +-
arch/arm64/crypto/aes-modes.S | 16 +-
arch/s390/kernel/smp.c | 3 +-
arch/x86/kernel/cpu/bugs.c | 52 +-
drivers/block/nbd.c | 1 +
drivers/char/random.c | 1 -
drivers/char/tpm/eventlog/efi.c | 5 +
drivers/char/tpm/tpm_tis.c | 29 +-
drivers/gpio/gpio-pcie-idio-24.c | 62 ++-
drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 27 +-
.../gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 4 +
drivers/gpu/drm/amd/powerplay/inc/hwmgr.h | 1 +
drivers/gpu/drm/amd/powerplay/inc/smumgr.h | 2 +
.../gpu/drm/amd/powerplay/smumgr/ci_smumgr.c | 29 +-
drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c | 8 +
drivers/gpu/drm/gma500/psb_irq.c | 34 +-
drivers/hv/hv_balloon.c | 2 +-
drivers/iommu/amd_iommu_types.h | 6 +-
drivers/mfd/sprd-sc27xx-spi.c | 28 +-
drivers/misc/mei/client.h | 4 +-
drivers/mmc/host/renesas_sdhi_core.c | 1 +
drivers/net/can/dev.c | 14 +-
drivers/net/can/flexcan.c | 3 +-
drivers/net/can/peak_canfd/peak_canfd.c | 11 +-
drivers/net/can/rx-offload.c | 4 +-
drivers/net/can/usb/peak_usb/pcan_usb_core.c | 51 +-
drivers/net/can/usb/peak_usb/pcan_usb_fd.c | 48 +-
drivers/net/ethernet/microchip/lan743x_main.c | 12 +-
drivers/net/ethernet/microchip/lan743x_main.h | 3 -
drivers/net/ethernet/realtek/r8169.c | 3 +-
drivers/net/vrf.c | 92 +++-
drivers/net/wan/cosa.c | 1 +
drivers/net/wireless/ath/ath9k/htc_drv_txrx.c | 2 +-
drivers/of/address.c | 4 +-
drivers/pinctrl/aspeed/pinctrl-aspeed.c | 7 +-
drivers/pinctrl/intel/pinctrl-intel.c | 8 +
drivers/pinctrl/pinctrl-amd.c | 6 +-
drivers/regulator/core.c | 2 +
drivers/scsi/device_handler/scsi_dh_alua.c | 9 +-
drivers/scsi/hpsa.c | 4 +-
drivers/staging/erofs/inode.c | 21 +-
drivers/thunderbolt/nhi.c | 19 +-
drivers/thunderbolt/xdomain.c | 1 +
drivers/uio/uio.c | 10 +-
drivers/usb/class/cdc-acm.c | 9 +
drivers/usb/dwc3/gadget.c | 32 +-
drivers/usb/gadget/udc/goku_udc.c | 2 +-
drivers/usb/host/xhci-histb.c | 2 +-
drivers/vfio/platform/vfio_platform_common.c | 3 +-
fs/btrfs/dev-replace.c | 26 +-
fs/btrfs/extent_io.c | 4 +
fs/btrfs/ioctl.c | 12 +-
fs/btrfs/ref-verify.c | 1 +
fs/btrfs/volumes.c | 33 +-
fs/cifs/cifs_unicode.c | 8 +-
fs/ext4/inline.c | 1 +
fs/ext4/super.c | 4 +-
fs/gfs2/rgrp.c | 5 +-
fs/gfs2/super.c | 1 +
fs/ocfs2/super.c | 1 +
fs/xfs/libxfs/xfs_alloc.c | 1 +
fs/xfs/libxfs/xfs_bmap.h | 2 +-
fs/xfs/libxfs/xfs_rmap.c | 2 +-
fs/xfs/libxfs/xfs_rmap_btree.c | 16 +-
fs/xfs/scrub/bmap.c | 2 +
fs/xfs/scrub/inode.c | 3 +-
fs/xfs/scrub/refcount.c | 8 +-
fs/xfs/xfs_iops.c | 10 +
fs/xfs/xfs_pnfs.c | 2 +-
include/linux/can/skb.h | 20 +-
include/linux/netfilter_ipv4.h | 2 +-
include/linux/netfilter_ipv6.h | 2 +-
include/linux/prandom.h | 36 +-
include/linux/time64.h | 4 +
kernel/dma/swiotlb.c | 6 +-
kernel/events/internal.h | 2 +-
kernel/exit.c | 5 +-
kernel/futex.c | 5 +-
kernel/irq/Kconfig | 1 +
kernel/reboot.c | 28 +-
kernel/time/itimer.c | 4 -
kernel/time/tick-common.c | 2 +
kernel/time/timer.c | 7 -
lib/random32.c | 462 +++++++++++-------
net/ipv4/netfilter.c | 12 +-
net/ipv4/netfilter/ipt_SYNPROXY.c | 2 +-
net/ipv4/netfilter/iptable_mangle.c | 2 +-
net/ipv4/netfilter/nf_nat_l3proto_ipv4.c | 2 +-
net/ipv4/netfilter/nf_reject_ipv4.c | 2 +-
net/ipv4/netfilter/nft_chain_route_ipv4.c | 2 +-
net/ipv4/syncookies.c | 9 +-
net/ipv6/netfilter.c | 6 +-
net/ipv6/netfilter/ip6table_mangle.c | 2 +-
net/ipv6/netfilter/nf_nat_l3proto_ipv6.c | 2 +-
net/ipv6/netfilter/nft_chain_route_ipv6.c | 2 +-
net/ipv6/sit.c | 2 -
net/ipv6/syncookies.c | 10 +-
net/iucv/af_iucv.c | 3 +-
net/mac80211/tx.c | 37 +-
net/netfilter/ipset/ip_set_core.c | 3 +-
net/netfilter/ipvs/ip_vs_core.c | 4 +-
net/sched/sch_generic.c | 3 +
net/tipc/topsrv.c | 10 +-
net/wireless/reg.c | 2 +-
net/x25/af_x25.c | 2 +-
net/xfrm/xfrm_state.c | 8 +-
security/selinux/ibpkey.c | 4 +-
sound/hda/ext/hdac_ext_controller.c | 2 +
.../scripting-engines/trace-event-python.c | 7 +-
tools/perf/util/session.c | 1 +
.../testing/selftests/proc/proc-loadavg-001.c | 1 -
.../selftests/proc/proc-self-syscall.c | 1 -
.../testing/selftests/proc/proc-uptime-002.c | 1 -
115 files changed, 1073 insertions(+), 539 deletions(-)
--
2.25.1

[PATCH 1/2] ascend: share_pool: support debug mode and refactor some functions
by Yang Yingliang 23 Nov '20
From: Ding Tianhong <dingtianhong(a)huawei.com>
ascend inclusion
category: feature
bugzilla: NA
CVE: NA
-------------------------------------------------
The share pool is widely used by several accelerators, and user problems
are difficult to debug, so add a debug mode to help analyse them; the mode
is enabled through the sysctl_sp_debug_mode flag.
Some functions have been refactored to protect the critical sections
correctly and to output messages more clearly.
Signed-off-by: Tang Yizhou <tangyizhou(a)huawei.com>
Signed-off-by: Zhou Guanghui <zhouguanghui1(a)huawei.com>
Signed-off-by: Wu Peng <wupeng58(a)huawei.com>
Signed-off-by: Ding Tianhong <dingtianhong(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/share_pool.h | 15 +-
kernel/sysctl.c | 9 +
mm/share_pool.c | 537 +++++++++++++++++++++----------------
3 files changed, 326 insertions(+), 235 deletions(-)
diff --git a/include/linux/share_pool.h b/include/linux/share_pool.h
index 09afbae33d41..2557ef138122 100644
--- a/include/linux/share_pool.h
+++ b/include/linux/share_pool.h
@@ -5,6 +5,7 @@
#include <linux/mm_types.h>
#include <linux/notifier.h>
#include <linux/vmalloc.h>
+#include <linux/printk.h>
#define SP_HUGEPAGE (1 << 0)
#define SP_HUGEPAGE_ONLY (1 << 1)
@@ -35,6 +36,8 @@ extern int sysctl_share_pool_hugepage_enable;
extern int sysctl_ac_mode;
+extern int sysctl_sp_debug_mode;
+
extern int enable_ascend_share_pool;
/* Processes in the same sp_group can share memory.
@@ -70,7 +73,7 @@ struct sp_group {
/* number of sp_area */
atomic_t spa_num;
/* total size of all sp_area from sp_alloc and k2u(spg) */
- atomic_t size;
+ atomic64_t size;
/* record the number of hugepage allocation failures */
int hugepage_failures;
/* is_alive == false means it's being destroyed */
@@ -211,6 +214,12 @@ static inline bool sp_mmap_check(unsigned long flags)
return false;
}
+static inline void sp_dump_stack(void)
+{
+ if (sysctl_sp_debug_mode)
+ dump_stack();
+}
+
#else
static inline int sp_group_add_task(int pid, int spg_id)
@@ -349,6 +358,10 @@ static inline bool sp_mmap_check(unsigned long flags)
{
return false;
}
+
+static inline void sp_dump_stack(void)
+{
+}
#endif
#endif /* LINUX_SHARE_POOL_H */
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 61e62f1ccee4..26c215fb37dc 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1737,6 +1737,15 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
.extra2 = &one,
},
+ {
+ .procname = "sharepool_debug_mode",
+ .data = &sysctl_sp_debug_mode,
+ .maxlen = sizeof(sysctl_sp_debug_mode),
+ .mode = 0600,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &zero,
+ .extra2 = &one,
+ },
#endif
{ }
};
diff --git a/mm/share_pool.c b/mm/share_pool.c
index fcbc831f7f8c..24c5dd680451 100644
--- a/mm/share_pool.c
+++ b/mm/share_pool.c
@@ -57,6 +57,8 @@ static const int mdc_default_group_id = 1;
/* access control mode */
int sysctl_ac_mode = AC_NONE;
+/* debug mode */
+int sysctl_sp_debug_mode;
/* idr of all sp_groups */
static DEFINE_IDR(sp_group_idr);
@@ -85,9 +87,11 @@ struct sp_proc_stat {
/* for kthread buff_module_guard_work */
static struct sp_proc_stat kthread_stat = {0};
-/* The caller must hold sp_mutex. */
-static struct sp_proc_stat *sp_init_proc_stat(struct task_struct *tsk)
-{
+/*
+ * The caller must hold sp_mutex and ensure no concurrency problem
+ * for task_struct and mm_struct.
+ */
+static struct sp_proc_stat *sp_init_proc_stat(struct task_struct *tsk) {
struct sp_proc_stat *stat;
int id = tsk->mm->sp_stat_id;
int tgid = tsk->tgid;
@@ -138,7 +142,7 @@ static struct sp_spa_stat spa_stat = {0};
/* statistics of all sp group born from sp_alloc and k2u(spg) */
struct sp_spg_stat {
atomic_t spa_total_num;
- atomic_t spa_total_size;
+ atomic64_t spa_total_size;
};
static struct sp_spg_stat spg_stat = {0};
@@ -166,10 +170,11 @@ struct sp_area {
struct list_head link; /* link to the spg->head */
struct sp_group *spg;
enum spa_type type; /* where spa born from */
+ struct mm_struct *mm; /* owner of k2u(task) */
};
static DEFINE_SPINLOCK(sp_area_lock);
static struct rb_root sp_area_root = RB_ROOT;
-bool host_svm_sp_enable = false;
+static bool host_svm_sp_enable = false;
int sysctl_share_pool_hugepage_enable = 1;
@@ -241,7 +246,7 @@ static int spa_dec_usage(enum spa_type type, unsigned long size)
return 0;
}
-static void *sp_mmap(struct mm_struct *mm, struct file *file,
+static unsigned long sp_mmap(struct mm_struct *mm, struct file *file,
struct sp_area *spa, unsigned long *populate);
static void free_sp_group(struct sp_group *spg)
@@ -274,7 +279,18 @@ static struct sp_group *__sp_find_spg(int pid, int spg_id)
if (ret)
return NULL;
- spg = tsk->mm->sp_group;
+ /*
+ * Once we encounter a concurrency problem here.
+ * To fix it, we believe get_task_mm() and mmput() is too
+ * heavy because we just get the pointer of sp_group.
+ */
+ task_lock(tsk);
+ if (tsk->mm == NULL)
+ spg = NULL;
+ else
+ spg = tsk->mm->sp_group;
+ task_unlock(tsk);
+
put_task_struct(tsk);
} else {
spg = idr_find(&sp_group_idr, spg_id);
@@ -318,7 +334,7 @@ static struct sp_group *find_or_alloc_sp_group(int spg_id)
}
spg->id = spg_id;
atomic_set(&spg->spa_num, 0);
- atomic_set(&spg->size, 0);
+ atomic64_set(&spg->size, 0);
spg->is_alive = true;
spg->hugepage_failures = 0;
spg->dvpp_multi_spaces = false;
@@ -377,9 +393,6 @@ static void sp_munmap_task_areas(struct mm_struct *mm, struct list_head *stop)
struct sp_area *spa, *prev = NULL;
int err;
- if (!mmget_not_zero(mm))
- return;
- down_write(&mm->mmap_sem);
spin_lock(&sp_area_lock);
list_for_each_entry(spa, &mm->sp_group->spa_list, link) {
@@ -406,8 +419,17 @@ static void sp_munmap_task_areas(struct mm_struct *mm, struct list_head *stop)
__sp_area_drop_locked(prev);
spin_unlock(&sp_area_lock);
- up_write(&mm->mmap_sem);
- mmput(mm);
+}
+
+/* The caller must hold sp_mutex. */
+static void __sp_group_drop_locked(struct sp_group *spg)
+{
+ bool is_alive = spg->is_alive;
+
+ if (atomic_dec_and_test(&spg->use_count)) {
+ BUG_ON(is_alive);
+ free_sp_group(spg);
+ }
}
/**
@@ -446,8 +468,9 @@ int sp_group_add_task(int pid, int spg_id)
spg = idr_find(&sp_group_idr, spg_id);
if (!spg_valid(spg)) {
mutex_unlock(&sp_mutex);
- pr_err("share pool: task add group failed because group id %d hasn't been create or dead\n",
- spg_id);
+ if (printk_ratelimit())
+ pr_err("share pool: task add group failed because group id %d "
+ "hasn't been create or dead\n", spg_id);
return -EINVAL;
}
mutex_unlock(&sp_mutex);
@@ -457,7 +480,9 @@ int sp_group_add_task(int pid, int spg_id)
spg_id = ida_alloc_range(&sp_group_id_ida, SPG_ID_AUTO_MIN,
SPG_ID_AUTO_MAX, GFP_ATOMIC);
if (spg_id < 0) {
- pr_err("share pool: task add group failed when automatically generate group id failed\n");
+ if (printk_ratelimit())
+ pr_err("share pool: task add group failed when automatically "
+ "generate group id failed\n");
return spg_id;
}
}
@@ -467,8 +492,9 @@ int sp_group_add_task(int pid, int spg_id)
SPG_ID_DVPP_PASS_THROUGH_MIN,
SPG_ID_DVPP_PASS_THROUGH_MAX, GFP_ATOMIC);
if (spg_id < 0) {
- pr_err("share pool: task add group failed when automatically generate group id failed"
- "in DVPP pass through\n");
+ if (printk_ratelimit())
+ pr_err("share pool: task add group failed when automatically "
+ "generate group id failed in DVPP pass through\n");
return spg_id;
}
}
@@ -494,25 +520,31 @@ int sp_group_add_task(int pid, int spg_id)
ret = PTR_ERR(spg);
goto out_put_task;
}
+ atomic_inc(&spg->use_count);
+
/* access control permission check */
if (sysctl_ac_mode == AC_SINGLE_OWNER) {
if (spg->owner != current->group_leader) {
ret = -EPERM;
- goto out_put_task;
+ goto out_drop_group;
}
}
+ mm = get_task_mm(tsk);
+ if (!mm) {
+ ret = -ESRCH;
+ goto out_drop_group;
+ }
+
/* per process statistics initialization */
stat = sp_init_proc_stat(tsk);
if (IS_ERR(stat)) {
ret = PTR_ERR(stat);
pr_err("share pool: init proc stat failed, ret %lx\n", PTR_ERR(stat));
- goto out_put_task;
+ goto out_put_mm;
}
- mm = tsk->mm;
mm->sp_group = spg;
- atomic_inc(&spg->use_count);
list_add_tail(&tsk->mm->sp_node, &spg->procs);
/*
* create mappings of existing shared memory segments into this
@@ -523,7 +555,7 @@ int sp_group_add_task(int pid, int spg_id)
list_for_each_entry(spa, &spg->spa_list, link) {
unsigned long populate = 0;
struct file *file = spa_file(spa);
- void *p;
+ unsigned long addr;
if (prev)
__sp_area_drop_locked(prev);
@@ -532,28 +564,24 @@ int sp_group_add_task(int pid, int spg_id)
atomic_inc(&spa->use_count);
spin_unlock(&sp_area_lock);
- p = sp_mmap(mm, file, spa, &populate);
- if (IS_ERR(p) && (PTR_ERR(p) != -ESPGMMEXIT)) {
+ down_write(&mm->mmap_sem);
+ addr = sp_mmap(mm, file, spa, &populate);
+ if (IS_ERR_VALUE(addr)) {
sp_munmap_task_areas(mm, &spa->link);
- ret = PTR_ERR(p);
+ up_write(&mm->mmap_sem);
+ ret = addr;
pr_err("share pool: task add group sp mmap failed, ret %d\n", ret);
spin_lock(&sp_area_lock);
break;
}
-
- if (PTR_ERR(p) == -ESPGMMEXIT) {
- pr_err("share pool: task add group sp mmap failed, ret -ESPGMEXIT\n");
- spin_lock(&sp_area_lock);
- ret = -ESPGMMEXIT;
- break;
- }
+ up_write(&mm->mmap_sem);
if (populate) {
ret = do_mm_populate(mm, spa->va_start, populate, 0);
if (ret) {
if (printk_ratelimit())
- pr_err("share pool: task add group failed when mm populate failed: %d\n",
- ret);
+ pr_warn("share pool: task add group failed when mm populate "
+ "failed (potential no enough memory): %d\n", ret);
sp_munmap_task_areas(mm, spa->link.next);
}
}
@@ -567,8 +595,16 @@ int sp_group_add_task(int pid, int spg_id)
if (unlikely(ret)) {
idr_remove(&sp_stat_idr, mm->sp_stat_id);
kfree(stat);
+ mm->sp_stat_id = 0;
+ list_del(&mm->sp_node);
+ mm->sp_group = NULL;
}
+out_put_mm:
+ mmput(mm);
+out_drop_group:
+ if (unlikely(ret))
+ __sp_group_drop_locked(spg);
out_put_task:
put_task_struct(tsk);
out_unlock:
@@ -609,9 +645,6 @@ void sp_group_exit(struct mm_struct *mm)
bool is_alive = true;
bool unlock;
- if (!enable_ascend_share_pool)
- return;
-
/*
* Nothing to do if this thread group doesn't belong to any sp_group.
* No need to protect this check with lock because we can add a task
@@ -638,18 +671,13 @@ void sp_group_exit(struct mm_struct *mm)
void sp_group_post_exit(struct mm_struct *mm)
{
- bool is_alive;
struct sp_proc_stat *stat;
bool unlock;
- if (!enable_ascend_share_pool)
- return;
-
if (!mm->sp_group)
return;
spg_exit_lock(&unlock);
- is_alive = mm->sp_group->is_alive;
/* pointer stat must be valid, we don't need to check sanity */
stat = idr_find(&sp_stat_idr, mm->sp_stat_id);
@@ -673,10 +701,7 @@ void sp_group_post_exit(struct mm_struct *mm)
idr_remove(&sp_stat_idr, mm->sp_stat_id);
- if (atomic_dec_and_test(&mm->sp_group->use_count)) {
- BUG_ON(is_alive);
- free_sp_group(mm->sp_group);
- }
+ __sp_group_drop_locked(mm->sp_group);
spg_exit_unlock(unlock);
kfree(stat);
@@ -716,7 +741,7 @@ static void __insert_sp_area(struct sp_area *spa)
static struct sp_area *sp_alloc_area(unsigned long size, unsigned long flags,
struct sp_group *spg, enum spa_type type)
{
- struct sp_area *spa;
+ struct sp_area *spa, *err;
struct rb_node *n;
unsigned long vstart = MMAP_SHARE_POOL_START;
unsigned long vend = MMAP_SHARE_POOL_16G_START;
@@ -728,6 +753,11 @@ static struct sp_area *sp_alloc_area(unsigned long size, unsigned long flags,
vstart = MMAP_SHARE_POOL_16G_START;
vend = MMAP_SHARE_POOL_16G_START + MMAP_SHARE_POOL_16G_SIZE;
} else {
+ if (!spg) {
+ if (printk_ratelimit())
+ pr_err("share pool: don't allow k2u(task) in host svm multiprocess scene\n");
+ return ERR_PTR(-EINVAL);
+ }
vstart = spg->dvpp_va_start;
vend = spg->dvpp_va_start + spg->dvpp_size;
}
@@ -735,14 +765,11 @@ static struct sp_area *sp_alloc_area(unsigned long size, unsigned long flags,
addr = vstart;
- if (!sysctl_share_pool_hugepage_enable)
- flags &= ~(SP_HUGEPAGE_ONLY | SP_HUGEPAGE);
-
spa = kmalloc(sizeof(struct sp_area), GFP_KERNEL);
if (unlikely(!spa)) {
if (printk_ratelimit())
pr_err("share pool: alloc spa failed due to lack of memory\n");
- return NULL;
+ return ERR_PTR(-ENOMEM);
}
spin_lock(&sp_area_lock);
@@ -788,6 +815,7 @@ static struct sp_area *sp_alloc_area(unsigned long size, unsigned long flags,
}
found:
if (addr + size_align > vend) {
+ err = ERR_PTR(-EOVERFLOW);
goto error;
}
@@ -799,15 +827,17 @@ static struct sp_area *sp_alloc_area(unsigned long size, unsigned long flags,
atomic_set(&spa->use_count, 1);
spa->type = type;
- if (spa_inc_usage(type, size))
+ if (spa_inc_usage(type, size)) {
+ err = ERR_PTR(-EINVAL);
goto error;
+ }
__insert_sp_area(spa);
if (spa->spg) {
atomic_inc(&spg->spa_num);
- atomic_add(size, &spg->size);
+ atomic64_add(size, &spg->size);
atomic_inc(&spg_stat.spa_total_num);
- atomic_add(size, &spg_stat.spa_total_size);
+ atomic64_add(size, &spg_stat.spa_total_size);
list_add_tail(&spa->link, &spg->spa_list);
}
spin_unlock(&sp_area_lock);
@@ -817,7 +847,7 @@ static struct sp_area *sp_alloc_area(unsigned long size, unsigned long flags,
error:
spin_unlock(&sp_area_lock);
kfree(spa);
- return NULL;
+ return err;
}
/* the caller should hold sp_area_lock */
@@ -862,9 +892,9 @@ static void sp_free_area(struct sp_area *spa)
spa_dec_usage(spa->type, spa->real_size); /* won't fail */
if (spa->spg) {
atomic_dec(&spa->spg->spa_num);
- atomic_sub(spa->real_size, &spa->spg->size);
+ atomic64_sub(spa->real_size, &spa->spg->size);
atomic_dec(&spg_stat.spa_total_num);
- atomic_sub(spa->real_size, &spg_stat.spa_total_size);
+ atomic64_sub(spa->real_size, &spg_stat.spa_total_size);
list_del(&spa->link);
}
rb_erase(&spa->rb_node, &sp_area_root);
@@ -898,7 +928,7 @@ void sp_area_drop(struct vm_area_struct *vma)
{
struct sp_area *spa;
- if (!sp_check_vm_share_pool(vma->vm_flags))
+ if (!(vma->vm_flags & VM_SHARE_POOL))
return;
/*
@@ -979,13 +1009,25 @@ int sp_free(unsigned long addr)
} else { /* spa == NULL */
ret = -EINVAL;
if (printk_ratelimit())
- pr_err("share pool: sp_free invalid input addr %pK\n", (void *)addr);
+ pr_err("share pool: sp free invalid input addr %pK\n", (void *)addr);
goto out;
}
+ if (spa->type != SPA_TYPE_ALLOC) {
+ if (printk_ratelimit())
+ pr_err("share pool: sp free failed, addr %pK is not from sp_alloc\n",
+ (void *)addr);
+ }
+
if (!spg_valid(spa->spg))
goto drop_spa;
+ pr_notice("share pool: [sp free] caller %s(%d/%d); "
+ "group id %d addr 0x%pK, size %ld\n",
+ current->comm, current->tgid, current->pid, spa->spg->id,
+ (void *)spa->va_start, spa->real_size);
+ sp_dump_stack();
+
__sp_free(spa->spg, spa->va_start, spa_size(spa), NULL);
/* Free the memory of the backing shmem or hugetlbfs */
@@ -993,7 +1035,7 @@ int sp_free(unsigned long addr)
offset = addr - MMAP_SHARE_POOL_START;
ret = vfs_fallocate(spa_file(spa), mode, offset, spa_size(spa));
if (ret)
- pr_err("share pool: fallocate failed: %d\n", ret);
+ pr_err("share pool: sp free fallocate failed: %d\n", ret);
/* pointer stat may be invalid because of kthread buff_module_guard_work */
if (current->mm == NULL) {
@@ -1016,7 +1058,7 @@ int sp_free(unsigned long addr)
EXPORT_SYMBOL_GPL(sp_free);
/* wrapper of __do_mmap() and the caller must hold down_write(&mm->mmap_sem). */
-static unsigned long __sp_mmap(struct mm_struct *mm, struct file *file,
+static unsigned long sp_mmap(struct mm_struct *mm, struct file *file,
struct sp_area *spa, unsigned long *populate)
{
unsigned long addr = spa->va_start;
@@ -1033,30 +1075,13 @@ static unsigned long __sp_mmap(struct mm_struct *mm, struct file *file,
if (IS_ERR_VALUE(addr)) {
atomic_dec(&spa->use_count);
pr_err("share pool: do_mmap fails %ld\n", addr);
+ } else {
+ BUG_ON(addr != spa->va_start);
}
return addr;
}
-static void *sp_mmap(struct mm_struct *mm, struct file *file,
- struct sp_area *spa, unsigned long *populate)
-{
- unsigned long addr;
-
- if (!mmget_not_zero(mm))
- return ERR_PTR(-ESPGMMEXIT);
- down_write(&mm->mmap_sem);
- addr = __sp_mmap(mm, file, spa, populate);
- up_write(&mm->mmap_sem);
- mmput(mm);
-
- if (IS_ERR_VALUE(addr))
- return ERR_PTR(addr);
-
- BUG_ON(addr != spa->va_start);
- return (void *)addr;
-}
-
/**
* Allocate shared memory for all the processes in the same sp_group
* size - the size of memory to allocate
@@ -1071,12 +1096,14 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
struct sp_area *spa = NULL;
struct sp_proc_stat *stat;
unsigned long sp_addr;
- void *p_mmap, *p = ERR_PTR(-ENODEV);
+ unsigned long mmap_addr;
+ void *p = ERR_PTR(-ENODEV);
struct mm_struct *mm;
struct file *file;
unsigned long size_aligned;
int ret = 0;
struct mm_struct *tmp;
+ unsigned long mode, offset;
/* mdc scene hack */
if (enable_mdc_default_group)
@@ -1133,9 +1160,6 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
goto out;
}
- if (!sysctl_share_pool_hugepage_enable)
- sp_flags &= ~(SP_HUGEPAGE_ONLY | SP_HUGEPAGE);
-
if (sp_flags & SP_HUGEPAGE) {
file = spg->file_hugetlb;
size_aligned = ALIGN(size, PMD_SIZE);
@@ -1145,10 +1169,12 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
}
try_again:
spa = sp_alloc_area(size_aligned, sp_flags, spg, SPA_TYPE_ALLOC);
- if (!spa) {
+ if (IS_ERR(spa)) {
if (printk_ratelimit())
- pr_err("share pool: allocation failed due to alloc spa failure\n");
- p = ERR_PTR(-ENOMEM);
+ pr_err("share pool: allocation failed due to alloc spa failure "
+ "(potential no enough virtual memory when -75): %ld\n",
+ PTR_ERR(spa));
+ p = spa;
goto out;
}
sp_addr = spa->va_start;
@@ -1158,33 +1184,34 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
unsigned long populate = 0;
struct vm_area_struct *vma;
- p_mmap = sp_mmap(mm, file, spa, &populate);
- if (IS_ERR(p_mmap) && (PTR_ERR(p_mmap) != -ESPGMMEXIT)) {
- p = p_mmap;
+ if (!mmget_not_zero(mm))
+ continue;
+
+ down_write(&mm->mmap_sem);
+ mmap_addr = sp_mmap(mm, file, spa, &populate);
+ if (IS_ERR_VALUE(mmap_addr)) {
+ up_write(&mm->mmap_sem);
+ p = (void *)mmap_addr;
__sp_free(spg, sp_addr, size_aligned, mm);
- pr_err("share pool: allocation sp mmap failed, ret %ld\n", PTR_ERR(p_mmap));
- break;
+ mmput(mm);
+ pr_err("share pool: allocation sp mmap failed, ret %ld\n", mmap_addr);
+ goto out;
}
- if (PTR_ERR(p_mmap) == -ESPGMMEXIT) {
- pr_info("share pool: allocation sp mmap failed, ret -ESPGMMEXIT\n");
+ p =(void *)mmap_addr; /* success */
+ if (populate == 0) {
+ up_write(&mm->mmap_sem);
+ mmput(mm);
continue;
}
- p = p_mmap; /* success */
- if (populate == 0)
- continue;
-
- if (!mmget_not_zero(mm))
- continue;
- down_write(&mm->mmap_sem);
vma = find_vma(mm, sp_addr);
if (unlikely(!vma)) {
+ up_write(&mm->mmap_sem);
+ mmput(mm);
pr_err("share pool: allocation failed due to find %pK vma failure\n",
(void *)sp_addr);
p = ERR_PTR(-EINVAL);
- up_write(&mm->mmap_sem);
- mmput(mm);
goto out;
}
/* clean PTE_RDONLY flags or trigger SMMU event */
@@ -1216,9 +1243,17 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
}
if (printk_ratelimit())
- pr_err("share pool: allocation failed due to mm populate failed: %d\n",
- ret);
+ pr_warn("share pool: allocation failed due to mm populate failed"
+ "(potential no enough memory when -12): %d\n", ret);
p = ERR_PTR(ret);
+ __sp_area_drop(spa);
+
+ mode = FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE;
+ offset = sp_addr - MMAP_SHARE_POOL_START;
+ ret = vfs_fallocate(spa_file(spa), mode, offset, spa_size(spa));
+ if (ret)
+ pr_err("share pool: fallocate failed %d\n", ret);
+
mmput(mm);
break;
}
@@ -1235,24 +1270,20 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
mutex_unlock(&sp_mutex);
/* this will free spa if mmap failed */
- if (spa)
+ if (spa && !IS_ERR(spa))
__sp_area_drop(spa);
+ if (!IS_ERR(p)) {
+ pr_notice("share pool: [sp alloc] caller %s(%d/%d); group id %d; "
+ "return addr 0x%pK, size %ld\n",
+ current->comm, current->tgid, current->pid, spa->spg->id,
+ (void *)spa->va_start, spa->real_size);
+ sp_dump_stack();
+ }
return p;
}
EXPORT_SYMBOL_GPL(sp_alloc);
-static unsigned long __sp_remap_get_pfn(unsigned long kva)
-{
- unsigned long pfn;
- if (is_vmalloc_addr((void *)kva))
- pfn = vmalloc_to_pfn((void *)kva);
- else
- pfn = virt_to_pfn(kva);
-
- return pfn;
-}
-
/*
* return value: >0 means this is a hugepage addr
* =0 means a normal addr. <0 means an errno.
@@ -1286,7 +1317,6 @@ static unsigned long sp_remap_kva_to_vma(unsigned long kva, struct sp_area *spa,
struct vm_area_struct *vma;
unsigned long ret_addr;
unsigned long populate = 0;
- unsigned long addr, buf, offset;
struct file *file = NULL;
int ret = 0;
struct user_struct *user = NULL;
@@ -1307,7 +1337,7 @@ static unsigned long sp_remap_kva_to_vma(unsigned long kva, struct sp_area *spa,
}
down_write(&mm->mmap_sem);
- ret_addr = __sp_mmap(mm, file, spa, &populate);
+ ret_addr = sp_mmap(mm, file, spa, &populate);
if (IS_ERR_VALUE(ret_addr)) {
pr_err("share pool: k2u mmap failed %lx\n", ret_addr);
goto out;
@@ -1326,20 +1356,12 @@ static unsigned long sp_remap_kva_to_vma(unsigned long kva, struct sp_area *spa,
goto out;
}
} else {
- buf = ret_addr;
- addr = kva;
- offset = 0;
- do {
- ret = remap_pfn_range(vma, buf, __sp_remap_get_pfn(addr), PAGE_SIZE,
- __pgprot(vma->vm_page_prot.pgprot));
- if (ret) {
- ret_addr = ret;
- goto out;
- }
- offset += PAGE_SIZE;
- buf += PAGE_SIZE;
- addr += PAGE_SIZE;
- } while (offset < spa_size(spa));
+ ret = remap_vmalloc_range(vma, (void *)kva, 0);
+ if (ret) {
+ pr_err("share pool: remap vmalloc failed, ret %d\n", ret);
+ ret_addr = ret;
+ goto out;
+ }
}
out:
@@ -1380,6 +1402,13 @@ static void *sp_make_share_kva_to_task(unsigned long kva, struct sp_area *spa,
}
p = (void *)ret_addr;
+
+ task_lock(tsk);
+ if (tsk->mm == NULL)
+ p = ERR_PTR(-ESRCH);
+ else
+ spa->mm = tsk->mm;
+ task_unlock(tsk);
out:
put_task_struct(tsk);
return p;
@@ -1438,6 +1467,7 @@ void *sp_make_share_k2u(unsigned long kva, unsigned long size,
unsigned long kva_aligned;
unsigned long size_aligned;
unsigned int page_size = PAGE_SIZE;
+ enum spa_type type;
int ret;
if (sp_flags & ~SP_DVPP) {
@@ -1453,6 +1483,7 @@ void *sp_make_share_k2u(unsigned long kva, unsigned long size,
} else if (ret == 0) {
/* do nothing */
} else {
+ pr_err("it is not vmalloc address\n");
return ERR_PTR(ret);
}
/* aligned down kva is convenient for caller to start with any valid kva */
@@ -1460,24 +1491,42 @@ void *sp_make_share_k2u(unsigned long kva, unsigned long size,
size_aligned = ALIGN(kva + size, page_size) - kva_aligned;
mutex_lock(&sp_mutex);
- spg = __sp_find_spg(pid, spg_id);
+ spg = __sp_find_spg(pid, SPG_ID_DEFAULT);
if (spg == NULL) {
- spa = sp_alloc_area(size_aligned, sp_flags, NULL, SPA_TYPE_K2TASK);
- if (!spa) {
+ type = SPA_TYPE_K2TASK;
+ if (spg_id != SPG_ID_NONE && spg_id != SPG_ID_DEFAULT) {
mutex_unlock(&sp_mutex);
if (printk_ratelimit())
- pr_err("share pool: k2u failed due to alloc spa failure\n");
- return ERR_PTR(-ENOMEM);
+ pr_err("share pool: k2task invalid spg id %d\n", spg_id);
+ return ERR_PTR(-EINVAL);
+ }
+ spa = sp_alloc_area(size_aligned, sp_flags, NULL, type);
+ if (IS_ERR(spa)) {
+ mutex_unlock(&sp_mutex);
+ if (printk_ratelimit())
+ pr_err("share pool: k2u(task) failed due to alloc spa failure "
+ "(potential no enough virtual memory when -75): %ld\n",
+ PTR_ERR(spa));
+ return spa;
}
uva = sp_make_share_kva_to_task(kva_aligned, spa, pid);
mutex_unlock(&sp_mutex);
} else if (spg_valid(spg)) {
- spa = sp_alloc_area(size_aligned, sp_flags, spg, SPA_TYPE_K2SPG);
- if (!spa) {
+ type = SPA_TYPE_K2SPG;
+ if (spg_id != SPG_ID_DEFAULT && spg_id != spg->id) {
mutex_unlock(&sp_mutex);
if (printk_ratelimit())
- pr_err("share pool: k2u failed due to alloc spa failure\n");
- return ERR_PTR(-ENOMEM);
+ pr_err("share pool: k2spg invalid spg id %d\n", spg_id);
+ return ERR_PTR(-EINVAL);
+ }
+ spa = sp_alloc_area(size_aligned, sp_flags, spg, type);
+ if (IS_ERR(spa)) {
+ mutex_unlock(&sp_mutex);
+ if (printk_ratelimit())
+ pr_err("share pool: k2u(spg) failed due to alloc spa failure "
+ "(potential no enough virtual memory when -75): %ld\n",
+ PTR_ERR(spa));
+ return spa;
}
uva = sp_make_share_kva_to_spg(kva_aligned, spa, spg);
@@ -1492,6 +1541,17 @@ void *sp_make_share_k2u(unsigned long kva, unsigned long size,
uva = uva + (kva - kva_aligned);
__sp_area_drop(spa);
+
+ if (!IS_ERR(uva)) {
+ if (spg_valid(spa->spg))
+ spg_id = spa->spg->id;
+ pr_notice("share pool: [sp k2u type %d] caller %s(%d/%d); group id %d; "
+ "return addr 0x%pK size %ld\n",
+ type, current->comm, current->tgid, current->pid, spg_id,
+ (void *)spa->va_start, spa->real_size);
+ sp_dump_stack();
+ }
+
return uva;
}
EXPORT_SYMBOL_GPL(sp_make_share_k2u);
@@ -1531,7 +1591,8 @@ static int sp_pte_hole(unsigned long start, unsigned long end,
struct mm_walk *walk)
{
if (printk_ratelimit())
- pr_err("share pool: hole [%pK, %pK) appeared unexpectedly\n", (void *)start, (void *)end);
+ pr_err("share pool: hole [%pK, %pK) appeared unexpectedly\n",
+ (void *)start, (void *)end);
return -EFAULT;
}
@@ -1545,7 +1606,8 @@ static int sp_hugetlb_entry(pte_t *ptep, unsigned long hmask,
if (unlikely(!pte_present(pte))) {
if (printk_ratelimit())
- pr_err("share pool: the page of addr %pK unexpectedly not in RAM\n", (void *)addr);
+ pr_err("share pool: the page of addr %pK unexpectedly "
+ "not in RAM\n", (void *)addr);
return -EFAULT;
}
@@ -1758,6 +1820,11 @@ static int sp_unshare_uva(unsigned long uva, unsigned long size, int pid, int sp
}
}
+ if (spa->type != SPA_TYPE_K2TASK && spa->type != SPA_TYPE_K2SPG) {
+ pr_err("share pool: this spa should not be unshare here\n");
+ ret = -EINVAL;
+ goto out_drop_area;
+ }
/*
* 1. overflow actually won't happen due to an spa must be valid.
* 2. we must unshare [spa->va_start, spa->va_start + spa->real_size) completely
@@ -1771,32 +1838,57 @@ static int sp_unshare_uva(unsigned long uva, unsigned long size, int pid, int sp
if (size_aligned < ALIGN(size, page_size)) {
ret = -EINVAL;
if (printk_ratelimit())
- pr_err("share pool: unshare uva failed due to invalid parameter size %lu\n", size);
+ pr_err("share pool: unshare uva failed due to invalid parameter size %lu\n",
+ size);
goto out_drop_area;
}
- if (spg_id == SPG_ID_NONE) {
- if (spa->spg) {
- ret = -EINVAL;
+ if (spa->type == SPA_TYPE_K2TASK) {
+ if (spg_id != SPG_ID_NONE && spg_id != SPG_ID_DEFAULT) {
if (printk_ratelimit())
- pr_err("share pool: unshare uva failed, SPG_ID_NONE is invalid\n");
+ pr_err("share pool: unshare uva(to task) failed, "
+ "invalid spg id %d\n", spg_id);
+ ret = -EINVAL;
goto out_drop_area;
}
rcu_read_lock();
tsk = find_task_by_vpid(pid);
- if (!tsk || (tsk->flags & PF_EXITING))
- ret = -ESRCH;
- else
- get_task_struct(tsk);
-
+ if (!tsk || !tsk->mm || (tsk->flags & PF_EXITING)) {
+ if (printk_ratelimit())
+ pr_info("share pool: no need to unshare uva(to task), "
+ "target process not found or do_exit\n");
+ ret = -EINVAL;
+ rcu_read_unlock();
+ sp_dump_stack();
+ goto out_drop_area;
+ }
+ get_task_struct(tsk);
rcu_read_unlock();
- if (ret)
+
+ if (!spa->mm ||
+ (current->mm && (current->mm != tsk->mm || tsk->mm != spa->mm))) {
+ if (printk_ratelimit())
+ pr_err("share pool: unshare uva(to task) failed, "
+ "wrong pid or invalid spa\n");
+ ret = -EINVAL;
goto out_drop_area;
+ }
+
+ if (spa->mm != tsk->mm) {
+ if (printk_ratelimit())
+ pr_err("share pool: unshare uva(to task) failed, "
+ "spa not belong to the task\n");
+ ret = -EINVAL;
+ goto out_drop_area;
+ }
if (!mmget_not_zero(tsk->mm)) {
put_task_struct(tsk);
- pr_info("share pool: no need to unshare uva, target process is exiting\n");
+ if (printk_ratelimit())
+ pr_info("share pool: no need to unshare uva(to task), "
+ "target process mm is not existing\n");
+ sp_dump_stack();
goto out_drop_area;
}
down_write(&tsk->mm->mmap_sem);
@@ -1809,32 +1901,51 @@ static int sp_unshare_uva(unsigned long uva, unsigned long size, int pid, int sp
(void *)uva_aligned);
}
put_task_struct(tsk);
- } else {
- /*
- * k2u to task, then unshare_uva(..., spg_id) is invalid due to potential
- * spa memory leak.
- */
- if (!spa->spg) {
+ } else if (spa->type == SPA_TYPE_K2SPG) {
+ if (!spa->spg || spg_id == SPG_ID_NONE) {
+ if (printk_ratelimit())
+ pr_err("share pool: unshare uva(to group) failed, "
+ "invalid spg id %d\n", spg_id);
ret = -EINVAL;
+ goto out_drop_area;
+ }
+
+ spg = __sp_find_spg(pid, SPG_ID_DEFAULT);
+ if (!spg_valid(spg)) {
if (printk_ratelimit())
- pr_err("share pool: unshare uva failed, sp group id %d is invalid\n", spg_id);
+ pr_err("share pool: unshare uva(to group) invalid pid, "
+ "process not in sp group or group is dead\n");
+ ret = -EINVAL;
goto out_drop_area;
}
- spg = __sp_find_spg(pid, spg_id);
- if (spg_valid(spg)) {
- __sp_free(spg, uva_aligned, size_aligned, NULL);
- } else {
- if (!spg) {
- if (printk_ratelimit())
- pr_err("share pool: unshare uva failed, doesn't belong to group %d\n",
- spg_id);
- ret = -EINVAL;
- goto out_drop_area;
- } else {
- pr_info("share pool: no need to unshare uva, target process is exiting\n");
- }
+ if (spa->spg != spg) {
+ if (printk_ratelimit())
+ pr_err("share pool: unshare uva(to group) failed, "
+ "spa not belong to the group\n");
+ ret = -EINVAL;
+ goto out_drop_area;
}
+
+ if (current->mm && current->mm->sp_group != spg) {
+ if (printk_ratelimit())
+ pr_err("share pool: unshare uva(to group) failed, "
+ "caller process doesn't belong to target group\n");
+ ret = -EINVAL;
+ goto out_drop_area;
+ }
+
+ __sp_free(spg, uva_aligned, size_aligned, NULL);
+ }
+
+ if (!ret) {
+ if (spg_valid(spa->spg))
+ spg_id = spa->spg->id;
+ pr_notice("share pool: [sp unshare uva type %d] caller %s(%d/%d); "
+ "group id %d addr 0x%pK size %ld\n",
+ spa->type, current->comm, current->tgid, current->pid,
+ spg_id, (void *)spa->va_start, spa->real_size);
+ sp_dump_stack();
}
out_drop_area:
@@ -1864,7 +1975,8 @@ static int sp_unshare_kva(unsigned long kva, unsigned long size)
step = PAGE_SIZE;
is_hugepage = false;
} else {
- pr_err("share pool: check vmap hugepage failed, ret %d\n", ret);
+ if (printk_ratelimit())
+ pr_err("share pool: check vmap hugepage failed, ret %d\n", ret);
return -EINVAL;
}
@@ -1882,7 +1994,8 @@ static int sp_unshare_kva(unsigned long kva, unsigned long size)
if (page)
put_page(page);
else
- pr_err("share pool: vmalloc to hugepage failed\n");
+ pr_err("share pool: vmalloc %pK to page/hugepage failed\n",
+ (void *)addr);
}
vunmap((void *)kva_aligned);
@@ -1944,7 +2057,7 @@ int sp_walk_page_range(unsigned long uva, unsigned long size,
get_task_struct(tsk);
if (!mmget_not_zero(tsk->mm)) {
put_task_struct(tsk);
- return -EINVAL;
+ return -ESRCH;
}
down_write(&tsk->mm->mmap_sem);
ret = __sp_walk_page_range(uva, size, tsk, sp_walk_data);
@@ -1973,46 +2086,6 @@ void sp_walk_page_free(struct sp_walk_data *sp_walk_data)
}
EXPORT_SYMBOL_GPL(sp_walk_page_free);
-/**
- * Walk the mm_struct of processes in the specified sp_group
- * and call CALLBACK once for each mm_struct.
- * @spg_id: the ID of the specified sp_group
- * @data: the param for callback function
- * @func: caller specific callback function
- *
- * Return -errno if fail.
- */
-int sp_group_walk(int spg_id, void *data, int (*func)(struct mm_struct *mm, void *))
-{
- struct sp_group *spg;
- int ret = -ESRCH;
-
- if (!func) {
- if (printk_ratelimit())
- pr_err("share pool: null func pointer\n");
- return -EINVAL;
- }
-
- mutex_lock(&sp_mutex);
- spg = idr_find(&sp_group_idr, spg_id);
- if (spg_valid(spg)) {
- struct mm_struct *mm;
- struct mm_struct *tmp;
- list_for_each_entry_safe(mm, tmp, &spg->procs, sp_node) {
- if (func) {
- ret = func(mm, data);
- if (ret)
- goto out_unlock;
- }
- }
- }
-out_unlock:
- mutex_unlock(&sp_mutex);
-
- return ret;
-}
-EXPORT_SYMBOL_GPL(sp_group_walk);
-
int sp_register_notifier(struct notifier_block *nb)
{
return blocking_notifier_chain_register(&sp_notifier_chain, nb);
@@ -2039,7 +2112,7 @@ bool sp_config_dvpp_range(size_t start, size_t size, int device_id, int pid)
struct sp_group *spg;
if (device_id < 0 || device_id >= MAX_DEVID || pid < 0 || size <= 0 ||
- size > MMAP_SHARE_POOL_16G_SIZE)
+ size> MMAP_SHARE_POOL_16G_SIZE)
return false;
mutex_lock(&sp_mutex);
@@ -2061,11 +2134,9 @@ EXPORT_SYMBOL_GPL(sp_config_dvpp_range);
/* Check whether the address belongs to the share pool. */
bool is_sharepool_addr(unsigned long addr)
{
- if (host_svm_sp_enable == false)
- return (addr >= MMAP_SHARE_POOL_START) &&
- addr < (MMAP_SHARE_POOL_16G_START + MMAP_SHARE_POOL_16G_SIZE);
-
- return addr >= MMAP_SHARE_POOL_START && addr < MMAP_SHARE_POOL_END;
+ if (host_svm_sp_enable == false)
+ return addr >= MMAP_SHARE_POOL_START && addr < (MMAP_SHARE_POOL_16G_START + MMAP_SHARE_POOL_16G_SIZE);
+ return addr >= MMAP_SHARE_POOL_START && addr < MMAP_SHARE_POOL_END;
}
EXPORT_SYMBOL_GPL(is_sharepool_addr);
@@ -2109,7 +2180,7 @@ static int idr_proc_stat_cb(int id, void *p, void *data)
mutex_lock(&sp_mutex);
spg = __sp_find_spg(id, SPG_ID_DEFAULT);
- if (spg) {
+ if (spg_valid(spg)) {
seq_printf(seq, "%-12d %-10d %-18ld\n",
id, spg->id, byte2kb(stat->amount));
}
@@ -2130,8 +2201,7 @@ static int proc_stat_show(struct seq_file *seq, void *offset)
return 0;
}
-static void rb_spa_stat_show(struct seq_file *seq)
-{
+static void rb_spa_stat_show(struct seq_file *seq) {
struct rb_node *node;
struct sp_area *spa;
@@ -2215,8 +2285,8 @@ static int idr_spg_stat_cb(int id, void *p, void *data)
struct sp_group *spg = p;
struct seq_file *seq = data;
- seq_printf(seq, "Group %-10d size: %13d KB, spa num: %d.\n",
- id, byte2kb(atomic_read(&spg->size)),
+ seq_printf(seq, "Group %-10d size: %13ld KB, spa num: %d.\n",
+ id, byte2kb(atomic64_read(&spg->size)),
atomic_read(&spg->spa_num));
return 0;
@@ -2227,8 +2297,8 @@ static void spg_overview_show(struct seq_file *seq)
mutex_lock(&sp_mutex);
idr_for_each(&sp_group_idr, idr_spg_stat_cb, seq);
mutex_unlock(&sp_mutex);
- seq_printf(seq, "Share pool total size: %13d KB, spa total num: %d.\n\n",
- byte2kb(atomic_read(&spg_stat.spa_total_size)),
+ seq_printf(seq, "Share pool total size: %13ld KB, spa total num: %d.\n\n",
+ byte2kb(atomic64_read(&spg_stat.spa_total_size)),
atomic_read(&spg_stat.spa_total_num));
}
@@ -2255,7 +2325,6 @@ void __init proc_sharepool_init(void)
proc_create_single_data("sharepool/spa_stat", 0, NULL, spa_stat_show, NULL);
}
-
struct page *sp_alloc_pages(struct vm_struct *area, gfp_t mask,
unsigned int page_order, int node)
{
--
2.25.1
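The debug mode added by this patch is an integer flag exposed through sysctl that gates extra diagnostics such as dump_stack(). A stand-alone sketch of the same gating idea, with hypothetical names and user-space stand-ins for the kernel helpers:

#include <execinfo.h>
#include <stdio.h>
#include <unistd.h>

static int debug_mode;                    /* stands in for sysctl_sp_debug_mode */

static void maybe_dump_stack(void)        /* stands in for sp_dump_stack() */
{
        void *frames[16];
        int n;

        if (!debug_mode)
                return;                   /* cheap no-op when debugging is off */
        n = backtrace(frames, 16);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
}

int main(void)
{
        debug_mode = 1;                   /* admin flips the knob, e.g. via sysctl -w */
        maybe_dump_stack();               /* gated calls now print a backtrace */
        return 0;
}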
From: Len Brown <len.brown(a)intel.com>
stable inclusion
from linux-4.19.157
commit 900281e167f45e0c0e5df6e59fa00334b5e38133
CVE: CVE-2020-8694
--------------------------------
commit 949dd0104c496fa7c14991a23c03c62e44637e71 upstream.
Remove non-privileged user access to power data contained in
/sys/class/powercap/intel-rapl*/*/energy_uj
Non-privileged users currently have read access to power data and can
use this data to form a security attack. Some privileged
drivers/applications need read access to this data, but don't expose it
to non-privileged users.
For example, thermald uses this data to ensure that power management
works correctly. Thus removing non-privileged access is preferred over
completely disabling this power reporting capability with
CONFIG_INTEL_RAPL=n.
Fixes: 95677a9a3847 ("PowerCap: Fix mode for energy counter")
Signed-off-by: Len Brown <len.brown(a)intel.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/powercap/powercap_sys.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/powercap/powercap_sys.c b/drivers/powercap/powercap_sys.c
index 9e2f274bd44f..60c8375c3c81 100644
--- a/drivers/powercap/powercap_sys.c
+++ b/drivers/powercap/powercap_sys.c
@@ -379,9 +379,9 @@ static void create_power_zone_common_attributes(
&dev_attr_max_energy_range_uj.attr;
if (power_zone->ops->get_energy_uj) {
if (power_zone->ops->reset_energy_uj)
- dev_attr_energy_uj.attr.mode = S_IWUSR | S_IRUGO;
+ dev_attr_energy_uj.attr.mode = S_IWUSR | S_IRUSR;
else
- dev_attr_energy_uj.attr.mode = S_IRUGO;
+ dev_attr_energy_uj.attr.mode = S_IRUSR;
power_zone->zone_dev_attrs[count++] =
&dev_attr_energy_uj.attr;
}
--
2.25.1
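The change above only tightens the sysfs permission bits on energy_uj: S_IRUGO (0444, readable by every user) becomes S_IRUSR (0400, readable only by the owner, i.e. root). A tiny sketch that prints the two octal values, assuming the usual macro definitions (S_IRUGO itself is a kernel shorthand, so it is defined locally here):

#include <stdio.h>
#include <sys/stat.h>                     /* S_IRUSR, S_IRGRP, S_IROTH */

#define S_IRUGO (S_IRUSR | S_IRGRP | S_IROTH)   /* kernel shorthand, 0444 */

int main(void)
{
        printf("old mode: %04o (any user could read the energy counter)\n", S_IRUGO);
        printf("new mode: %04o (only root can read it)\n", S_IRUSR);
        return 0;
}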
openEuler Summit has already received a great deal of support from everyone.
Talk proposals, supporting organizations, SIGs, and demos are still coming in.
But today is the deadline for submitting talks and demos:
submissions close at 23:59 on November 20.
Don't miss the chance to show your technical strength at the first openEuler Summit, the fastest-growing and most vibrant one around.
Hurry and click the links below to apply.
Application links:
Call for Speaker<https://shimo.im/forms/XtCTP9jcrXKgjytD/fill>
Call for Sponsor<https://shimo.im/forms/VWWtgLsVHzovmbeH/fill>
Call for SIG<https://shimo.im/forms/KSMKHGPHIAsjoNHP/fill>
Call for Demo<https://shimo.im/forms/lMTsArbYcy4hd2dY/fill>
Summit page: <https://openeuler.org/zh/interaction/summit-list/>
At openEuler Summit,
everyone is an organizer
and everyone is a participant,
which matches the very core of the open source spirit.
The conference agenda is filling out day by day,
packed with substance and plenty of fun.
The Hello openEuler developer experience zone
is seriously cool,
because we came up with
a full-scenario showcase from Raspberry Pi to the data center,
plus a few other
"mischievous" demos
that are absolutely fun.
The TC, SIGs, and Maintainers are ready
to meet you face to face:
TC working meetings held on site,
SIG working meetings held on site,
new SIG applications accepted on site,
and Maintainers on site sharing and learning together.
The list of supporting organizations, friendly foundations, and community media keeps growing.

From: Dmitry Torokhov <dmitry.torokhov(a)gmail.com>
mainline inclusion
from mainline-v5.10-rc5
commit 77e70d351db7de07a46ac49b87a6c3c7a60fca7e
category: bugfix
bugzilla: NA
CVE: CVE-2020-25669
--------------------------------
We need to make sure we cancel the reinit work before we tear down the
driver structures.
Reported-by: Bodong Zhao <nopitydays(a)gmail.com>
Tested-by: Bodong Zhao <nopitydays(a)gmail.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov(a)gmail.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/input/keyboard/sunkbd.c | 41 ++++++++++++++++++++++++++-------
1 file changed, 33 insertions(+), 8 deletions(-)
diff --git a/drivers/input/keyboard/sunkbd.c b/drivers/input/keyboard/sunkbd.c
index ad5d7f94f95a..1c7aa86c92ab 100644
--- a/drivers/input/keyboard/sunkbd.c
+++ b/drivers/input/keyboard/sunkbd.c
@@ -111,7 +111,8 @@ static irqreturn_t sunkbd_interrupt(struct serio *serio,
switch (data) {
case SUNKBD_RET_RESET:
- schedule_work(&sunkbd->tq);
+ if (sunkbd->enabled)
+ schedule_work(&sunkbd->tq);
sunkbd->reset = -1;
break;
@@ -212,16 +213,12 @@ static int sunkbd_initialize(struct sunkbd *sunkbd)
}
/*
- * sunkbd_reinit() sets leds and beeps to a state the computer remembers they
- * were in.
+ * sunkbd_set_leds_beeps() sets leds and beeps to a state the computer remembers
+ * they were in.
*/
-static void sunkbd_reinit(struct work_struct *work)
+static void sunkbd_set_leds_beeps(struct sunkbd *sunkbd)
{
- struct sunkbd *sunkbd = container_of(work, struct sunkbd, tq);
-
- wait_event_interruptible_timeout(sunkbd->wait, sunkbd->reset >= 0, HZ);
-
serio_write(sunkbd->serio, SUNKBD_CMD_SETLED);
serio_write(sunkbd->serio,
(!!test_bit(LED_CAPSL, sunkbd->dev->led) << 3) |
@@ -234,11 +231,39 @@ static void sunkbd_reinit(struct work_struct *work)
SUNKBD_CMD_BELLOFF - !!test_bit(SND_BELL, sunkbd->dev->snd));
}
+
+/*
+ * sunkbd_reinit() wait for the keyboard reset to complete and restores state
+ * of leds and beeps.
+ */
+
+static void sunkbd_reinit(struct work_struct *work)
+{
+ struct sunkbd *sunkbd = container_of(work, struct sunkbd, tq);
+
+ /*
+ * It is OK that we check sunkbd->enabled without pausing serio,
+ * as we only want to catch true->false transition that will
+ * happen once and we will be woken up for it.
+ */
+ wait_event_interruptible_timeout(sunkbd->wait,
+ sunkbd->reset >= 0 || !sunkbd->enabled,
+ HZ);
+
+ if (sunkbd->reset >= 0 && sunkbd->enabled)
+ sunkbd_set_leds_beeps(sunkbd);
+}
+
static void sunkbd_enable(struct sunkbd *sunkbd, bool enable)
{
serio_pause_rx(sunkbd->serio);
sunkbd->enabled = enable;
serio_continue_rx(sunkbd->serio);
+
+ if (!enable) {
+ wake_up_interruptible(&sunkbd->wait);
+ cancel_work_sync(&sunkbd->tq);
+ }
}
/*
--
2.25.1
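The crux of the fix above is teardown ordering: first make sure no new work can be scheduled, then wait for any work already in flight, and only then free the driver structures. A user-space sketch of that shape, with hypothetical names in place of schedule_work()/cancel_work_sync():

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool enabled = true;
static bool worker_running;
static pthread_t worker;

static void *reinit_work(void *arg)       /* like sunkbd_reinit() */
{
        puts("reinit work runs while the device is still alive");
        return NULL;
}

static void on_reset_event(void)          /* like the interrupt handler */
{
        pthread_mutex_lock(&lock);
        if (enabled && !worker_running &&
            pthread_create(&worker, NULL, reinit_work, NULL) == 0)
                worker_running = true;    /* only schedule work while enabled */
        pthread_mutex_unlock(&lock);
}

static void disable_and_teardown(void)    /* like sunkbd_enable(sunkbd, false) */
{
        bool running;

        pthread_mutex_lock(&lock);
        enabled = false;                  /* no new work after this point */
        running = worker_running;
        pthread_mutex_unlock(&lock);
        if (running)
                pthread_join(worker, NULL);   /* like cancel_work_sync() */
        /* only now is it safe to free the driver structures */
}

int main(void)
{
        on_reset_event();
        disable_and_teardown();
        return 0;
}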

From: Ming Lei <ming.lei(a)redhat.com>
mainline inclusion
from mainline-v5.10-rc2
commit b40813ddcd6b
category: bugfix
bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1891363
CVE: NA
backport: openEuler-20.09
Here is the testcase:
1. rbd create --size 2G rbdpool/foo
2. rbd-nbd map rbdpool/foo
3. mkfs.ext4 /dev/nbd0
4. mount /dev/nbd0 /mnt
5. rbd resize --size 4G rbdpool/foo
6. ls /mnt
ls will get stuck here forever.
--------------------------------
[ Upstream commit b40813ddcd6bf9f01d020804e4cb8febc480b9e4 ]
A mounted NBD device can be resized; one use case is rbd-nbd.
Fix the issue by setting up the default block size once and no longer touching
it in nbd_size_update(). This usage is aligned with loop, which has the same
use case.
Cc: stable(a)vger.kernel.org
Fixes: c8a83a6b54d0 ("nbd: Use set_blocksize() to set device blocksize")
Reported-by: lining <lining2020x(a)163.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Cc: Josef Bacik <josef(a)toxicpanda.com>
Cc: Jan Kara <jack(a)suse.cz>
Tested-by: lining <lining2020x(a)163.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: lining <lining_yewu(a)cmss.chinamobile.com>
---
drivers/block/nbd.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index cdf62fb94fb15..9a0fb2d52a76c 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -268,7 +268,7 @@ static void nbd_size_clear(struct nbd_device *nbd)
}
}
-static void nbd_size_update(struct nbd_device *nbd)
+static void nbd_size_update(struct nbd_device *nbd, bool start)
{
struct nbd_config *config = nbd->config;
struct block_device *bdev = bdget_disk(nbd->disk, 0);
@@ -279,7 +279,8 @@ static void nbd_size_update(struct nbd_device *nbd)
if (bdev) {
if (bdev->bd_disk) {
bd_set_size(bdev, config->bytesize);
- set_blocksize(bdev, config->blksize);
+ if (start)
+ set_blocksize(bdev, config->blksize);
} else
bdev->bd_invalidated = 1;
bdput(bdev);
@@ -294,7 +295,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
config->blksize = blocksize;
config->bytesize = blocksize * nr_blocks;
if (nbd->task_recv != NULL)
- nbd_size_update(nbd);
+ nbd_size_update(nbd, false);
}
static void nbd_complete_rq(struct request *req)
@@ -1231,7 +1232,7 @@ static int nbd_start_device(struct nbd_device *nbd)
args->index = i;
queue_work(nbd->recv_workq, &args->work);
}
- nbd_size_update(nbd);
+ nbd_size_update(nbd, true);
return error;
}
--
2.27.0
3
2
*** BLURB HERE ***
Greg Kroah-Hartman (1):
Linux 4.19.157
Len Brown (1):
powercap: restrict energy meter to root access
Makefile | 2 +-
drivers/powercap/powercap_sys.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
--
2.25.1
1
2
Alan Stern (1):
USB: Add NO_LPM quirk for Kingston flash drive
Alexander Aring (1):
gfs2: Wake up when sd_glock_disposal becomes zero
Artem Lapkin (1):
ALSA: usb-audio: add usb vendor id as DSD-capable for Khadas devices
Ben Hutchings (1):
Revert "btrfs: flush write bio if we loop in extent_write_cache_pages"
Chris Wilson (1):
drm/i915: Break up error capture compression loops with cond_resched()
Claire Chang (1):
serial: 8250_mtk: Fix uart_get_baud_rate warning
Claudiu Manoil (2):
gianfar: Replace skb_realloc_headroom with skb_cow_head for PTP
gianfar: Account for Tx PTP timestamp in the skb headroom
Clément Péron (1):
ARM: dts: sun4i-a10: fix cpu_alert temperature
Daniel Vetter (1):
vt: Disable KD_FONT_OP_COPY
Daniele Palmas (3):
net: usb: qmi_wwan: add Telit LE910Cx 0x1230 composition
USB: serial: option: add LE910Cx compositions 0x1203, 0x1230, 0x1231
USB: serial: option: add Telit FN980 composition 0x1055
Eddy Wu (1):
fork: fix copy_process(CLONE_PARENT) race with the exiting
->real_parent
Filipe Manana (1):
Btrfs: fix unwritten extent buffers and hangs on future writeback
attempts
Gabriel Krisman Bertazi (2):
blk-cgroup: Fix memleak on error path
blk-cgroup: Pre-allocate tree node on blkg_conf_prep
Geoffrey D. Bennett (2):
ALSA: usb-audio: Add implicit feedback quirk for Qu-16
ALSA: usb-audio: Add implicit feedback quirk for MODX
Greg Kroah-Hartman (1):
Linux 4.19.156
Guenter Roeck (1):
tools: perf: Fix build error in v4.19.y
Hoang Huu Le (1):
tipc: fix use-after-free in tipc_bcast_get_mode
Hoegeun Kwon (1):
drm/vc4: drv: Add error handding for bind
Jason Gunthorpe (1):
mm: always have io_remap_pfn_range() set pgprot_decrypted()
Jeff Vander Stoep (1):
vsock: use ns_capable_noaudit() on socket create
Johan Hovold (1):
USB: serial: cyberjack: fix write-URB completion race
Josef Bacik (1):
btrfs: flush write bio if we loop in extent_write_cache_pages
Kairui Song (1):
x86/kexec: Use up-to-dated screen_info copy to fill boot params
Keith Winstein (1):
ALSA: usb-audio: Add implicit feedback quirk for Zoom UAC-2
Lee Jones (1):
Fonts: Replace discarded const qualifier
Macpaul Lin (1):
usb: mtu3: fix panic in mtu3_gadget_stop()
Mark Deneen (1):
cadence: force nonlinear buffers to be cloned
Mike Galbraith (1):
futex: Handle transient "ownerless" rtmutex state correctly
Ming Lei (1):
scsi: core: Don't start concurrent async scan on same host
Oleg Nesterov (1):
ptrace: fix task_join_group_stop() for the case when current is traced
Pali Rohár (1):
arm64: dts: marvell: espressobin: Add ethernet switch aliases
Petr Malat (1):
sctp: Fix COMM_LOST/CANT_STR_ASSOC err reporting on big-endian
platforms
Qinglang Miao (1):
serial: txx9: add missing platform_driver_unregister() on error in
serial_txx9_init
Qiujun Huang (1):
tracing: Fix out of bounds write in get_trace_buf
Qu Wenruo (11):
btrfs: extent_io: Kill the forward declaration of flush_write_bio
btrfs: extent_io: Move the BUG_ON() in flush_write_bio() one level up
btrfs: extent_io: Handle errors better in btree_write_cache_pages()
btrfs: extent_io: add proper error handling to
lock_extent_buffer_for_io()
btrfs: Move btrfs_check_chunk_valid() to tree-check.[ch] and export it
btrfs: tree-checker: Make chunk item checker messages more readable
btrfs: tree-checker: Make btrfs_check_chunk_valid() return EUCLEAN
instead of EIO
btrfs: tree-checker: Check chunk item at tree block read time
btrfs: tree-checker: Verify dev item
btrfs: tree-checker: Fix wrong check on max devid
btrfs: tree-checker: fix the error message for transid error
Rafael J. Wysocki (1):
PM: runtime: Resume the device earlier in __device_release_driver()
Shijie Luo (1):
mm: mempolicy: fix potential pte_unmap_unlock pte error
Steven Rostedt (VMware) (3):
ring-buffer: Fix recursion protection transitions between interrupt
context
ftrace: Fix recursion check for NMI test
ftrace: Handle tracing when switching between context
Vasily Gorbik (1):
lib/crc32test: remove extra local_irq_disable/enable
Vinay Kumar Yadav (2):
chelsio/chtls: fix memory leaks caused by a race
chelsio/chtls: fix always leaking ctrl_skb
Vincent Whitchurch (1):
of: Fix reserved-memory overlap detection
Vineet Gupta (2):
ARC: stack unwinding: avoid indefinite looping
Revert "ARC: entry: fix potential EFA clobber when TIF_SYSCALL_TRACE"
Xiaofei Shen (1):
net: dsa: read mac address from DT for slave device
YueHaibing (1):
sfp: Fix error handing in sfp_probe()
Zhang Qilong (1):
ACPI: NFIT: Fix comparison to '-ENXIO'
Ziyi Cao (1):
USB: serial: option: add Quectel EC200T module support
Zqiang (1):
kthread_worker: prevent queuing delayed work from timer_fn when it is
being canceled
Makefile | 2 +-
arch/arc/kernel/entry.S | 16 +-
arch/arc/kernel/stacktrace.c | 7 +-
arch/arm/boot/dts/sun4i-a10.dtsi | 2 +-
.../dts/marvell/armada-3720-espressobin.dts | 12 +-
arch/x86/kernel/kexec-bzimage64.c | 3 +-
block/blk-cgroup.c | 15 +-
drivers/acpi/nfit/core.c | 2 +-
drivers/base/dd.c | 7 +-
drivers/crypto/chelsio/chtls/chtls_cm.c | 2 +-
drivers/crypto/chelsio/chtls/chtls_hw.c | 3 +
drivers/gpu/drm/i915/i915_gpu_error.c | 3 +
drivers/gpu/drm/vc4/vc4_drv.c | 1 +
drivers/net/ethernet/cadence/macb_main.c | 3 +-
drivers/net/ethernet/freescale/gianfar.c | 14 +-
drivers/net/phy/sfp.c | 3 +-
drivers/net/usb/qmi_wwan.c | 1 +
drivers/of/of_reserved_mem.c | 13 +-
drivers/scsi/scsi_scan.c | 7 +-
drivers/tty/serial/8250/8250_mtk.c | 2 +-
drivers/tty/serial/serial_txx9.c | 3 +
drivers/tty/vt/vt.c | 24 +-
drivers/usb/core/quirks.c | 3 +
drivers/usb/mtu3/mtu3_gadget.c | 1 +
drivers/usb/serial/cyberjack.c | 7 +-
drivers/usb/serial/option.c | 10 +
fs/btrfs/extent_io.c | 171 +++++++++----
fs/btrfs/tree-checker.c | 236 +++++++++++++++++-
fs/btrfs/tree-checker.h | 4 +
fs/btrfs/volumes.c | 123 +--------
fs/btrfs/volumes.h | 9 +
fs/gfs2/glock.c | 3 +-
include/asm-generic/pgtable.h | 4 -
include/linux/mm.h | 9 +
include/net/dsa.h | 1 +
kernel/fork.c | 10 +-
kernel/futex.c | 16 +-
kernel/kthread.c | 3 +-
kernel/signal.c | 19 +-
kernel/trace/ring_buffer.c | 58 ++++-
kernel/trace/trace.c | 2 +-
kernel/trace/trace.h | 26 +-
kernel/trace/trace_selftest.c | 9 +-
lib/crc32test.c | 4 -
lib/fonts/font_10x18.c | 2 +-
lib/fonts/font_6x10.c | 2 +-
lib/fonts/font_6x11.c | 2 +-
lib/fonts/font_7x14.c | 2 +-
lib/fonts/font_8x16.c | 2 +-
lib/fonts/font_8x8.c | 2 +-
lib/fonts/font_acorn_8x8.c | 2 +-
lib/fonts/font_mini_4x6.c | 2 +-
lib/fonts/font_pearl_8x8.c | 2 +-
lib/fonts/font_sun12x22.c | 2 +-
lib/fonts/font_sun8x16.c | 2 +-
mm/mempolicy.c | 6 +-
net/dsa/dsa2.c | 1 +
net/dsa/slave.c | 5 +-
net/sctp/sm_sideeffect.c | 4 +-
net/tipc/core.c | 5 +
net/vmw_vsock/af_vsock.c | 2 +-
sound/usb/pcm.c | 6 +
sound/usb/quirks.c | 1 +
tools/perf/util/util.h | 2 +-
64 files changed, 634 insertions(+), 293 deletions(-)
--
2.25.1
1
66
From: Guenter Roeck <linux(a)roeck-us.net>
perf may fail to build in v4.19.y with the following error.
util/evsel.c: In function ‘perf_evsel__exit’:
util/util.h:25:28: error:
passing argument 1 of ‘free’ discards ‘const’ qualifier from pointer target type
This is observed (at least) with gcc v6.5.0. The underlying problem is
the following statement.
zfree(&evsel->pmu_name);
evsel->pmu_name is declared 'const *'. zfree in turn is defined as
#define zfree(ptr) ({ free(*ptr); *ptr = NULL; })
and thus passes the const * to free(). The problem is not seen
in the upstream kernel since zfree() has been rewritten there.
The problem has been introduced into v4.19.y with the backport of upstream
commit d4953f7ef1a2 (perf parse-events: Fix 3 use after frees found with
clang ASAN).
One possible fix for this problem would be to not declare pmu_name
as const. This patch instead casts the parameter of zfree()
to void *, following the upstream kernel, which
does the same since commit 7f7c536f23e6a ("tools lib: Adopt
zalloc()/zfree() from tools/perf").
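The same warning can be reproduced outside the kernel tree with a few lines
of user-space C. The struct and member names below are made up for
illustration only and are not part of the patch:

#include <stdlib.h>
#include <string.h>

#define zfree_old(ptr) ({ free(*ptr); *ptr = NULL; })
#define zfree_new(ptr) ({ free((void *)*ptr); *ptr = NULL; })

struct evsel_like {
        const char *pmu_name;   /* declared const, like evsel->pmu_name */
};

int main(void)
{
        struct evsel_like e = { .pmu_name = strdup("cpu") };

        /* zfree_old(&e.pmu_name); -- with -Werror, gcc rejects this:
         * passing 'const char *' to free() discards the qualifier */
        zfree_new(&e.pmu_name); /* the (void *) cast avoids the warning */
        return 0;
}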
Fixes: a0100a363098 ("perf parse-events: Fix 3 use after frees found with clang ASAN")
Signed-off-by: Guenter Roeck <linux(a)roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
tools/perf/util/util.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/util/util.h b/tools/perf/util/util.h
index dc58254a2b69..8c01b2cfdb1a 100644
--- a/tools/perf/util/util.h
+++ b/tools/perf/util/util.h
@@ -22,7 +22,7 @@ static inline void *zalloc(size_t size)
return calloc(1, size);
}
-#define zfree(ptr) ({ free(*ptr); *ptr = NULL; })
+#define zfree(ptr) ({ free((void *)*ptr); *ptr = NULL; })
struct dirent;
struct nsinfo;
--
2.25.1
1
0

18 Nov '20
From: Israel Rukshin <israelr(a)mellanox.com>
mainline inclusion
from mainline-v5.7-rc1
commit b780d7415aacec855e2f2370cbf98f918b224903
category: bugfix
bugzilla: NA
CVE: NA
Link: https://gitee.com/openeuler/kernel/issues/I1WGZE
--------------------------------
If nvme_sysfs_delete() is called by the user before the ctrl reference
count is taken, the ctrl may be freed while it is still being created,
causing the bug. Take the reference as soon as the controller becomes
externally visible, which happens in cdev_device_add() called from
nvme_init_ctrl(). Also take the reference count at the core layer instead
of taking it in each transport separately.
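The ordering matters: once cdev_device_add() succeeds, the node is visible
to user space and a delete can race with the rest of initialization, so the
reference has to be held before that call. A minimal sketch of this
take-a-reference-before-visibility pattern, using made-up obj_* names
rather than the actual nvme code:

#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct obj {
        struct kref ref;
        struct cdev cdev;
        struct device dev;
};

static void obj_release(struct kref *ref)
{
        kfree(container_of(ref, struct obj, ref));
}

static const struct file_operations obj_fops; /* open/read/etc. omitted */

/* o->ref is assumed to have been kref_init()ed by the caller */
static int obj_make_visible(struct obj *o)
{
        int ret;

        kref_get(&o->ref);              /* reference owned by the visible cdev */
        cdev_init(&o->cdev, &obj_fops);
        ret = cdev_device_add(&o->cdev, &o->dev);
        if (ret)
                kref_put(&o->ref, obj_release); /* undo on failure */
        return ret;
}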
Signed-off-by: Israel Rukshin <israelr(a)mellanox.com>
Reviewed-by: Max Gurtovoy <maxg(a)mellanox.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Keith Busch <kbusch(a)kernel.org>
Conflicts:
drivers/nvme/host/tcp.c
[No code about TCP in current version.]
Reviewed-by: Chao Leng <lengchao(a)huawei.com>
Reviewed-by: Jike Cheng <chengjike.cheng(a)huawei.com>
Signed-off-by: Lijie <lijie34(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Acked-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/nvme/host/core.c | 2 ++
drivers/nvme/host/fc.c | 4 +---
drivers/nvme/host/pci.c | 1 -
drivers/nvme/host/rdma.c | 3 +--
drivers/nvme/target/loop.c | 3 +--
5 files changed, 5 insertions(+), 8 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index b8446637b3c2..b7a40ac4b637 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3779,6 +3779,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
if (ret)
goto out_release_instance;
+ nvme_get_ctrl(ctrl);
cdev_init(&ctrl->cdev, &nvme_dev_fops);
ctrl->cdev.owner = ops->module;
ret = cdev_device_add(&ctrl->cdev, ctrl->device);
@@ -3795,6 +3796,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
return 0;
out_free_name:
+ nvme_put_ctrl(ctrl);
kfree_const(ctrl->device->kobj.name);
out_release_instance:
ida_simple_remove(&nvme_instance_ida, ctrl->instance);
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index ed88d5021772..d75ae1b201ad 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -3146,10 +3146,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
goto fail_ctrl;
}
- nvme_get_ctrl(&ctrl->ctrl);
-
if (!queue_delayed_work(nvme_wq, &ctrl->connect_work, 0)) {
- nvme_put_ctrl(&ctrl->ctrl);
dev_err(ctrl->ctrl.device,
"NVME-FC{%d}: failed to schedule initial connect\n",
ctrl->cnum);
@@ -3174,6 +3171,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
/* initiate nvme ctrl ref counting teardown */
nvme_uninit_ctrl(&ctrl->ctrl);
+ nvme_put_ctrl(&ctrl->ctrl);
/* Remove core ctrl ref. */
nvme_put_ctrl(&ctrl->ctrl);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index f194bb7ccb04..1a0eec110614 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2558,7 +2558,6 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
dev_info(dev->ctrl.device, "pci function %s\n", dev_name(&pdev->dev));
nvme_reset_ctrl(&dev->ctrl);
- nvme_get_ctrl(&dev->ctrl);
async_schedule(nvme_async_probe, dev);
return 0;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index d0cf19e593b2..0e998e85e962 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -2050,8 +2050,6 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
dev_info(ctrl->ctrl.device, "new ctrl: NQN \"%s\", addr %pISpcs\n",
ctrl->ctrl.opts->subsysnqn, &ctrl->addr);
- nvme_get_ctrl(&ctrl->ctrl);
-
mutex_lock(&nvme_rdma_ctrl_mutex);
list_add_tail(&ctrl->list, &nvme_rdma_ctrl_list);
mutex_unlock(&nvme_rdma_ctrl_mutex);
@@ -2061,6 +2059,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
out_uninit_ctrl:
nvme_uninit_ctrl(&ctrl->ctrl);
nvme_put_ctrl(&ctrl->ctrl);
+ nvme_put_ctrl(&ctrl->ctrl);
if (ret > 0)
ret = -EIO;
return ERR_PTR(ret);
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 137a27fa369c..e30080c629fd 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -639,8 +639,6 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
dev_info(ctrl->ctrl.device,
"new ctrl: \"%s\"\n", ctrl->ctrl.opts->subsysnqn);
- nvme_get_ctrl(&ctrl->ctrl);
-
changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
WARN_ON_ONCE(!changed);
@@ -658,6 +656,7 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
kfree(ctrl->queues);
out_uninit_ctrl:
nvme_uninit_ctrl(&ctrl->ctrl);
+ nvme_put_ctrl(&ctrl->ctrl);
out_put_ctrl:
nvme_put_ctrl(&ctrl->ctrl);
if (ret > 0)
--
2.25.1
1
18

17 Nov '20
mainline inclusion
from mainline-v5.10-rc2
commit b40813ddcd6b
category: bugfix
bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1891363
CVE: NA
backport: openEuler-20.09
Here is the testcase:
1. rbd create --size 2G rbdpool/foo
2. rbd-nbd map rbdpool/foo
3. mkfs.ext4 /dev/nbd0
4. mount /dev/nbd0 /mnt
5. rbd resize --size 4G rbdpool/foo
6. ls /mnt
ls will get stuck here forever.
--------------------------------
From: Ming Lei <ming.lei(a)redhat.com>
[ Upstream commit b40813ddcd6bf9f01d020804e4cb8febc480b9e4 ]
A mounted NBD device can be resized; one use case is rbd-nbd.
Fix the issue by setting up the default block size when the device is
started, and by no longer touching it in nbd_size_update(). This usage
is aligned with the loop driver, which has the same use case.
Cc: stable(a)vger.kernel.org
Fixes: c8a83a6b54d0 ("nbd: Use set_blocksize() to set device blocksize")
Reported-by: lining <lining2020x(a)163.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Cc: Josef Bacik <josef(a)toxicpanda.com>
Cc: Jan Kara <jack(a)suse.cz>
Tested-by: lining <lining2020x(a)163.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: lining <lining_yewu(a)cmss.chinamobile.com>
---
drivers/block/nbd.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index cdf62fb94fb15..9a0fb2d52a76c 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -268,7 +268,7 @@ static void nbd_size_clear(struct nbd_device *nbd)
}
}
-static void nbd_size_update(struct nbd_device *nbd)
+static void nbd_size_update(struct nbd_device *nbd, bool start)
{
struct nbd_config *config = nbd->config;
struct block_device *bdev = bdget_disk(nbd->disk, 0);
@@ -279,7 +279,8 @@ static void nbd_size_update(struct nbd_device *nbd)
if (bdev) {
if (bdev->bd_disk) {
bd_set_size(bdev, config->bytesize);
- set_blocksize(bdev, config->blksize);
+ if (start)
+ set_blocksize(bdev, config->blksize);
} else
bdev->bd_invalidated = 1;
bdput(bdev);
@@ -294,7 +295,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
config->blksize = blocksize;
config->bytesize = blocksize * nr_blocks;
if (nbd->task_recv != NULL)
- nbd_size_update(nbd);
+ nbd_size_update(nbd, false);
}
static void nbd_complete_rq(struct request *req)
@@ -1231,7 +1232,7 @@ static int nbd_start_device(struct nbd_device *nbd)
args->index = i;
queue_work(nbd->recv_workq, &args->work);
}
- nbd_size_update(nbd);
+ nbd_size_update(nbd, true);
return error;
}
--
2.27.0
2
1

17 Nov '20
mainline inclusion
from mainline-v5.10-rc2
commit b40813ddcd6b
category: bugfix
bugzilla: 1891363
CVE: NA
--------------------------------
A mounted NBD device can be resized; one use case is rbd-nbd.
Fix the issue by setting up the default block size when the device is
started, and by no longer touching it in nbd_size_update(). This usage
is aligned with the loop driver, which has the same use case.
Cc: stable(a)vger.kernel.org
Fixes: c8a83a6b54d0 ("nbd: Use set_blocksize() to set device blocksize")
Reported-by: lining <lining2020x(a)163.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Cc: Josef Bacik <josef(a)toxicpanda.com>
Cc: Jan Kara <jack(a)suse.cz>
Tested-by: lining <lining2020x(a)163.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: lining <lining_yewu(a)cmss.chinamobile.com>
---
drivers/block/nbd.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 816188aa841f..a913153d0b6b 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -293,7 +293,7 @@ static void nbd_size_clear(struct nbd_device *nbd)
}
}
-static void nbd_size_update(struct nbd_device *nbd)
+static void nbd_size_update(struct nbd_device *nbd, bool start)
{
struct nbd_config *config = nbd->config;
struct block_device *bdev = bdget_disk(nbd->disk, 0);
@@ -309,7 +309,8 @@ static void nbd_size_update(struct nbd_device *nbd)
if (bdev) {
if (bdev->bd_disk) {
bd_set_size(bdev, config->bytesize);
- set_blocksize(bdev, config->blksize);
+ if (start)
+ set_blocksize(bdev, config->blksize);
} else
bdev->bd_invalidated = 1;
bdput(bdev);
@@ -324,7 +325,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
config->blksize = blocksize;
config->bytesize = blocksize * nr_blocks;
if (nbd->task_recv != NULL)
- nbd_size_update(nbd);
+ nbd_size_update(nbd, false);
}
static void nbd_complete_rq(struct request *req)
@@ -1263,7 +1264,7 @@ static int nbd_start_device(struct nbd_device *nbd)
args->index = i;
queue_work(nbd->recv_workq, &args->work);
}
- nbd_size_update(nbd);
+ nbd_size_update(nbd, true);
return error;
}
--
2.25.1
2
2
Alexander Duyck (1):
e1000: Do not perform reset in reset_task if we are already down
Alistair Popple (1):
mm/rmap: fixup copying of soft dirty and uffd ptes
Aya Levin (2):
net/mlx5e: Fix VLAN cleanup flow
net/mlx5e: Fix VLAN create flow
Chuck Lever (1):
svcrdma: Fix leak of transport addresses
Cong Wang (1):
tipc: fix the skb_unshare() in tipc_buf_append()
Dan Aloni (1):
svcrdma: fix bounce buffers for unaligned offsets and multiple pages
Dumitru Ceara (1):
openvswitch: handle DNAT tuple collision
Eran Ben Elisha (1):
net/mlx5: Don't call timecounter cyc2time directly from 1PPS flow
Hanjun Guo (4):
irqchip/gicv3: Call acpi_put_table() to fix memory leak
tty/amba-pl011: Call acpi_put_table() to fix memory leak
cpufreq : CPPC: Break out if HiSilicon CPPC workaround is matched
cpufreq: CPPC: put ACPI table after using it
Jonathan Lemon (1):
mlx4: handle non-napi callers to napi_poll
Linus Torvalds (1):
tty: make FONTX ioctl use the tty pointer they were actually passed
Nikolai Merinov (1):
partitions/efi: Fix partition name parsing in GUID partition entry
Rohit Maheshwari (1):
net/tls: sendfile fails with ktls offload
Sabrina Dubroca (1):
xfrmi: drop ignore_df check before updating pmtu
Tonghao Zhang (2):
net: openvswitch: use u64 for meter bucket
net: openvswitch: use div_u64() for 64-by-32 divisions
Tuong Lien (1):
tipc: fix memory leak in service subscripting
Yang Shi (1):
mm: madvise: fix vma user-after-free
Yunsheng Lin (1):
net: sch_generic: aviod concurrent reset and enqueue op for lockless
qdisc
kiyin(尹亮) (1):
perf/core: Fix a memory leak in perf_event_parse_addr_filter()
block/partitions/efi.c | 35 +++++++++----
block/partitions/efi.h | 2 +-
drivers/cpufreq/cppc_cpufreq.c | 8 ++-
drivers/irqchip/irq-gic-v3.c | 2 +
drivers/net/ethernet/intel/e1000/e1000_main.c | 18 +++++--
drivers/net/ethernet/mellanox/mlx4/en_rx.c | 3 ++
drivers/net/ethernet/mellanox/mlx4/en_tx.c | 2 +-
.../net/ethernet/mellanox/mlx5/core/en_fs.c | 14 ++++--
.../ethernet/mellanox/mlx5/core/lib/clock.c | 5 +-
drivers/tty/serial/amba-pl011.c | 2 +
drivers/tty/vt/vt_ioctl.c | 32 ++++++------
kernel/events/core.c | 12 ++---
mm/madvise.c | 2 +-
mm/migrate.c | 9 +++-
mm/rmap.c | 7 ++-
net/openvswitch/conntrack.c | 22 +++++----
net/openvswitch/meter.c | 4 +-
net/openvswitch/meter.h | 2 +-
net/sched/sch_generic.c | 49 +++++++++++++------
net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 1 +
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 3 +-
net/tipc/msg.c | 3 +-
net/tipc/topsrv.c | 4 +-
net/tls/tls_device.c | 11 +++--
net/xfrm/xfrm_interface.c | 2 +-
25 files changed, 168 insertions(+), 86 deletions(-)
--
2.25.1
1
24
From: Ding Tianhong <dingtianhong(a)huawei.com>
ascend inclusion
category: feature
bugzilla: NA
CVE: NA
-------------------------------------------------
mm->owner is currently only used by MEMCG, but the ascend share
pool feature will use it later, so turn it into a general feature
(CONFIG_MM_OWNER) and have CONFIG_MEMCG select it.
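With the new option, code that needs mm->owner can depend on
CONFIG_MM_OWNER directly instead of on CONFIG_MEMCG. A hypothetical
accessor (not part of this patch) pairing with the WRITE_ONCE() in
mm_clear_owner() could look like:

#ifdef CONFIG_MM_OWNER
static inline struct task_struct *mm_owner(struct mm_struct *mm)
{
        /* pairs with WRITE_ONCE(mm->owner, NULL) in mm_clear_owner() */
        return READ_ONCE(mm->owner);
}
#else
static inline struct task_struct *mm_owner(struct mm_struct *mm)
{
        return NULL;
}
#endif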
Signed-off-by: Tang Yizhou <tangyizhou(a)huawei.com>
Signed-off-by: Ding Tianhong <dingtianhong(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/mm_types.h | 2 +-
init/Kconfig | 1 +
kernel/exit.c | 4 ++--
kernel/fork.c | 4 ++--
mm/Kconfig | 4 ++++
mm/debug.c | 2 +-
6 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 178e9dee217a..fcfa9a75c18e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -453,7 +453,7 @@ struct mm_struct {
spinlock_t ioctx_lock;
struct kioctx_table __rcu *ioctx_table;
#endif
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_MM_OWNER
/*
* "owner" points to a task that is regarded as the canonical
* user/owner of this mm. All of the following must be true in
diff --git a/init/Kconfig b/init/Kconfig
index d1427ae4de9e..6880b55901bb 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -705,6 +705,7 @@ config MEMCG
bool "Memory controller"
select PAGE_COUNTER
select EVENTFD
+ select MM_OWNER
help
Provides control over the memory footprint of tasks in a cgroup.
diff --git a/kernel/exit.c b/kernel/exit.c
index 891d65e3ffd5..4d6f941712b6 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -392,7 +392,7 @@ kill_orphaned_pgrp(struct task_struct *tsk, struct task_struct *parent)
}
}
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_MM_OWNER
/*
* A task is exiting. If it owned this mm, find a new owner for the mm.
*/
@@ -478,7 +478,7 @@ void mm_update_next_owner(struct mm_struct *mm)
task_unlock(c);
put_task_struct(c);
}
-#endif /* CONFIG_MEMCG */
+#endif /* CONFIG_MM_OWNER */
/*
* Turn us into a lazy TLB process if we
diff --git a/kernel/fork.c b/kernel/fork.c
index 768fe41a7ee3..1ac49d1852cf 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -958,7 +958,7 @@ static void mm_init_aio(struct mm_struct *mm)
static __always_inline void mm_clear_owner(struct mm_struct *mm,
struct task_struct *p)
{
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_MM_OWNER
if (mm->owner == p)
WRITE_ONCE(mm->owner, NULL);
#endif
@@ -966,7 +966,7 @@ static __always_inline void mm_clear_owner(struct mm_struct *mm,
static void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
{
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_MM_OWNER
mm->owner = p;
#endif
}
diff --git a/mm/Kconfig b/mm/Kconfig
index dddeb30d645e..76c2197a3f99 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -299,6 +299,10 @@ config VIRT_TO_BUS
deprecated interface virt_to_bus(). All new architectures
should probably not select this.
+config MM_OWNER
+ bool "Enable tracking of the mm owner"
+ help
+ This option enables each mm_struct to have an owner task.
config MMU_NOTIFIER
bool
diff --git a/mm/debug.c b/mm/debug.c
index 362ce581671e..2da184b16bce 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -129,7 +129,7 @@ void dump_mm(const struct mm_struct *mm)
#ifdef CONFIG_AIO
"ioctx_table %px\n"
#endif
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_MM_OWNER
"owner %px "
#endif
"exe_file %px\n"
--
2.25.1
1
9
From: Chenguangli <chenguangli2(a)huawei.com>
driver inclusion
category: feature
bugzilla: NA
-----------------------------------------------------------------------
This module includes cfg, cqm, hwdev, hwif, mgmt and sml, and is mainly used to
initialize chip capabilities and to set up the resources for communication between
the driver and the chip.
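For orientation, a hypothetical caller of the API CMD interface added below
could look like the following sketch. It is not part of the patch, and
HIFC_NODE_ID_MGMT is an assumed destination id (the real values live in
hifc_hw.h):

#include "hifc_hw.h"
#include "hifc_hwdev.h"
#include "hifc_api_cmd.h"

/* Sketch only: create the per-device API CMD chains, send one command to
 * the management CPU, then tear the chains down again.
 */
static int example_mgmt_write(struct hifc_hwdev *hwdev, void *cmd, u16 cmd_len)
{
        struct hifc_api_cmd_chain *chains[HIFC_API_CMD_MAX];
        int err;

        err = hifc_api_cmd_init(hwdev, chains);
        if (err)
                return err;

        /* HIFC_NODE_ID_MGMT is an assumed destination id from hifc_hw.h */
        err = hifc_api_cmd_write(chains[HIFC_API_CMD_WRITE_TO_MGMT_CPU],
                                 HIFC_NODE_ID_MGMT, cmd, cmd_len);

        hifc_api_cmd_free(chains);
        return err;
}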
Signed-off-by: Chenguangli <chenguangli2(a)huawei.com>
Acked-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/scsi/huawei/hifc/hifc_api_cmd.c | 1155 ++++++
drivers/scsi/huawei/hifc/hifc_api_cmd.h | 268 ++
drivers/scsi/huawei/hifc/hifc_cfg.c | 823 +++++
drivers/scsi/huawei/hifc/hifc_cfg.h | 171 +
drivers/scsi/huawei/hifc/hifc_cmdq.c | 1507 ++++++++
drivers/scsi/huawei/hifc/hifc_cmdq.h | 210 ++
drivers/scsi/huawei/hifc/hifc_cqm_main.c | 694 ++++
drivers/scsi/huawei/hifc/hifc_cqm_main.h | 366 ++
drivers/scsi/huawei/hifc/hifc_cqm_object.c | 3599 +++++++++++++++++++
drivers/scsi/huawei/hifc/hifc_cqm_object.h | 244 ++
drivers/scsi/huawei/hifc/hifc_eqs.c | 1347 +++++++
drivers/scsi/huawei/hifc/hifc_eqs.h | 233 ++
drivers/scsi/huawei/hifc/hifc_hw.h | 611 ++++
drivers/scsi/huawei/hifc/hifc_hwdev.c | 3675 ++++++++++++++++++++
drivers/scsi/huawei/hifc/hifc_hwdev.h | 456 +++
drivers/scsi/huawei/hifc/hifc_hwif.c | 630 ++++
drivers/scsi/huawei/hifc/hifc_hwif.h | 243 ++
drivers/scsi/huawei/hifc/hifc_mgmt.c | 1426 ++++++++
drivers/scsi/huawei/hifc/hifc_mgmt.h | 407 +++
drivers/scsi/huawei/hifc/hifc_sml.c | 361 ++
drivers/scsi/huawei/hifc/hifc_sml.h | 183 +
drivers/scsi/huawei/hifc/hifc_wq.c | 624 ++++
drivers/scsi/huawei/hifc/hifc_wq.h | 165 +
23 files changed, 19398 insertions(+)
create mode 100644 drivers/scsi/huawei/hifc/hifc_api_cmd.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_api_cmd.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_cfg.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_cfg.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_cmdq.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_cmdq.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_cqm_main.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_cqm_main.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_cqm_object.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_cqm_object.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_eqs.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_eqs.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_hw.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_hwdev.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_hwdev.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_hwif.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_hwif.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_mgmt.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_mgmt.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_sml.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_sml.h
create mode 100644 drivers/scsi/huawei/hifc/hifc_wq.c
create mode 100644 drivers/scsi/huawei/hifc/hifc_wq.h
diff --git a/drivers/scsi/huawei/hifc/hifc_api_cmd.c b/drivers/scsi/huawei/hifc/hifc_api_cmd.c
new file mode 100644
index 000000000000..22632f779582
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_api_cmd.c
@@ -0,0 +1,1155 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/completion.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/semaphore.h>
+#include <linux/jiffies.h>
+#include <linux/delay.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwdev.h"
+#include "hifc_hwif.h"
+#include "hifc_api_cmd.h"
+
+#define API_CMD_CHAIN_CELL_SIZE_SHIFT 6U
+
+#define API_CMD_CELL_DESC_SIZE 8
+#define API_CMD_CELL_DATA_ADDR_SIZE 8
+
+#define API_CHAIN_NUM_CELLS 32
+#define API_CHAIN_CELL_SIZE 128
+#define API_CHAIN_RSP_DATA_SIZE 128
+
+#define API_CMD_CELL_WB_ADDR_SIZE 8
+
+#define API_CHAIN_CELL_ALIGNMENT 8
+
+#define API_CMD_TIMEOUT 10000
+#define API_CMD_STATUS_TIMEOUT 100000
+
+#define API_CMD_BUF_SIZE 2048ULL
+
+#define API_CMD_NODE_ALIGN_SIZE 512ULL
+#define API_PAYLOAD_ALIGN_SIZE 64ULL
+
+#define API_CHAIN_RESP_ALIGNMENT 64ULL
+
+#define COMPLETION_TIMEOUT_DEFAULT 1000UL
+#define POLLING_COMPLETION_TIMEOUT_DEFAULT 1000U
+
+#define API_CMD_RESPONSE_DATA_PADDR(val) be64_to_cpu(*((u64 *)(val)))
+
+#define READ_API_CMD_PRIV_DATA(id, token) (((id) << 16) + (token))
+#define WRITE_API_CMD_PRIV_DATA(id) (((u8)id) << 16)
+
+#define MASKED_IDX(chain, idx) ((idx) & ((chain)->num_cells - 1))
+
+#define SIZE_4BYTES(size) (ALIGN((u32)(size), 4U) >> 2)
+#define SIZE_8BYTES(size) (ALIGN((u32)(size), 8U) >> 3)
+
+enum api_cmd_data_format {
+ SGL_DATA = 1,
+};
+
+enum api_cmd_type {
+ API_CMD_WRITE_TYPE = 0,
+ API_CMD_READ_TYPE = 1,
+};
+
+enum api_cmd_bypass {
+ NOT_BYPASS = 0,
+ BYPASS = 1,
+};
+
+enum api_cmd_resp_aeq {
+ NOT_TRIGGER = 0,
+ TRIGGER = 1,
+};
+
+static u8 xor_chksum_set(void *data)
+{
+ int idx;
+ u8 checksum = 0;
+ u8 *val = data;
+
+ for (idx = 0; idx < 7; idx++)
+ checksum ^= val[idx];
+
+ return checksum;
+}
+
+static void set_prod_idx(struct hifc_api_cmd_chain *chain)
+{
+ enum hifc_api_cmd_chain_type chain_type = chain->chain_type;
+ struct hifc_hwif *hwif = chain->hwdev->hwif;
+ u32 hw_prod_idx_addr = HIFC_CSR_API_CMD_CHAIN_PI_ADDR(chain_type);
+ u32 prod_idx = chain->prod_idx;
+
+ hifc_hwif_write_reg(hwif, hw_prod_idx_addr, prod_idx);
+}
+
+static u32 get_hw_cons_idx(struct hifc_api_cmd_chain *chain)
+{
+ u32 addr, val;
+
+ addr = HIFC_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hifc_hwif_read_reg(chain->hwdev->hwif, addr);
+
+ return HIFC_API_CMD_STATUS_GET(val, CONS_IDX);
+}
+
+static void dump_api_chain_reg(struct hifc_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ u32 addr, val;
+
+ addr = HIFC_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hifc_hwif_read_reg(chain->hwdev->hwif, addr);
+
+ sdk_err(dev, "Chain type: 0x%x, cpld error: 0x%x, check error: 0x%x, current fsm: 0x%x\n",
+ chain->chain_type, HIFC_API_CMD_STATUS_GET(val, CPLD_ERR),
+ HIFC_API_CMD_STATUS_GET(val, CHKSUM_ERR),
+ HIFC_API_CMD_STATUS_GET(val, FSM));
+
+ sdk_err(dev, "Chain hw current ci: 0x%x\n",
+ HIFC_API_CMD_STATUS_GET(val, CONS_IDX));
+
+ addr = HIFC_CSR_API_CMD_CHAIN_PI_ADDR(chain->chain_type);
+ val = hifc_hwif_read_reg(chain->hwdev->hwif, addr);
+ sdk_err(dev, "Chain hw current pi: 0x%x\n", val);
+}
+
+/**
+ * chain_busy - check if the chain is still processing last requests
+ * @chain: chain to check
+ * Return: 0 - success, negative - failure
+ **/
+static int chain_busy(struct hifc_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ struct hifc_api_cmd_cell_ctxt *ctxt;
+ u64 resp_header;
+
+ ctxt = &chain->cell_ctxt[chain->prod_idx];
+
+ switch (chain->chain_type) {
+ case HIFC_API_CMD_MULTI_READ:
+ case HIFC_API_CMD_POLL_READ:
+ resp_header = be64_to_cpu(ctxt->resp->header);
+ if (ctxt->status &&
+ !HIFC_API_CMD_RESP_HEADER_VALID(resp_header)) {
+ sdk_err(dev, "Context(0x%x) busy!, pi: %d, resp_header: 0x%08x%08x\n",
+ ctxt->status, chain->prod_idx,
+ upper_32_bits(resp_header),
+ lower_32_bits(resp_header));
+ dump_api_chain_reg(chain);
+ return -EBUSY;
+ }
+ break;
+ case HIFC_API_CMD_POLL_WRITE:
+ case HIFC_API_CMD_WRITE_TO_MGMT_CPU:
+ case HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ chain->cons_idx = get_hw_cons_idx(chain);
+
+ if (chain->cons_idx == MASKED_IDX(chain, chain->prod_idx + 1)) {
+ sdk_err(dev, "API CMD chain %d is busy, cons_idx = %d, prod_idx = %d\n",
+ chain->chain_type, chain->cons_idx,
+ chain->prod_idx);
+ dump_api_chain_reg(chain);
+ return -EBUSY;
+ }
+ break;
+ default:
+ sdk_err(dev, "Unknown Chain type %d\n", chain->chain_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * get_cell_data_size - get the data size of specific cell type
+ * @type: chain type
+ * @cmd_size: the command size
+ * Return: cell_data_size
+ **/
+static u16 get_cell_data_size(enum hifc_api_cmd_chain_type type, u16 cmd_size)
+{
+ u16 cell_data_size = 0;
+
+ switch (type) {
+ case HIFC_API_CMD_POLL_READ:
+ cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
+ API_CMD_CELL_WB_ADDR_SIZE +
+ API_CMD_CELL_DATA_ADDR_SIZE,
+ API_CHAIN_CELL_ALIGNMENT);
+ break;
+
+ case HIFC_API_CMD_WRITE_TO_MGMT_CPU:
+ case HIFC_API_CMD_POLL_WRITE:
+ case HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
+ API_CMD_CELL_DATA_ADDR_SIZE,
+ API_CHAIN_CELL_ALIGNMENT);
+ break;
+ default:
+ break;
+ }
+
+ return cell_data_size;
+}
+
+/**
+ * prepare_cell_ctrl - prepare the ctrl of the cell for the command
+ * @cell_ctrl: the control of the cell to set the control into it
+ * @cell_len: the size of the cell
+ **/
+static void prepare_cell_ctrl(u64 *cell_ctrl, u16 cell_len)
+{
+ u64 ctrl;
+ u8 chksum;
+
+ ctrl = HIFC_API_CMD_CELL_CTRL_SET(SIZE_8BYTES(cell_len), CELL_LEN) |
+ HIFC_API_CMD_CELL_CTRL_SET(0ULL, RD_DMA_ATTR_OFF) |
+ HIFC_API_CMD_CELL_CTRL_SET(0ULL, WR_DMA_ATTR_OFF);
+
+ chksum = xor_chksum_set(&ctrl);
+
+ ctrl |= HIFC_API_CMD_CELL_CTRL_SET(chksum, XOR_CHKSUM);
+
+ /* The data in the HW should be in Big Endian Format */
+ *cell_ctrl = cpu_to_be64(ctrl);
+}
+
+/**
+ * prepare_api_cmd - prepare API CMD command
+ * @chain: chain for the command
+ * @cell: the cell of the command
+ * @dest: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ **/
+static void prepare_api_cmd(struct hifc_api_cmd_chain *chain,
+ struct hifc_api_cmd_cell *cell,
+ enum hifc_node_id dest,
+ const void *cmd, u16 cmd_size)
+{
+ struct hifc_api_cmd_cell_ctxt *cell_ctxt;
+ u32 priv;
+
+ cell_ctxt = &chain->cell_ctxt[chain->prod_idx];
+
+ switch (chain->chain_type) {
+ case HIFC_API_CMD_POLL_READ:
+ priv = READ_API_CMD_PRIV_DATA(chain->chain_type,
+ cell_ctxt->saved_prod_idx);
+ cell->desc = HIFC_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HIFC_API_CMD_DESC_SET(API_CMD_READ_TYPE, RD_WR) |
+ HIFC_API_CMD_DESC_SET(BYPASS, MGMT_BYPASS) |
+ HIFC_API_CMD_DESC_SET(NOT_TRIGGER, RESP_AEQE_EN) |
+ HIFC_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ case HIFC_API_CMD_POLL_WRITE:
+ priv = WRITE_API_CMD_PRIV_DATA(chain->chain_type);
+ cell->desc = HIFC_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HIFC_API_CMD_DESC_SET(API_CMD_WRITE_TYPE, RD_WR) |
+ HIFC_API_CMD_DESC_SET(BYPASS, MGMT_BYPASS) |
+ HIFC_API_CMD_DESC_SET(NOT_TRIGGER, RESP_AEQE_EN) |
+ HIFC_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ case HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ case HIFC_API_CMD_WRITE_TO_MGMT_CPU:
+ priv = WRITE_API_CMD_PRIV_DATA(chain->chain_type);
+ cell->desc = HIFC_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HIFC_API_CMD_DESC_SET(API_CMD_WRITE_TYPE, RD_WR) |
+ HIFC_API_CMD_DESC_SET(NOT_BYPASS, MGMT_BYPASS) |
+ HIFC_API_CMD_DESC_SET(TRIGGER, RESP_AEQE_EN) |
+ HIFC_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ default:
+ sdk_err(chain->hwdev->dev_hdl, "Unknown Chain type: %d\n",
+ chain->chain_type);
+ return;
+ }
+
+ cell->desc |= HIFC_API_CMD_DESC_SET(dest, DEST) |
+ HIFC_API_CMD_DESC_SET(SIZE_4BYTES(cmd_size), SIZE);
+
+ cell->desc |= HIFC_API_CMD_DESC_SET(xor_chksum_set(&cell->desc),
+ XOR_CHKSUM);
+
+ /* The data in the HW should be in Big Endian Format */
+ cell->desc = cpu_to_be64(cell->desc);
+
+ memcpy(cell_ctxt->api_cmd_vaddr, cmd, cmd_size);
+}
+
+/**
+ * prepare_cell - prepare cell ctrl and cmd in the current producer cell
+ * @chain: chain for the command
+ * @dest: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ **/
+static void prepare_cell(struct hifc_api_cmd_chain *chain,
+ enum hifc_node_id dest,
+ void *cmd, u16 cmd_size)
+{
+ struct hifc_api_cmd_cell *curr_node;
+ u16 cell_size;
+
+ curr_node = chain->curr_node;
+
+ cell_size = get_cell_data_size(chain->chain_type, cmd_size);
+
+ prepare_cell_ctrl(&curr_node->ctrl, cell_size);
+ prepare_api_cmd(chain, curr_node, dest, cmd, cmd_size);
+}
+
+static inline void cmd_chain_prod_idx_inc(struct hifc_api_cmd_chain *chain)
+{
+ chain->prod_idx = MASKED_IDX(chain, chain->prod_idx + 1);
+}
+
+static void issue_api_cmd(struct hifc_api_cmd_chain *chain)
+{
+ set_prod_idx(chain);
+}
+
+/**
+ * api_cmd_status_update - update the status of the chain
+ * @chain: chain to update
+ **/
+static void api_cmd_status_update(struct hifc_api_cmd_chain *chain)
+{
+ struct hifc_api_cmd_status *wb_status;
+ enum hifc_api_cmd_chain_type chain_type;
+ u64 status_header;
+ u32 buf_desc;
+
+ wb_status = chain->wb_status;
+
+ buf_desc = be32_to_cpu(wb_status->buf_desc);
+ if (HIFC_API_CMD_STATUS_GET(buf_desc, CHKSUM_ERR))
+ return;
+
+ status_header = be64_to_cpu(wb_status->header);
+ chain_type = HIFC_API_CMD_STATUS_HEADER_GET(status_header, CHAIN_ID);
+ if (chain_type >= HIFC_API_CMD_MAX)
+ return;
+
+ if (chain_type != chain->chain_type)
+ return;
+
+ chain->cons_idx = HIFC_API_CMD_STATUS_GET(buf_desc, CONS_IDX);
+}
+
+/**
+ * wait_for_status_poll - wait for write to mgmt command to complete
+ * @chain: the chain of the command
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_status_poll(struct hifc_api_cmd_chain *chain)
+{
+ int err = -ETIMEDOUT;
+ u32 cnt = 0;
+
+ while (cnt < API_CMD_STATUS_TIMEOUT &&
+ chain->hwdev->chip_present_flag) {
+ api_cmd_status_update(chain);
+
+ /* SYNC API CMD cmd should start after prev cmd finished */
+ if (chain->cons_idx == chain->prod_idx) {
+ err = 0;
+ break;
+ }
+
+ usleep_range(50, 100);
+ cnt++;
+ }
+
+ return err;
+}
+
+static void copy_resp_data(struct hifc_api_cmd_cell_ctxt *ctxt, void *ack,
+ u16 ack_size)
+{
+ struct hifc_api_cmd_resp_fmt *resp = ctxt->resp;
+
+ memcpy(ack, &resp->resp_data, ack_size);
+ ctxt->status = 0;
+}
+
+/**
+ * wait_for_resp_polling - poll for the response data of a read api-command
+ * @ctxt: pointer to the cell context of the command
+ *
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_resp_polling(struct hifc_api_cmd_cell_ctxt *ctxt)
+{
+ u64 resp_header;
+ int ret = -ETIMEDOUT;
+ u32 cnt = 0;
+
+ while (cnt < POLLING_COMPLETION_TIMEOUT_DEFAULT) {
+ resp_header = be64_to_cpu(ctxt->resp->header);
+
+ rmb(); /* read the latest header */
+
+ if (HIFC_API_CMD_RESP_HEADER_VALID(resp_header)) {
+ ret = 0;
+ break;
+ }
+ usleep_range(100, 1000);
+ cnt++;
+ }
+
+ if (ret)
+ pr_err("Wait for api chain response timeout\n");
+
+ return ret;
+}
+
+/**
+ * wait_for_api_cmd_completion - wait for command to complete
+ * @chain: chain for the command
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_api_cmd_completion(struct hifc_api_cmd_chain *chain,
+ struct hifc_api_cmd_cell_ctxt *ctxt,
+ void *ack, u16 ack_size)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ int err = 0;
+
+ switch (chain->chain_type) {
+ case HIFC_API_CMD_POLL_READ:
+ err = wait_for_resp_polling(ctxt);
+ if (!err)
+ copy_resp_data(ctxt, ack, ack_size);
+ break;
+ case HIFC_API_CMD_POLL_WRITE:
+ case HIFC_API_CMD_WRITE_TO_MGMT_CPU:
+ err = wait_for_status_poll(chain);
+ if (err) {
+ sdk_err(dev, "API CMD Poll status timeout, chain type: %d\n",
+ chain->chain_type);
+ break;
+ }
+ break;
+ case HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ /* No need to wait */
+ break;
+ default:
+ sdk_err(dev, "Unknown API CMD Chain type: %d\n",
+ chain->chain_type);
+ err = -EINVAL;
+ break;
+ }
+
+ if (err)
+ dump_api_chain_reg(chain);
+
+ return err;
+}
+
+static inline void update_api_cmd_ctxt(struct hifc_api_cmd_chain *chain,
+ struct hifc_api_cmd_cell_ctxt *ctxt)
+{
+ ctxt->status = 1;
+ ctxt->saved_prod_idx = chain->prod_idx;
+ if (ctxt->resp) {
+ ctxt->resp->header = 0;
+
+ /* make sure "header" was cleared */
+ wmb();
+ }
+}
+
+/**
+ * api_cmd - API CMD command
+ * @chain: chain for the command
+ * @dest: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ * @ack: buffer for the response data
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd(struct hifc_api_cmd_chain *chain,
+ enum hifc_node_id dest,
+ void *cmd, u16 cmd_size, void *ack, u16 ack_size)
+{
+ struct hifc_api_cmd_cell_ctxt *ctxt;
+
+ if (chain->chain_type == HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock(&chain->async_lock);
+ else
+ down(&chain->sem);
+ ctxt = &chain->cell_ctxt[chain->prod_idx];
+ if (chain_busy(chain)) {
+ if (chain->chain_type == HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_unlock(&chain->async_lock);
+ else
+ up(&chain->sem);
+ return -EBUSY;
+ }
+ update_api_cmd_ctxt(chain, ctxt);
+
+ prepare_cell(chain, dest, cmd, cmd_size);
+
+ cmd_chain_prod_idx_inc(chain);
+
+ wmb(); /* issue the command */
+
+ issue_api_cmd(chain);
+
+ /* incremented prod idx, update ctxt */
+
+ chain->curr_node = chain->cell_ctxt[chain->prod_idx].cell_vaddr;
+ if (chain->chain_type == HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_unlock(&chain->async_lock);
+ else
+ up(&chain->sem);
+
+ return wait_for_api_cmd_completion(chain, ctxt, ack, ack_size);
+}
+
+/**
+ * hifc_api_cmd_write - Write API CMD command
+ * @chain: chain for write command
+ * @dest: destination node on the card that will receive the command
+ * @cmd: command data
+ * @size: the command size
+ * Return: 0 - success, negative - failure
+ **/
+int hifc_api_cmd_write(struct hifc_api_cmd_chain *chain,
+ enum hifc_node_id dest, void *cmd, u16 size)
+{
+ /* Verify the chain type */
+ return api_cmd(chain, dest, cmd, size, NULL, 0);
+}
+
+int hifc_api_cmd_read(struct hifc_api_cmd_chain *chain,
+ enum hifc_node_id dest,
+ void *cmd, u16 size, void *ack, u16 ack_size)
+{
+ return api_cmd(chain, dest, cmd, size, ack, ack_size);
+}
+
+/**
+ * api_cmd_hw_restart - restart the chain in the HW
+ * @chain: the API CMD specific chain to restart
+ **/
+static int api_cmd_hw_restart(struct hifc_api_cmd_chain *cmd_chain)
+{
+ struct hifc_hwif *hwif = cmd_chain->hwdev->hwif;
+ u32 reg_addr, val;
+ int err;
+ u32 cnt = 0;
+
+ /* Read Modify Write */
+ reg_addr = HIFC_CSR_API_CMD_CHAIN_REQ_ADDR(cmd_chain->chain_type);
+ val = hifc_hwif_read_reg(hwif, reg_addr);
+
+ val = HIFC_API_CMD_CHAIN_REQ_CLEAR(val, RESTART);
+ val |= HIFC_API_CMD_CHAIN_REQ_SET(1, RESTART);
+
+ hifc_hwif_write_reg(hwif, reg_addr, val);
+
+ err = -ETIMEDOUT;
+ while (cnt < API_CMD_TIMEOUT) {
+ val = hifc_hwif_read_reg(hwif, reg_addr);
+
+ if (!HIFC_API_CMD_CHAIN_REQ_GET(val, RESTART)) {
+ err = 0;
+ break;
+ }
+
+ usleep_range(900, 1000);
+ cnt++;
+ }
+
+ return err;
+}
+
+/**
+ * api_cmd_ctrl_init - set the control register of a chain
+ * @chain: the API CMD specific chain to set control register for
+ **/
+static void api_cmd_ctrl_init(struct hifc_api_cmd_chain *chain)
+{
+ struct hifc_hwif *hwif = chain->hwdev->hwif;
+ u32 reg_addr, ctrl;
+ u32 size;
+
+ /* Read Modify Write */
+ reg_addr = HIFC_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
+
+ size = (u32)ilog2(chain->cell_size >> API_CMD_CHAIN_CELL_SIZE_SHIFT);
+
+ ctrl = hifc_hwif_read_reg(hwif, reg_addr);
+
+ ctrl = HIFC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
+ HIFC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
+
+ ctrl |= HIFC_API_CMD_CHAIN_CTRL_SET(0, AEQE_EN) |
+ HIFC_API_CMD_CHAIN_CTRL_SET(size, CELL_SIZE);
+
+ hifc_hwif_write_reg(hwif, reg_addr, ctrl);
+}
+
+/**
+ * api_cmd_set_status_addr - set the status address of a chain in the HW
+ * @chain: the API CMD specific chain to set status address for
+ **/
+static void api_cmd_set_status_addr(struct hifc_api_cmd_chain *chain)
+{
+ struct hifc_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HIFC_CSR_API_CMD_STATUS_HI_ADDR(chain->chain_type);
+ val = upper_32_bits(chain->wb_status_paddr);
+ hifc_hwif_write_reg(hwif, addr, val);
+
+ addr = HIFC_CSR_API_CMD_STATUS_LO_ADDR(chain->chain_type);
+ val = lower_32_bits(chain->wb_status_paddr);
+ hifc_hwif_write_reg(hwif, addr, val);
+}
+
+/**
+ * api_cmd_set_num_cells - set the number cells of a chain in the HW
+ * @chain: the API CMD specific chain to set the number of cells for
+ **/
+static void api_cmd_set_num_cells(struct hifc_api_cmd_chain *chain)
+{
+ struct hifc_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HIFC_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(chain->chain_type);
+ val = chain->num_cells;
+ hifc_hwif_write_reg(hwif, addr, val);
+}
+
+/**
+ * api_cmd_head_init - set the head cell of a chain in the HW
+ * @chain: the API CMD specific chain to set the head for
+ **/
+static void api_cmd_head_init(struct hifc_api_cmd_chain *chain)
+{
+ struct hifc_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HIFC_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(chain->chain_type);
+ val = upper_32_bits(chain->head_cell_paddr);
+ hifc_hwif_write_reg(hwif, addr, val);
+
+ addr = HIFC_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(chain->chain_type);
+ val = lower_32_bits(chain->head_cell_paddr);
+ hifc_hwif_write_reg(hwif, addr, val);
+}
+
+/**
+ * wait_for_ready_chain - wait for the chain to be ready
+ * @chain: the API CMD specific chain to wait for
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_ready_chain(struct hifc_api_cmd_chain *chain)
+{
+ struct hifc_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+ u32 hw_cons_idx;
+ u32 cnt = 0;
+ int err;
+
+ addr = HIFC_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ err = -ETIMEDOUT;
+ while (cnt < API_CMD_TIMEOUT) {
+ val = hifc_hwif_read_reg(hwif, addr);
+ hw_cons_idx = HIFC_API_CMD_STATUS_GET(val, CONS_IDX);
+
+ /* wait for HW cons idx to be updated */
+ if (hw_cons_idx == chain->cons_idx) {
+ err = 0;
+ break;
+ }
+
+ usleep_range(900, 1000);
+ cnt++;
+ }
+
+ return err;
+}
+
+/**
+ * api_cmd_chain_hw_clean - clean the HW
+ * @chain: the API CMD specific chain
+ **/
+static void api_cmd_chain_hw_clean(struct hifc_api_cmd_chain *chain)
+{
+ struct hifc_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, ctrl;
+
+ addr = HIFC_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
+
+ ctrl = hifc_hwif_read_reg(hwif, addr);
+ ctrl = HIFC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, RESTART_EN) &
+ HIFC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_ERR) &
+ HIFC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
+ HIFC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_CHK_EN) &
+ HIFC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
+
+ hifc_hwif_write_reg(hwif, addr, ctrl);
+}
+
+/**
+ * api_cmd_chain_hw_init - initialize the chain in the HW
+ * @chain: the API CMD specific chain to initialize in HW
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_chain_hw_init(struct hifc_api_cmd_chain *chain)
+{
+ api_cmd_chain_hw_clean(chain);
+
+ api_cmd_set_status_addr(chain);
+
+ if (api_cmd_hw_restart(chain)) {
+ sdk_err(chain->hwdev->dev_hdl, "Failed to restart api_cmd_hw\n");
+ return -EBUSY;
+ }
+
+ api_cmd_ctrl_init(chain);
+ api_cmd_set_num_cells(chain);
+ api_cmd_head_init(chain);
+
+ return wait_for_ready_chain(chain);
+}
+
+/**
+ * alloc_cmd_buf - allocate a dma buffer for API CMD command
+ * @chain: the API CMD specific chain for the cmd
+ * @cell: the cell in the HW for the cmd
+ * @cell_idx: the index of the cell
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_cmd_buf(struct hifc_api_cmd_chain *chain,
+ struct hifc_api_cmd_cell *cell, u32 cell_idx)
+{
+ struct hifc_api_cmd_cell_ctxt *cell_ctxt;
+ void *dev = chain->hwdev->dev_hdl;
+ void *buf_vaddr;
+ u64 buf_paddr;
+ int err = 0;
+
+ buf_vaddr = (u8 *)((u64)chain->buf_vaddr_base +
+ chain->buf_size_align * cell_idx);
+ buf_paddr = chain->buf_paddr_base +
+ chain->buf_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+
+ cell_ctxt->api_cmd_vaddr = buf_vaddr;
+
+ /* set the cmd DMA address in the cell */
+ switch (chain->chain_type) {
+ case HIFC_API_CMD_POLL_READ:
+ cell->read.hw_cmd_paddr = cpu_to_be64(buf_paddr);
+ break;
+ case HIFC_API_CMD_WRITE_TO_MGMT_CPU:
+ case HIFC_API_CMD_POLL_WRITE:
+ case HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ /* The data in the HW should be in Big Endian Format */
+ cell->write.hw_cmd_paddr = cpu_to_be64(buf_paddr);
+ break;
+ default:
+ sdk_err(dev, "Unknown API CMD Chain type: %d\n",
+ chain->chain_type);
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+static void alloc_resp_buf(struct hifc_api_cmd_chain *chain,
+ struct hifc_api_cmd_cell *cell, u32 cell_idx)
+{
+ struct hifc_api_cmd_cell_ctxt *cell_ctxt;
+ void *resp_vaddr;
+ u64 resp_paddr;
+
+ resp_vaddr = (u8 *)((u64)chain->rsp_vaddr_base +
+ chain->rsp_size_align * cell_idx);
+ resp_paddr = chain->rsp_paddr_base +
+ chain->rsp_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+
+ cell_ctxt->resp = resp_vaddr;
+ cell->read.hw_wb_resp_paddr = cpu_to_be64(resp_paddr);
+}
+
+static int hifc_alloc_api_cmd_cell_buf(struct hifc_api_cmd_chain *chain,
+ u32 cell_idx,
+ struct hifc_api_cmd_cell *node)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ int err;
+
+ /* For read chain, we should allocate buffer for the response data */
+ if (chain->chain_type == HIFC_API_CMD_MULTI_READ ||
+ chain->chain_type == HIFC_API_CMD_POLL_READ)
+ alloc_resp_buf(chain, node, cell_idx);
+
+ switch (chain->chain_type) {
+ case HIFC_API_CMD_WRITE_TO_MGMT_CPU:
+ case HIFC_API_CMD_POLL_WRITE:
+ case HIFC_API_CMD_POLL_READ:
+ case HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ err = alloc_cmd_buf(chain, node, cell_idx);
+ if (err) {
+ sdk_err(dev, "Failed to allocate cmd buffer\n");
+ goto alloc_cmd_buf_err;
+ }
+ break;
+ /* For api command write and api command read, the data section
+ * is directly inserted in the cell, so no need to allocate.
+ */
+ case HIFC_API_CMD_MULTI_READ:
+ chain->cell_ctxt[cell_idx].api_cmd_vaddr =
+ &node->read.hw_cmd_paddr;
+ break;
+ default:
+ sdk_err(dev, "Unsupported API CMD chain type\n");
+ err = -EINVAL;
+ goto alloc_cmd_buf_err;
+ }
+
+ return 0;
+
+alloc_cmd_buf_err:
+
+ return err;
+}
+
+/**
+ * api_cmd_create_cell - create API CMD cell of specific chain
+ * @chain: the API CMD specific chain to create its cell
+ * @cell_idx: the cell index to create
+ * @pre_node: previous cell
+ * @node_vaddr: the virt addr of the cell
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_cell(struct hifc_api_cmd_chain *chain, u32 cell_idx,
+ struct hifc_api_cmd_cell *pre_node,
+ struct hifc_api_cmd_cell **node_vaddr)
+{
+ struct hifc_api_cmd_cell_ctxt *cell_ctxt;
+ struct hifc_api_cmd_cell *node;
+ void *cell_vaddr;
+ u64 cell_paddr;
+ int err;
+
+ cell_vaddr = (void *)((u64)chain->cell_vaddr_base +
+ chain->cell_size_align * cell_idx);
+ cell_paddr = chain->cell_paddr_base +
+ chain->cell_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+ cell_ctxt->cell_vaddr = cell_vaddr;
+ node = cell_ctxt->cell_vaddr;
+
+ if (!pre_node) {
+ chain->head_node = cell_vaddr;
+ chain->head_cell_paddr = cell_paddr;
+ } else {
+ /* The data in the HW should be in Big Endian Format */
+ pre_node->next_cell_paddr = cpu_to_be64(cell_paddr);
+ }
+
+ /* Driver software should make sure that there is an empty API
+ * command cell at the end of the chain
+ */
+ node->next_cell_paddr = 0;
+
+ err = hifc_alloc_api_cmd_cell_buf(chain, cell_idx, node);
+ if (err)
+ return err;
+
+ *node_vaddr = node;
+
+ return 0;
+}
+
+/**
+ * api_cmd_create_cells - create API CMD cells for specific chain
+ * @chain: the API CMD specific chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_cells(struct hifc_api_cmd_chain *chain)
+{
+ struct hifc_api_cmd_cell *node = NULL, *pre_node = NULL;
+ void *dev = chain->hwdev->dev_hdl;
+ u32 cell_idx;
+ int err;
+
+ for (cell_idx = 0; cell_idx < chain->num_cells; cell_idx++) {
+ err = api_cmd_create_cell(chain, cell_idx, pre_node, &node);
+ if (err) {
+ sdk_err(dev, "Failed to create API CMD cell\n");
+ return err;
+ }
+
+ pre_node = node;
+ }
+
+ if (!node)
+ return -EFAULT;
+
+ /* set the final node to point back to the start of the chain */
+ node->next_cell_paddr = cpu_to_be64(chain->head_cell_paddr);
+
+ /* set the current node to be the head */
+ chain->curr_node = chain->head_node;
+ return 0;
+}
+
+/**
+ * api_chain_init - initialize API CMD specific chain
+ * @chain: the API CMD specific chain to initialize
+ * @attr: attributes to set in the chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_chain_init(struct hifc_api_cmd_chain *chain,
+ struct hifc_api_cmd_chain_attr *attr)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ size_t cell_ctxt_size;
+ size_t cells_buf_size;
+ int err;
+
+ chain->chain_type = attr->chain_type;
+ chain->num_cells = attr->num_cells;
+ chain->cell_size = attr->cell_size;
+ chain->rsp_size = attr->rsp_size;
+
+ chain->prod_idx = 0;
+ chain->cons_idx = 0;
+
+ if (chain->chain_type == HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_init(&chain->async_lock);
+ else
+ sema_init(&chain->sem, 1);
+
+ cell_ctxt_size = chain->num_cells * sizeof(*chain->cell_ctxt);
+ if (!cell_ctxt_size) {
+ sdk_err(dev, "Api chain cell size cannot be zero\n");
+ err = -EINVAL;
+ goto alloc_cell_ctxt_err;
+ }
+
+ chain->cell_ctxt = kzalloc(cell_ctxt_size, GFP_KERNEL);
+ if (!chain->cell_ctxt) {
+ sdk_err(dev, "Failed to allocate cell contexts for a chain\n");
+ err = -ENOMEM;
+ goto alloc_cell_ctxt_err;
+ }
+
+ chain->wb_status = dma_zalloc_coherent(dev,
+ sizeof(*chain->wb_status),
+ &chain->wb_status_paddr,
+ GFP_KERNEL);
+ if (!chain->wb_status) {
+ sdk_err(dev, "Failed to allocate DMA wb status\n");
+ err = -ENOMEM;
+ goto alloc_wb_status_err;
+ }
+
+ chain->cell_size_align = ALIGN((u64)chain->cell_size,
+ API_CMD_NODE_ALIGN_SIZE);
+ chain->rsp_size_align = ALIGN((u64)chain->rsp_size,
+ API_CHAIN_RESP_ALIGNMENT);
+ chain->buf_size_align = ALIGN(API_CMD_BUF_SIZE, API_PAYLOAD_ALIGN_SIZE);
+
+ cells_buf_size = (chain->cell_size_align + chain->rsp_size_align +
+ chain->buf_size_align) * chain->num_cells;
+
+ err = hifc_dma_zalloc_coherent_align(dev, cells_buf_size,
+ API_CMD_NODE_ALIGN_SIZE,
+ GFP_KERNEL,
+ &chain->cells_addr);
+ if (err) {
+ sdk_err(dev, "Failed to allocate API CMD cells buffer\n");
+ goto alloc_cells_buf_err;
+ }
+
+ chain->cell_vaddr_base = chain->cells_addr.align_vaddr;
+ chain->cell_paddr_base = chain->cells_addr.align_paddr;
+
+ chain->rsp_vaddr_base = (u8 *)((u64)chain->cell_vaddr_base +
+ chain->cell_size_align * chain->num_cells);
+ chain->rsp_paddr_base = chain->cell_paddr_base +
+ chain->cell_size_align * chain->num_cells;
+
+ chain->buf_vaddr_base = (u8 *)((u64)chain->rsp_vaddr_base +
+ chain->rsp_size_align * chain->num_cells);
+ chain->buf_paddr_base = chain->rsp_paddr_base +
+ chain->rsp_size_align * chain->num_cells;
+
+ return 0;
+
+alloc_cells_buf_err:
+ dma_free_coherent(dev, sizeof(*chain->wb_status),
+ chain->wb_status, chain->wb_status_paddr);
+
+alloc_wb_status_err:
+ kfree(chain->cell_ctxt);
+
+alloc_cell_ctxt_err:
+ return err;
+}
+
+/**
+ * api_chain_free - free API CMD specific chain
+ * @chain: the API CMD specific chain to free
+ **/
+static void api_chain_free(struct hifc_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+
+ hifc_dma_free_coherent_align(dev, &chain->cells_addr);
+
+ dma_free_coherent(dev, sizeof(*chain->wb_status),
+ chain->wb_status, chain->wb_status_paddr);
+ kfree(chain->cell_ctxt);
+}
+
+/**
+ * api_cmd_create_chain - create API CMD specific chain
+ * @chain: the API CMD specific chain to create
+ * @attr: attributes to set in the chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_chain(struct hifc_api_cmd_chain **cmd_chain,
+ struct hifc_api_cmd_chain_attr *attr)
+{
+ struct hifc_hwdev *hwdev = attr->hwdev;
+ struct hifc_api_cmd_chain *chain;
+ int err;
+
+ if (attr->num_cells & (attr->num_cells - 1)) {
+ sdk_err(hwdev->dev_hdl, "Invalid number of cells, must be power of 2\n");
+ return -EINVAL;
+ }
+
+ chain = kzalloc(sizeof(*chain), GFP_KERNEL);
+ if (!chain)
+ return -ENOMEM;
+
+ chain->hwdev = hwdev;
+
+ err = api_chain_init(chain, attr);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize chain\n");
+ goto chain_init_err;
+ }
+
+ err = api_cmd_create_cells(chain);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to create cells for API CMD chain\n");
+ goto create_cells_err;
+ }
+
+ err = api_cmd_chain_hw_init(chain);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize chain HW\n");
+ goto chain_hw_init_err;
+ }
+
+ *cmd_chain = chain;
+ return 0;
+
+chain_hw_init_err:
+create_cells_err:
+ api_chain_free(chain);
+
+chain_init_err:
+ kfree(chain);
+ return err;
+}
+
+/**
+ * api_cmd_destroy_chain - destroy API CMD specific chain
+ * @chain: the API CMD specific chain to destroy
+ **/
+static void api_cmd_destroy_chain(struct hifc_api_cmd_chain *chain)
+{
+ api_chain_free(chain);
+ kfree(chain);
+}
+
+/**
+ * hifc_api_cmd_init - Initialize all the API CMD chains
+ * @hwdev: the pointer to hw device
+ * @chain: the API CMD chains that will be initialized
+ * Return: 0 - success, negative - failure
+ **/
+int hifc_api_cmd_init(struct hifc_hwdev *hwdev,
+ struct hifc_api_cmd_chain **chain)
+{
+ void *dev = hwdev->dev_hdl;
+ struct hifc_api_cmd_chain_attr attr;
+ enum hifc_api_cmd_chain_type chain_type, i;
+ int err;
+
+ attr.hwdev = hwdev;
+ attr.num_cells = API_CHAIN_NUM_CELLS;
+ attr.cell_size = API_CHAIN_CELL_SIZE;
+ attr.rsp_size = API_CHAIN_RSP_DATA_SIZE;
+
+ chain_type = HIFC_API_CMD_WRITE_TO_MGMT_CPU;
+ for (; chain_type < HIFC_API_CMD_MAX; chain_type++) {
+ attr.chain_type = chain_type;
+
+ err = api_cmd_create_chain(&chain[chain_type], &attr);
+ if (err) {
+ sdk_err(dev, "Failed to create chain %d\n", chain_type);
+ goto create_chain_err;
+ }
+ }
+
+ return 0;
+
+create_chain_err:
+ i = HIFC_API_CMD_WRITE_TO_MGMT_CPU;
+ for (; i < chain_type; i++)
+ api_cmd_destroy_chain(chain[i]);
+
+ return err;
+}
+
+/**
+ * hifc_api_cmd_free - free the API CMD chains
+ * @chain: the API CMD chains that will be freed
+ **/
+void hifc_api_cmd_free(struct hifc_api_cmd_chain **chain)
+{
+ enum hifc_api_cmd_chain_type chain_type;
+
+ chain_type = HIFC_API_CMD_WRITE_TO_MGMT_CPU;
+
+ for (; chain_type < HIFC_API_CMD_MAX; chain_type++)
+ api_cmd_destroy_chain(chain[chain_type]);
+}
+
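To make the single-allocation layout above concrete: the cells, their responses and their command buffers live in one contiguous DMA block, region after region, each region padded to its own alignment. A rough sketch with made-up sizes (the real values come from the API_CHAIN_* constants and alignment macros, not from these numbers):

    /* illustrative sizes only, not the driver's real constants */
    u64 cell_sz = 64, rsp_sz = 64, buf_sz = 512;
    u32 num_cells = 32, i = 5;

    u64 total = (cell_sz + rsp_sz + buf_sz) * num_cells; /* 20480 bytes */

    /* region bases, as computed for cell/rsp/buf_vaddr_base above */
    u64 rsp_base = cell_sz * num_cells;
    u64 buf_base = rsp_base + rsp_sz * num_cells;

    /* one plausible per-cell indexing inside each region (assumption:
     * every region is walked at its own fixed stride)
     */
    u64 cell_off = cell_sz * i;
    u64 rsp_off  = rsp_base + rsp_sz * i;
    u64 buf_off  = buf_base + buf_sz * i;
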
diff --git a/drivers/scsi/huawei/hifc/hifc_api_cmd.h b/drivers/scsi/huawei/hifc/hifc_api_cmd.h
new file mode 100644
index 000000000000..bd14db34a119
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_api_cmd.h
@@ -0,0 +1,268 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef HIFC_API_CMD_H_
+#define HIFC_API_CMD_H_
+
+#define HIFC_API_CMD_CELL_CTRL_CELL_LEN_SHIFT 0
+#define HIFC_API_CMD_CELL_CTRL_RD_DMA_ATTR_OFF_SHIFT 16
+#define HIFC_API_CMD_CELL_CTRL_WR_DMA_ATTR_OFF_SHIFT 24
+#define HIFC_API_CMD_CELL_CTRL_XOR_CHKSUM_SHIFT 56
+
+#define HIFC_API_CMD_CELL_CTRL_CELL_LEN_MASK 0x3FU
+#define HIFC_API_CMD_CELL_CTRL_RD_DMA_ATTR_OFF_MASK 0x3FU
+#define HIFC_API_CMD_CELL_CTRL_WR_DMA_ATTR_OFF_MASK 0x3FU
+#define HIFC_API_CMD_CELL_CTRL_XOR_CHKSUM_MASK 0xFFU
+
+#define HIFC_API_CMD_CELL_CTRL_SET(val, member) \
+ ((((u64)val) & HIFC_API_CMD_CELL_CTRL_##member##_MASK) << \
+ HIFC_API_CMD_CELL_CTRL_##member##_SHIFT)
+
+#define HIFC_API_CMD_DESC_API_TYPE_SHIFT 0
+#define HIFC_API_CMD_DESC_RD_WR_SHIFT 1
+#define HIFC_API_CMD_DESC_MGMT_BYPASS_SHIFT 2
+#define HIFC_API_CMD_DESC_RESP_AEQE_EN_SHIFT 3
+#define HIFC_API_CMD_DESC_PRIV_DATA_SHIFT 8
+#define HIFC_API_CMD_DESC_DEST_SHIFT 32
+#define HIFC_API_CMD_DESC_SIZE_SHIFT 40
+#define HIFC_API_CMD_DESC_XOR_CHKSUM_SHIFT 56
+
+#define HIFC_API_CMD_DESC_API_TYPE_MASK 0x1U
+#define HIFC_API_CMD_DESC_RD_WR_MASK 0x1U
+#define HIFC_API_CMD_DESC_MGMT_BYPASS_MASK 0x1U
+#define HIFC_API_CMD_DESC_RESP_AEQE_EN_MASK 0x1U
+#define HIFC_API_CMD_DESC_DEST_MASK 0x1FU
+#define HIFC_API_CMD_DESC_SIZE_MASK 0x7FFU
+#define HIFC_API_CMD_DESC_XOR_CHKSUM_MASK 0xFFU
+#define HIFC_API_CMD_DESC_PRIV_DATA_MASK 0xFFFFFFU
+
+#define HIFC_API_CMD_DESC_SET(val, member) \
+ ((((u64)val) & HIFC_API_CMD_DESC_##member##_MASK) << \
+ HIFC_API_CMD_DESC_##member##_SHIFT)
+#define HIFC_API_CMD_STATUS_HEADER_VALID_SHIFT 0
+#define HIFC_API_CMD_STATUS_HEADER_CHAIN_ID_SHIFT 16
+
+#define HIFC_API_CMD_STATUS_HEADER_VALID_MASK 0xFFU
+#define HIFC_API_CMD_STATUS_HEADER_CHAIN_ID_MASK 0xFFU
+#define HIFC_API_CMD_STATUS_HEADER_GET(val, member) \
+ (((val) >> HIFC_API_CMD_STATUS_HEADER_##member##_SHIFT) & \
+ HIFC_API_CMD_STATUS_HEADER_##member##_MASK)
+#define HIFC_API_CMD_CHAIN_REQ_RESTART_SHIFT 1
+#define HIFC_API_CMD_CHAIN_REQ_RESTART_MASK 0x1U
+#define HIFC_API_CMD_CHAIN_REQ_WB_TRIGGER_MASK 0x1U
+#define HIFC_API_CMD_CHAIN_REQ_SET(val, member) \
+ (((val) & HIFC_API_CMD_CHAIN_REQ_##member##_MASK) << \
+ HIFC_API_CMD_CHAIN_REQ_##member##_SHIFT)
+
+#define HIFC_API_CMD_CHAIN_REQ_GET(val, member) \
+ (((val) >> HIFC_API_CMD_CHAIN_REQ_##member##_SHIFT) & \
+ HIFC_API_CMD_CHAIN_REQ_##member##_MASK)
+
+#define HIFC_API_CMD_CHAIN_REQ_CLEAR(val, member) \
+ ((val) & (~(HIFC_API_CMD_CHAIN_REQ_##member##_MASK \
+ << HIFC_API_CMD_CHAIN_REQ_##member##_SHIFT)))
+
+#define HIFC_API_CMD_CHAIN_CTRL_RESTART_EN_SHIFT 1
+#define HIFC_API_CMD_CHAIN_CTRL_XOR_ERR_SHIFT 2
+#define HIFC_API_CMD_CHAIN_CTRL_AEQE_EN_SHIFT 4
+#define HIFC_API_CMD_CHAIN_CTRL_AEQ_ID_SHIFT 8
+#define HIFC_API_CMD_CHAIN_CTRL_XOR_CHK_EN_SHIFT 28
+#define HIFC_API_CMD_CHAIN_CTRL_CELL_SIZE_SHIFT 30
+
+#define HIFC_API_CMD_CHAIN_CTRL_RESTART_EN_MASK 0x1U
+#define HIFC_API_CMD_CHAIN_CTRL_XOR_ERR_MASK 0x1U
+#define HIFC_API_CMD_CHAIN_CTRL_AEQE_EN_MASK 0x1U
+#define HIFC_API_CMD_CHAIN_CTRL_AEQ_ID_MASK 0x3U
+#define HIFC_API_CMD_CHAIN_CTRL_XOR_CHK_EN_MASK 0x3U
+#define HIFC_API_CMD_CHAIN_CTRL_CELL_SIZE_MASK 0x3U
+
+#define HIFC_API_CMD_CHAIN_CTRL_SET(val, member) \
+ (((val) & HIFC_API_CMD_CHAIN_CTRL_##member##_MASK) << \
+ HIFC_API_CMD_CHAIN_CTRL_##member##_SHIFT)
+
+#define HIFC_API_CMD_CHAIN_CTRL_CLEAR(val, member) \
+ ((val) & (~(HIFC_API_CMD_CHAIN_CTRL_##member##_MASK \
+ << HIFC_API_CMD_CHAIN_CTRL_##member##_SHIFT)))
+
+#define HIFC_API_CMD_RESP_HEAD_VALID_MASK 0xFF
+#define HIFC_API_CMD_RESP_HEAD_VALID_CODE 0xFF
+
+#define HIFC_API_CMD_RESP_HEADER_VALID(val) \
+ (((val) & HIFC_API_CMD_RESP_HEAD_VALID_MASK) == \
+ HIFC_API_CMD_RESP_HEAD_VALID_CODE)
+#define HIFC_API_CMD_STATUS_CONS_IDX_MASK 0xFFFFFFU
+#define HIFC_API_CMD_STATUS_CONS_IDX_SHIFT 0
+#define HIFC_API_CMD_STATUS_FSM_MASK 0xFU
+#define HIFC_API_CMD_STATUS_FSM_SHIFT 24
+#define HIFC_API_CMD_STATUS_CHKSUM_ERR_MASK 0x3U
+#define HIFC_API_CMD_STATUS_CHKSUM_ERR_SHIFT 28
+#define HIFC_API_CMD_STATUS_CPLD_ERR_MASK 0x1U
+#define HIFC_API_CMD_STATUS_CPLD_ERR_SHIFT 30
+
+#define HIFC_API_CMD_STATUS_GET(val, member) \
+ (((val) >> HIFC_API_CMD_STATUS_##member##_SHIFT) & \
+ HIFC_API_CMD_STATUS_##member##_MASK)
+
+/* API CMD registers */
+#define HIFC_CSR_API_CMD_BASE 0xF000
+
+#define HIFC_CSR_API_CMD_STRIDE 0x100
+
+#define HIFC_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(idx) \
+ (HIFC_CSR_API_CMD_BASE + 0x0 + (idx) * HIFC_CSR_API_CMD_STRIDE)
+
+#define HIFC_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(idx) \
+ (HIFC_CSR_API_CMD_BASE + 0x4 + (idx) * HIFC_CSR_API_CMD_STRIDE)
+
+#define HIFC_CSR_API_CMD_STATUS_HI_ADDR(idx) \
+ (HIFC_CSR_API_CMD_BASE + 0x8 + (idx) * HIFC_CSR_API_CMD_STRIDE)
+
+#define HIFC_CSR_API_CMD_STATUS_LO_ADDR(idx) \
+ (HIFC_CSR_API_CMD_BASE + 0xC + (idx) * HIFC_CSR_API_CMD_STRIDE)
+
+#define HIFC_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(idx) \
+ (HIFC_CSR_API_CMD_BASE + 0x10 + (idx) * HIFC_CSR_API_CMD_STRIDE)
+
+#define HIFC_CSR_API_CMD_CHAIN_CTRL_ADDR(idx) \
+ (HIFC_CSR_API_CMD_BASE + 0x14 + (idx) * HIFC_CSR_API_CMD_STRIDE)
+
+#define HIFC_CSR_API_CMD_CHAIN_PI_ADDR(idx) \
+ (HIFC_CSR_API_CMD_BASE + 0x1C + (idx) * HIFC_CSR_API_CMD_STRIDE)
+
+#define HIFC_CSR_API_CMD_CHAIN_REQ_ADDR(idx) \
+ (HIFC_CSR_API_CMD_BASE + 0x20 + (idx) * HIFC_CSR_API_CMD_STRIDE)
+
+#define HIFC_CSR_API_CMD_STATUS_0_ADDR(idx) \
+ (HIFC_CSR_API_CMD_BASE + 0x30 + (idx) * HIFC_CSR_API_CMD_STRIDE)
+
+enum hifc_api_cmd_chain_type {
+ /* write command with completion notification */
+ HIFC_API_CMD_WRITE = 0,
+ /* read command with completion notification */
+ HIFC_API_CMD_READ = 1,
+ /* write to mgmt cpu command with completion */
+ HIFC_API_CMD_WRITE_TO_MGMT_CPU = 2,
+ /* multi read command with completion notification - not used */
+ HIFC_API_CMD_MULTI_READ = 3,
+ /* write command without completion notification */
+ HIFC_API_CMD_POLL_WRITE = 4,
+ /* read command without completion notification */
+ HIFC_API_CMD_POLL_READ = 5,
+	/* async write to mgmt cpu command */
+ HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU = 6,
+ HIFC_API_CMD_MAX,
+};
+
+struct hifc_api_cmd_status {
+ u64 header;
+ u32 buf_desc;
+ u32 cell_addr_hi;
+ u32 cell_addr_lo;
+ u32 rsvd0;
+ u64 rsvd1;
+};
+
+/* HW struct */
+struct hifc_api_cmd_cell {
+ u64 ctrl;
+
+ /* address is 64 bit in HW struct */
+ u64 next_cell_paddr;
+
+ u64 desc;
+
+ /* HW struct */
+ union {
+ struct {
+ u64 hw_cmd_paddr;
+ } write;
+
+ struct {
+ u64 hw_wb_resp_paddr;
+ u64 hw_cmd_paddr;
+ } read;
+ };
+};
+
+struct hifc_api_cmd_resp_fmt {
+ u64 header;
+ u64 rsvd[3];
+ u64 resp_data;
+};
+
+struct hifc_api_cmd_cell_ctxt {
+ struct hifc_api_cmd_cell *cell_vaddr;
+
+ void *api_cmd_vaddr;
+
+ struct hifc_api_cmd_resp_fmt *resp;
+
+ struct completion done;
+ int status;
+
+ u32 saved_prod_idx;
+};
+
+struct hifc_api_cmd_chain_attr {
+ struct hifc_hwdev *hwdev;
+ enum hifc_api_cmd_chain_type chain_type;
+
+ u32 num_cells;
+ u16 rsp_size;
+ u16 cell_size;
+};
+
+struct hifc_api_cmd_chain {
+ struct hifc_hwdev *hwdev;
+ enum hifc_api_cmd_chain_type chain_type;
+
+ u32 num_cells;
+ u16 cell_size;
+ u16 rsp_size;
+
+	/* HW members are in 24-bit format */
+ u32 prod_idx;
+ u32 cons_idx;
+
+ struct semaphore sem;
+	/* Async cmd can not be scheduled */
+ spinlock_t async_lock;
+
+ dma_addr_t wb_status_paddr;
+ struct hifc_api_cmd_status *wb_status;
+
+ dma_addr_t head_cell_paddr;
+ struct hifc_api_cmd_cell *head_node;
+
+ struct hifc_api_cmd_cell_ctxt *cell_ctxt;
+ struct hifc_api_cmd_cell *curr_node;
+
+ struct hifc_dma_addr_align cells_addr;
+
+ u8 *cell_vaddr_base;
+ u64 cell_paddr_base;
+ u8 *rsp_vaddr_base;
+ u64 rsp_paddr_base;
+ u8 *buf_vaddr_base;
+ u64 buf_paddr_base;
+ u64 cell_size_align;
+ u64 rsp_size_align;
+ u64 buf_size_align;
+};
+
+int hifc_api_cmd_write(struct hifc_api_cmd_chain *chain,
+ enum hifc_node_id dest, void *cmd, u16 size);
+
+int hifc_api_cmd_read(struct hifc_api_cmd_chain *chain,
+ enum hifc_node_id dest, void *cmd, u16 size,
+ void *ack, u16 ack_size);
+
+int hifc_api_cmd_init(struct hifc_hwdev *hwdev,
+ struct hifc_api_cmd_chain **chain);
+
+void hifc_api_cmd_free(struct hifc_api_cmd_chain **chain);
+
+#endif
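A quick note on the SET/GET helper pattern defined above: every field is packed into a 64-bit word by masking the value and shifting it to its bit offset, and unpacked by reversing the two steps. A minimal sketch using the descriptor macros (the destination id and payload size are illustrative locals, not values required by the hardware):

    u8 dest = 0;        /* destination node id, example value */
    u16 cmd_size = 64;  /* payload size in bytes, example value */
    u64 desc;

    /* pack: each SET masks the value and shifts it into place */
    desc = HIFC_API_CMD_DESC_SET(HIFC_API_CMD_WRITE, API_TYPE) |
           HIFC_API_CMD_DESC_SET(dest, DEST) |
           HIFC_API_CMD_DESC_SET(cmd_size, SIZE);

    /* unpack: shift back down and re-apply the field mask */
    dest = (u8)((desc >> HIFC_API_CMD_DESC_DEST_SHIFT) &
                HIFC_API_CMD_DESC_DEST_MASK);
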
diff --git a/drivers/scsi/huawei/hifc/hifc_cfg.c b/drivers/scsi/huawei/hifc/hifc_cfg.c
new file mode 100644
index 000000000000..5ebe5d754c41
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_cfg.c
@@ -0,0 +1,823 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/mutex.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/completion.h>
+#include <linux/semaphore.h>
+#include <linux/vmalloc.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwdev.h"
+#include "hifc_hwif.h"
+#include "hifc_cqm_main.h"
+#include "hifc_api_cmd.h"
+#include "hifc_hw.h"
+#include "hifc_mgmt.h"
+#include "hifc_cfg.h"
+
+uint intr_mode;
+
+int hifc_sync_time(void *hwdev, u64 time)
+{
+ struct hifc_sync_time_info time_info = {0};
+ u16 out_size = sizeof(time_info);
+ int err;
+
+ time_info.mstime = time;
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_SYNC_TIME, &time_info,
+ sizeof(time_info), &time_info, &out_size,
+ 0);
+ if (err || time_info.status || !out_size) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Failed to sync time to mgmt, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, time_info.status, out_size);
+ }
+
+ return err;
+}
+
+static void parse_pub_res_cap(struct service_cap *cap,
+ struct hifc_dev_cap *dev_cap,
+ enum func_type type)
+{
+ cap->port_id = dev_cap->port_id;
+ cap->force_up = dev_cap->force_up;
+
+ pr_info("Get public resource capbility, force_up: 0x%x\n",
+ cap->force_up);
+	/* FC needs the max queue number, but that info is carried in the
+	 * l2nic cap; it is duplicated in the public cap so FC can read the
+	 * correct max queue number here.
+	 */
+ cap->max_sqs = dev_cap->nic_max_sq + 1;
+ cap->max_rqs = dev_cap->nic_max_rq + 1;
+
+ cap->host_total_function = dev_cap->host_total_func;
+ cap->host_oq_id_mask_val = dev_cap->host_oq_id_mask_val;
+ cap->max_connect_num = dev_cap->max_conn_num;
+ cap->max_stick2cache_num = dev_cap->max_stick2cache_num;
+
+ pr_info("Get public resource capbility, svc_cap_en: 0x%x\n",
+ dev_cap->svc_cap_en);
+ pr_info("port_id=0x%x\n", cap->port_id);
+ pr_info("Host_total_function=0x%x, host_oq_id_mask_val=0x%x\n",
+ cap->host_total_function, cap->host_oq_id_mask_val);
+}
+
+static void parse_fc_res_cap(struct service_cap *cap,
+ struct hifc_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_fc_svc_cap *fc_cap = &cap->fc_cap.dev_fc_cap;
+
+ fc_cap->max_parent_qpc_num = dev_cap->fc_max_pctx;
+ fc_cap->scq_num = dev_cap->fc_max_scq;
+ fc_cap->srq_num = dev_cap->fc_max_srq;
+ fc_cap->max_child_qpc_num = dev_cap->fc_max_cctx;
+ fc_cap->vp_id_start = dev_cap->fc_vp_id_start;
+ fc_cap->vp_id_end = dev_cap->fc_vp_id_end;
+
+ pr_info("Get fc resource capbility\n");
+ pr_info("Max_parent_qpc_num=0x%x, scq_num=0x%x, srq_num=0x%x, max_child_qpc_num=0x%x\n",
+ fc_cap->max_parent_qpc_num, fc_cap->scq_num, fc_cap->srq_num,
+ fc_cap->max_child_qpc_num);
+ pr_info("Vp_id_start=0x%x, vp_id_end=0x%x\n",
+ fc_cap->vp_id_start, fc_cap->vp_id_end);
+}
+
+static void parse_dev_cap(struct hifc_hwdev *dev,
+ struct hifc_dev_cap *dev_cap, enum func_type type)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+
+ /* Public resource */
+ parse_pub_res_cap(cap, dev_cap, type);
+
+ /* PPF managed dynamic resource */
+
+ parse_fc_res_cap(cap, dev_cap, type);
+}
+
+static int get_cap_from_fw(struct hifc_hwdev *dev, enum func_type type)
+{
+ struct hifc_dev_cap dev_cap = {0};
+ u16 out_len = sizeof(dev_cap);
+ int err;
+
+ dev_cap.version = HIFC_CMD_VER_FUNC_ID;
+ err = hifc_global_func_id_get(dev, &dev_cap.func_id);
+ if (err)
+ return err;
+
+ sdk_info(dev->dev_hdl, "Get cap from fw, func_idx: %d\n",
+ dev_cap.func_id);
+
+ err = hifc_msg_to_mgmt_sync(dev, HIFC_MOD_CFGM, HIFC_CFG_NIC_CAP,
+ &dev_cap, sizeof(dev_cap),
+ &dev_cap, &out_len, 0);
+ if (err || dev_cap.status || !out_len) {
+ sdk_err(dev->dev_hdl,
+ "Failed to get capability from FW, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, dev_cap.status, out_len);
+ return -EFAULT;
+ }
+
+ parse_dev_cap(dev, &dev_cap, type);
+ return 0;
+}
+
+static void fc_param_fix(struct hifc_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct fc_service_cap *fc_cap = &cap->fc_cap;
+
+ fc_cap->parent_qpc_size = FC_PCTX_SZ;
+ fc_cap->child_qpc_size = FC_CCTX_SZ;
+ fc_cap->sqe_size = FC_SQE_SZ;
+
+ fc_cap->scqc_size = FC_SCQC_SZ;
+ fc_cap->scqe_size = FC_SCQE_SZ;
+
+ fc_cap->srqc_size = FC_SRQC_SZ;
+ fc_cap->srqe_size = FC_SRQE_SZ;
+}
+
+static void cfg_get_eq_num(struct hifc_hwdev *dev)
+{
+ struct cfg_eq_info *eq_info = &dev->cfg_mgmt->eq_info;
+
+ eq_info->num_ceq = dev->hwif->attr.num_ceqs;
+ eq_info->num_ceq_remain = eq_info->num_ceq;
+}
+
+static int cfg_init_eq(struct hifc_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ struct cfg_eq *eq;
+ u8 num_ceq, i = 0;
+
+ cfg_get_eq_num(dev);
+ num_ceq = cfg_mgmt->eq_info.num_ceq;
+
+ sdk_info(dev->dev_hdl, "Cfg mgmt: ceqs=0x%x, remain=0x%x\n",
+ cfg_mgmt->eq_info.num_ceq, cfg_mgmt->eq_info.num_ceq_remain);
+
+ if (!num_ceq) {
+ sdk_err(dev->dev_hdl, "Ceq num cfg in fw is zero\n");
+ return -EFAULT;
+ }
+ eq = kcalloc(num_ceq, sizeof(*eq), GFP_KERNEL);
+ if (!eq)
+ return -ENOMEM;
+
+ for (i = 0; i < num_ceq; ++i) {
+ eq[i].eqn = i;
+ eq[i].free = CFG_FREE;
+ eq[i].type = SERVICE_T_MAX;
+ }
+
+ cfg_mgmt->eq_info.eq = eq;
+
+ mutex_init(&cfg_mgmt->eq_info.eq_mutex);
+
+ return 0;
+}
+
+static int cfg_init_interrupt(struct hifc_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ struct cfg_irq_info *irq_info = &cfg_mgmt->irq_param_info;
+ u16 intr_num = dev->hwif->attr.num_irqs;
+
+ if (!intr_num) {
+ sdk_err(dev->dev_hdl, "Irq num cfg in fw is zero\n");
+ return -EFAULT;
+ }
+ irq_info->alloc_info = kcalloc(intr_num, sizeof(*irq_info->alloc_info),
+ GFP_KERNEL);
+ if (!irq_info->alloc_info)
+ return -ENOMEM;
+
+ irq_info->num_irq_hw = intr_num;
+
+ cfg_mgmt->svc_cap.interrupt_type = intr_mode;
+
+ mutex_init(&irq_info->irq_mutex);
+
+ return 0;
+}
+
+static int cfg_enable_interrupt(struct hifc_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ u16 nreq = cfg_mgmt->irq_param_info.num_irq_hw;
+
+ void *pcidev = dev->pcidev_hdl;
+ struct irq_alloc_info_st *irq_info;
+ struct msix_entry *entry;
+ u16 i = 0;
+ int actual_irq;
+
+ irq_info = cfg_mgmt->irq_param_info.alloc_info;
+
+ sdk_info(dev->dev_hdl, "Interrupt type: %d, irq num: %d.\n",
+ cfg_mgmt->svc_cap.interrupt_type, nreq);
+
+ switch (cfg_mgmt->svc_cap.interrupt_type) {
+ case INTR_TYPE_MSIX:
+
+ if (!nreq) {
+ sdk_err(dev->dev_hdl, "Interrupt number cannot be zero\n");
+ return -EINVAL;
+ }
+ entry = kcalloc(nreq, sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+
+ for (i = 0; i < nreq; i++)
+ entry[i].entry = i;
+
+ actual_irq = pci_enable_msix_range(pcidev, entry,
+ VECTOR_THRESHOLD, nreq);
+ if (actual_irq < 0) {
+ sdk_err(dev->dev_hdl, "Alloc msix entries with threshold 2 failed.\n");
+ kfree(entry);
+ return -ENOMEM;
+ }
+
+ nreq = (u16)actual_irq;
+ cfg_mgmt->irq_param_info.num_total = nreq;
+ cfg_mgmt->irq_param_info.num_irq_remain = nreq;
+ sdk_info(dev->dev_hdl, "Request %d msix vector success.\n",
+ nreq);
+
+ for (i = 0; i < nreq; ++i) {
+ /* u16 driver uses to specify entry, OS writes */
+ irq_info[i].info.msix_entry_idx = entry[i].entry;
+ /* u32 kernel uses to write allocated vector */
+ irq_info[i].info.irq_id = entry[i].vector;
+ irq_info[i].type = SERVICE_T_MAX;
+ irq_info[i].free = CFG_FREE;
+ }
+
+ kfree(entry);
+
+ break;
+
+ default:
+ sdk_err(dev->dev_hdl, "Unsupport interrupt type %d\n",
+ cfg_mgmt->svc_cap.interrupt_type);
+ break;
+ }
+
+ return 0;
+}
+
+int hifc_alloc_irqs(void *hwdev, enum hifc_service_type type, u16 num,
+ struct irq_info *irq_info_array, u16 *act_num)
+{
+ struct hifc_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt;
+ struct cfg_irq_info *irq_info;
+ struct irq_alloc_info_st *alloc_info;
+ int max_num_irq;
+ u16 free_num_irq;
+ int i, j;
+
+ if (!hwdev || !irq_info_array || !act_num)
+ return -EINVAL;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ irq_info = &cfg_mgmt->irq_param_info;
+ alloc_info = irq_info->alloc_info;
+ max_num_irq = irq_info->num_total;
+ free_num_irq = irq_info->num_irq_remain;
+
+ mutex_lock(&irq_info->irq_mutex);
+
+ if (num > free_num_irq) {
+ if (free_num_irq == 0) {
+ sdk_err(dev->dev_hdl,
+ "no free irq resource in cfg mgmt.\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return -ENOMEM;
+ }
+
+ sdk_warn(dev->dev_hdl, "only %d irq resource in cfg mgmt.\n",
+ free_num_irq);
+ num = free_num_irq;
+ }
+
+ *act_num = 0;
+
+ for (i = 0; i < num; i++) {
+ for (j = 0; j < max_num_irq; j++) {
+ if (alloc_info[j].free == CFG_FREE) {
+ if (irq_info->num_irq_remain == 0) {
+ sdk_err(dev->dev_hdl, "No free irq resource in cfg mgmt\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return -EINVAL;
+ }
+ alloc_info[j].type = type;
+ alloc_info[j].free = CFG_BUSY;
+
+ irq_info_array[i].msix_entry_idx =
+ alloc_info[j].info.msix_entry_idx;
+ irq_info_array[i].irq_id =
+ alloc_info[j].info.irq_id;
+ (*act_num)++;
+ irq_info->num_irq_remain--;
+
+ break;
+ }
+ }
+ }
+
+ mutex_unlock(&irq_info->irq_mutex);
+ return 0;
+}
+
+void hifc_free_irq(void *hwdev, enum hifc_service_type type, u32 irq_id)
+{
+ struct hifc_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt;
+ struct cfg_irq_info *irq_info;
+ struct irq_alloc_info_st *alloc_info;
+ int max_num_irq;
+ int i;
+
+ if (!hwdev)
+ return;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ irq_info = &cfg_mgmt->irq_param_info;
+ alloc_info = irq_info->alloc_info;
+ max_num_irq = irq_info->num_total;
+
+ mutex_lock(&irq_info->irq_mutex);
+
+ for (i = 0; i < max_num_irq; i++) {
+ if (irq_id == alloc_info[i].info.irq_id &&
+ type == alloc_info[i].type) {
+ if (alloc_info[i].free == CFG_BUSY) {
+ alloc_info[i].free = CFG_FREE;
+ irq_info->num_irq_remain++;
+ if (irq_info->num_irq_remain > max_num_irq) {
+ sdk_err(dev->dev_hdl, "Find target,but over range\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return;
+ }
+ break;
+ }
+ }
+ }
+
+ if (i >= max_num_irq)
+ sdk_warn(dev->dev_hdl, "Irq %d don`t need to free\n", irq_id);
+
+ mutex_unlock(&irq_info->irq_mutex);
+}
+
+int init_cfg_mgmt(struct hifc_hwdev *dev)
+{
+ int err;
+ struct cfg_mgmt_info *cfg_mgmt;
+
+ cfg_mgmt = kzalloc(sizeof(*cfg_mgmt), GFP_KERNEL);
+ if (!cfg_mgmt)
+ return -ENOMEM;
+
+ dev->cfg_mgmt = cfg_mgmt;
+ cfg_mgmt->hwdev = dev;
+
+ err = cfg_init_eq(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to init cfg event queue, err: %d\n",
+ err);
+ goto free_mgmt_mem;
+ }
+
+ err = cfg_init_interrupt(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to init cfg interrupt, err: %d\n",
+ err);
+ goto free_eq_mem;
+ }
+
+ err = cfg_enable_interrupt(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to enable cfg interrupt, err: %d\n",
+ err);
+ goto free_interrupt_mem;
+ }
+
+ return 0;
+
+free_interrupt_mem:
+ kfree(cfg_mgmt->irq_param_info.alloc_info);
+
+ cfg_mgmt->irq_param_info.alloc_info = NULL;
+
+free_eq_mem:
+ kfree(cfg_mgmt->eq_info.eq);
+
+ cfg_mgmt->eq_info.eq = NULL;
+
+free_mgmt_mem:
+ kfree(cfg_mgmt);
+ return err;
+}
+
+void free_cfg_mgmt(struct hifc_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+
+	/* check whether all allocated resources have been recycled */
+ if (cfg_mgmt->irq_param_info.num_irq_remain !=
+ cfg_mgmt->irq_param_info.num_total ||
+ cfg_mgmt->eq_info.num_ceq_remain != cfg_mgmt->eq_info.num_ceq)
+ sdk_err(dev->dev_hdl, "Can't reclaim all irq and event queue, please check\n");
+
+ switch (cfg_mgmt->svc_cap.interrupt_type) {
+ case INTR_TYPE_MSIX:
+ pci_disable_msix(dev->pcidev_hdl);
+ break;
+
+ case INTR_TYPE_MSI:
+ pci_disable_msi(dev->pcidev_hdl);
+ break;
+
+ case INTR_TYPE_INT:
+ default:
+ break;
+ }
+
+ kfree(cfg_mgmt->irq_param_info.alloc_info);
+ cfg_mgmt->irq_param_info.alloc_info = NULL;
+
+ kfree(cfg_mgmt->eq_info.eq);
+ cfg_mgmt->eq_info.eq = NULL;
+
+ kfree(cfg_mgmt);
+}
+
+int init_capability(struct hifc_hwdev *dev)
+{
+ int err;
+ enum func_type type = HIFC_FUNC_TYPE(dev);
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+
+ cfg_mgmt->svc_cap.timer_en = 1;
+ cfg_mgmt->svc_cap.test_xid_alloc_mode = 1;
+ cfg_mgmt->svc_cap.test_gpa_check_enable = 1;
+
+ err = get_cap_from_fw(dev, type);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to get PF/PPF capability\n");
+ return err;
+ }
+
+ fc_param_fix(dev);
+
+ if (dev->cfg_mgmt->svc_cap.force_up)
+ dev->feature_cap |= HIFC_FUNC_FORCE_LINK_UP;
+
+ sdk_info(dev->dev_hdl, "Init capability success\n");
+ return 0;
+}
+
+void free_capability(struct hifc_hwdev *dev)
+{
+ sdk_info(dev->dev_hdl, "Free capability success");
+}
+
+bool hifc_support_fc(void *hwdev, struct fc_service_cap *cap)
+{
+ struct hifc_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.fc_cap, sizeof(*cap));
+
+ return true;
+}
+
+u8 hifc_host_oq_id_mask(void *hwdev)
+{
+ struct hifc_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host oq id mask\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_oq_id_mask_val;
+}
+
+u16 hifc_func_max_qnum(void *hwdev)
+{
+ struct hifc_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting function max queue number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.max_sqs;
+}
+
+/* Caller should ensure atomicity when calling this function */
+int hifc_stateful_init(void *hwdev)
+{
+ struct hifc_hwdev *dev = hwdev;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (dev->statufull_ref_cnt++)
+ return 0;
+
+ err = cqm_init(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to init cqm, err: %d\n", err);
+ goto init_cqm_err;
+ }
+
+ sdk_info(dev->dev_hdl, "Initialize statefull resource success\n");
+
+ return 0;
+
+init_cqm_err:
+
+ dev->statufull_ref_cnt--;
+
+ return err;
+}
+
+/* Caller should ensure atomicity when calling this function */
+void hifc_stateful_deinit(void *hwdev)
+{
+ struct hifc_hwdev *dev = hwdev;
+
+ if (!dev || !dev->statufull_ref_cnt)
+ return;
+
+ if (--dev->statufull_ref_cnt)
+ return;
+
+ cqm_uninit(hwdev);
+
+ sdk_info(dev->dev_hdl, "Clear statefull resource success\n");
+}
+
+bool hifc_is_hwdev_mod_inited(void *hwdev, enum hifc_hwdev_init_state state)
+{
+ struct hifc_hwdev *dev = hwdev;
+
+ if (!hwdev || state >= HIFC_HWDEV_MAX_INVAL_INITED)
+ return false;
+
+ return !!test_bit(state, &dev->func_state);
+}
+
+static int hifc_os_dep_init(struct hifc_hwdev *hwdev)
+{
+ hwdev->workq = create_singlethread_workqueue(HIFC_HW_WQ_NAME);
+ if (!hwdev->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize hardware workqueue\n");
+ return -EFAULT;
+ }
+
+ sema_init(&hwdev->fault_list_sem, 1);
+
+ return 0;
+}
+
+static void hifc_os_dep_deinit(struct hifc_hwdev *hwdev)
+{
+ destroy_workqueue(hwdev->workq);
+}
+
+static int __hilink_phy_init(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ err = hifc_phy_init_status_judge(hwdev);
+ if (err) {
+ sdk_info(hwdev->dev_hdl, "Phy init failed\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static int init_hwdev_and_hwif(struct hifc_init_para *para)
+{
+ struct hifc_hwdev *hwdev;
+ int err;
+
+ if (!(*para->hwdev)) {
+ hwdev = kzalloc(sizeof(*hwdev), GFP_KERNEL);
+ if (!hwdev)
+ return -ENOMEM;
+
+ *para->hwdev = hwdev;
+ hwdev->adapter_hdl = para->adapter_hdl;
+ hwdev->pcidev_hdl = para->pcidev_hdl;
+ hwdev->dev_hdl = para->dev_hdl;
+ hwdev->chip_node = para->chip_node;
+
+ hwdev->chip_fault_stats = vzalloc(HIFC_CHIP_FAULT_SIZE);
+ if (!hwdev->chip_fault_stats)
+ goto alloc_chip_fault_stats_err;
+
+ err = hifc_init_hwif(hwdev, para->cfg_reg_base,
+ para->intr_reg_base,
+ para->db_base_phy, para->db_base,
+ para->dwqe_mapping);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init hwif\n");
+ goto init_hwif_err;
+ }
+ }
+
+ return 0;
+
+init_hwif_err:
+ vfree(hwdev->chip_fault_stats);
+
+alloc_chip_fault_stats_err:
+
+ *para->hwdev = NULL;
+
+ return -EFAULT;
+}
+
+static void deinit_hwdev_and_hwif(struct hifc_hwdev *hwdev)
+{
+ hifc_free_hwif(hwdev);
+
+ vfree(hwdev->chip_fault_stats);
+
+ kfree(hwdev);
+}
+
+static int init_hw_cfg(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ err = init_capability(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init capability\n");
+ return err;
+ }
+
+ err = __hilink_phy_init(hwdev);
+ if (err)
+ goto hilink_phy_init_err;
+
+ return 0;
+
+hilink_phy_init_err:
+ free_capability(hwdev);
+
+ return err;
+}
+
+/* Return:
+ * 0: all success
+ * >0: partial success
+ * <0: all failed
+ */
+int hifc_init_hwdev(struct hifc_init_para *para)
+{
+ struct hifc_hwdev *hwdev;
+ int err;
+
+ err = init_hwdev_and_hwif(para);
+ if (err)
+ return err;
+
+ hwdev = *para->hwdev;
+
+ /* detect slave host according to BAR reg */
+ hwdev->feature_cap = HIFC_FUNC_MGMT | HIFC_FUNC_PORT |
+ HIFC_FUNC_SUPP_RATE_LIMIT | HIFC_FUNC_SUPP_DFX_REG |
+ HIFC_FUNC_SUPP_RX_MODE | HIFC_FUNC_SUPP_SET_VF_MAC_VLAN |
+ HIFC_FUNC_SUPP_CHANGE_MAC;
+
+ err = hifc_os_dep_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init os dependent\n");
+ goto os_dep_init_err;
+ }
+
+ hifc_set_chip_present(hwdev);
+ hifc_init_heartbeat(hwdev);
+
+ err = init_cfg_mgmt(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init config mgmt\n");
+ goto init_cfg_mgmt_err;
+ }
+
+ err = hifc_init_comm_ch(hwdev);
+ if (err) {
+ if (!(hwdev->func_state & HIFC_HWDEV_INIT_MODES_MASK)) {
+ sdk_err(hwdev->dev_hdl, "Failed to init communication channel\n");
+ goto init_comm_ch_err;
+ } else {
+ sdk_err(hwdev->dev_hdl, "Init communication channel partitail failed\n");
+ return hwdev->func_state & HIFC_HWDEV_INIT_MODES_MASK;
+ }
+ }
+
+ err = init_hw_cfg(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init hardware config\n");
+ goto init_hw_cfg_err;
+ }
+
+ set_bit(HIFC_HWDEV_ALL_INITED, &hwdev->func_state);
+
+ sdk_info(hwdev->dev_hdl, "Init hwdev success\n");
+
+ return 0;
+
+init_hw_cfg_err:
+ return (hwdev->func_state & HIFC_HWDEV_INIT_MODES_MASK);
+
+init_comm_ch_err:
+ free_cfg_mgmt(hwdev);
+
+init_cfg_mgmt_err:
+ hifc_destroy_heartbeat(hwdev);
+ hifc_os_dep_deinit(hwdev);
+
+os_dep_init_err:
+ deinit_hwdev_and_hwif(hwdev);
+ *para->hwdev = NULL;
+
+ return -EFAULT;
+}
+
+void hifc_free_hwdev(void *hwdev)
+{
+ struct hifc_hwdev *dev = hwdev;
+ enum hifc_hwdev_init_state state = HIFC_HWDEV_ALL_INITED;
+ int flag = 0;
+
+ if (!hwdev)
+ return;
+
+ if (test_bit(HIFC_HWDEV_ALL_INITED, &dev->func_state)) {
+ clear_bit(HIFC_HWDEV_ALL_INITED, &dev->func_state);
+
+		/* BM slave function does not need to exec rx_tx_flush */
+
+ hifc_func_rx_tx_flush(hwdev);
+
+ free_capability(dev);
+ }
+ while (state > HIFC_HWDEV_NONE_INITED) {
+ if (test_bit(state, &dev->func_state)) {
+ flag = 1;
+ break;
+ }
+ state--;
+ }
+ if (flag) {
+ hifc_uninit_comm_ch(dev);
+ free_cfg_mgmt(dev);
+ hifc_destroy_heartbeat(dev);
+ hifc_os_dep_deinit(dev);
+ }
+ clear_bit(HIFC_HWDEV_NONE_INITED, &dev->func_state);
+
+ deinit_hwdev_and_hwif(dev);
+}
+
+u64 hifc_get_func_feature_cap(void *hwdev)
+{
+ struct hifc_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting function feature capability\n");
+ return 0;
+ }
+
+ return dev->feature_cap;
+}
+
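For orientation, the exported IRQ helpers in this file follow a simple allocate/release pattern: hifc_alloc_irqs() hands out up to num free MSI-X entries and reports how many it actually found through act_num, and each entry is later returned with hifc_free_irq(). A minimal usage sketch (SERVICE_T_FC is assumed here to be one of the enum hifc_service_type values declared elsewhere in the driver):

    struct irq_info irqs[2];
    u16 act_num = 0;
    int err;

    err = hifc_alloc_irqs(hwdev, SERVICE_T_FC, 2, irqs, &act_num);
    if (err)
        return err;

    /* irqs[i].irq_id is the kernel vector to pass to request_irq(),
     * irqs[i].msix_entry_idx is the MSI-X table index for the HW
     */

    while (act_num--)
        hifc_free_irq(hwdev, SERVICE_T_FC, irqs[act_num].irq_id);
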
diff --git a/drivers/scsi/huawei/hifc/hifc_cfg.h b/drivers/scsi/huawei/hifc/hifc_cfg.h
new file mode 100644
index 000000000000..b8a9dd35b1fd
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_cfg.h
@@ -0,0 +1,171 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef __CFG_MGT_H__
+#define __CFG_MGT_H__
+
+enum {
+ CFG_FREE = 0,
+ CFG_BUSY = 1
+};
+
+/* FC */
+#define FC_PCTX_SZ 256
+#define FC_CCTX_SZ 256
+#define FC_SQE_SZ 128
+#define FC_SCQC_SZ 64
+#define FC_SCQE_SZ 64
+#define FC_SRQC_SZ 64
+#define FC_SRQE_SZ 32
+
+/* device capability */
+struct service_cap {
+ /* Host global resources */
+ u16 host_total_function;
+ u8 host_oq_id_mask_val;
+
+ /* DO NOT get interrupt_type from firmware */
+ enum intr_type interrupt_type;
+ u8 intr_chip_en;
+
+ u8 port_id; /* PF/VF's physical port */
+ u8 force_up;
+
+ u8 timer_en; /* 0:disable, 1:enable */
+
+ u16 max_sqs;
+ u16 max_rqs;
+
+ /* For test */
+ bool test_xid_alloc_mode;
+ bool test_gpa_check_enable;
+
+	u32 max_connect_num; /* PF/VF maximum connection number (1M) */
+	/* The maximum connections which can be stuck in cache memory, max 1K */
+ u16 max_stick2cache_num;
+
+ struct nic_service_cap nic_cap; /* NIC capability */
+ struct fc_service_cap fc_cap; /* FC capability */
+};
+
+struct hifc_sync_time_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u64 mstime;
+};
+
+struct cfg_eq {
+ enum hifc_service_type type;
+ int eqn;
+	int free; /* 1 - allocated, 0 - freed */
+};
+
+struct cfg_eq_info {
+ struct cfg_eq *eq;
+ u8 num_ceq;
+ u8 num_ceq_remain;
+	/* mutex used for allocating EQs */
+ struct mutex eq_mutex;
+};
+
+struct irq_alloc_info_st {
+ enum hifc_service_type type;
+	int free; /* 1 - allocated, 0 - freed */
+ struct irq_info info;
+};
+
+struct cfg_irq_info {
+ struct irq_alloc_info_st *alloc_info;
+ u16 num_total;
+ u16 num_irq_remain;
+ u16 num_irq_hw; /* device max irq number */
+
+	/* mutex used for allocating IRQs */
+ struct mutex irq_mutex;
+};
+
+#define VECTOR_THRESHOLD 2
+
+struct cfg_mgmt_info {
+ struct hifc_hwdev *hwdev;
+ struct service_cap svc_cap;
+ struct cfg_eq_info eq_info; /* EQ */
+ struct cfg_irq_info irq_param_info; /* IRQ */
+ u32 func_seq_num; /* temporary */
+};
+
+enum cfg_sub_cmd {
+ /* PPF(PF) <-> FW */
+ HIFC_CFG_NIC_CAP = 0,
+ CFG_FW_VERSION,
+ CFG_UCODE_VERSION,
+ HIFC_CFG_FUNC_CAP,
+ HIFC_CFG_MBOX_CAP = 6,
+};
+
+struct hifc_dev_cap {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ /* Public resource */
+ u8 sf_svc_attr;
+ u8 host_id;
+ u8 sf_en_pf;
+ u8 sf_en_vf;
+
+ u8 ep_id;
+ u8 intr_type;
+ u8 max_cos_id;
+ u8 er_id;
+ u8 port_id;
+ u8 max_vf;
+ u16 svc_cap_en;
+ u16 host_total_func;
+ u8 host_oq_id_mask_val;
+ u8 max_vf_cos_id;
+
+ u32 max_conn_num;
+ u16 max_stick2cache_num;
+ u16 max_bfilter_start_addr;
+ u16 bfilter_len;
+ u16 hash_bucket_num;
+ u8 cfg_file_ver;
+ u8 net_port_mode;
+	u8 valid_cos_bitmap; /* each bit indicates whether the cos is valid */
+ u8 force_up;
+ u32 pf_num;
+ u32 pf_id_start;
+ u32 vf_num;
+ u32 vf_id_start;
+
+ /* shared resource */
+ u32 host_pctx_num;
+ u8 host_sf_en;
+ u8 rsvd2[3];
+ u32 host_ccxt_num;
+ u32 host_scq_num;
+ u32 host_srq_num;
+ u32 host_mpt_num;
+ /* l2nic */
+ u16 nic_max_sq;
+ u16 nic_max_rq;
+ u32 rsvd[46];
+ /* FC */
+ u32 fc_max_pctx;
+ u32 fc_max_scq;
+ u32 fc_max_srq;
+
+ u32 fc_max_cctx;
+ u32 fc_cctx_id_start;
+
+ u8 fc_vp_id_start;
+ u8 fc_vp_id_end;
+ u16 func_id;
+};
+#endif
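The capability flow declared above ends in per-service queries such as hifc_support_fc() (implemented in hifc_cfg.c above). A short sketch of how a consumer might read the FC limits once init_capability() has run (struct fc_service_cap and its dev_fc_cap member are defined elsewhere in the driver; the fields below are the ones filled in by parse_fc_res_cap()):

    struct fc_service_cap fc_cap;

    if (!hifc_support_fc(hwdev, &fc_cap))
        return -ENODEV;

    pr_info("FC caps: parent qpc %u, child qpc %u, scq %u, srq %u\n",
            fc_cap.dev_fc_cap.max_parent_qpc_num,
            fc_cap.dev_fc_cap.max_child_qpc_num,
            fc_cap.dev_fc_cap.scq_num,
            fc_cap.dev_fc_cap.srq_num);
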
diff --git a/drivers/scsi/huawei/hifc/hifc_cmdq.c b/drivers/scsi/huawei/hifc/hifc_cmdq.c
new file mode 100644
index 000000000000..03531017c412
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_cmdq.c
@@ -0,0 +1,1507 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/errno.h>
+#include <linux/completion.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwdev.h"
+#include "hifc_hwif.h"
+#include "hifc_wq.h"
+#include "hifc_api_cmd.h"
+#include "hifc_mgmt.h"
+#include "hifc_eqs.h"
+#include "hifc_cmdq.h"
+
+#define CMDQ_CMD_TIMEOUT 1000 /* millisecond */
+#define UPPER_8_BITS(data) (((data) >> 8) & 0xFF)
+#define LOWER_8_BITS(data) ((data) & 0xFF)
+
+#define CMDQ_DB_INFO_HI_PROD_IDX_SHIFT 0
+#define CMDQ_DB_INFO_QUEUE_TYPE_SHIFT 23
+#define CMDQ_DB_INFO_CMDQ_TYPE_SHIFT 24
+#define CMDQ_DB_INFO_SRC_TYPE_SHIFT 27
+#define CMDQ_DB_INFO_HI_PROD_IDX_MASK 0xFFU
+#define CMDQ_DB_INFO_QUEUE_TYPE_MASK 0x1U
+#define CMDQ_DB_INFO_CMDQ_TYPE_MASK 0x7U
+#define CMDQ_DB_INFO_SRC_TYPE_MASK 0x1FU
+
+#define CMDQ_DB_INFO_SET(val, member) \
+ (((val) & CMDQ_DB_INFO_##member##_MASK) << \
+ CMDQ_DB_INFO_##member##_SHIFT)
+
+#define CMDQ_CTRL_PI_SHIFT 0
+#define CMDQ_CTRL_CMD_SHIFT 16
+#define CMDQ_CTRL_MOD_SHIFT 24
+#define CMDQ_CTRL_ACK_TYPE_SHIFT 29
+#define CMDQ_CTRL_HW_BUSY_BIT_SHIFT 31
+#define CMDQ_CTRL_PI_MASK 0xFFFFU
+#define CMDQ_CTRL_CMD_MASK 0xFFU
+#define CMDQ_CTRL_MOD_MASK 0x1FU
+#define CMDQ_CTRL_ACK_TYPE_MASK 0x3U
+#define CMDQ_CTRL_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_CTRL_SET(val, member) \
+ (((val) & CMDQ_CTRL_##member##_MASK) \
+ << CMDQ_CTRL_##member##_SHIFT)
+
+#define CMDQ_CTRL_GET(val, member) \
+ (((val) >> CMDQ_CTRL_##member##_SHIFT) \
+ & CMDQ_CTRL_##member##_MASK)
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_SHIFT 0
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_SHIFT 15
+#define CMDQ_WQE_HEADER_DATA_FMT_SHIFT 22
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_SHIFT 23
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_SHIFT 27
+#define CMDQ_WQE_HEADER_CTRL_LEN_SHIFT 29
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_MASK 0xFFU
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_DATA_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_CTRL_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_WQE_HEADER_SET(val, member) \
+ (((val) & CMDQ_WQE_HEADER_##member##_MASK) \
+ << CMDQ_WQE_HEADER_##member##_SHIFT)
+
+#define CMDQ_WQE_HEADER_GET(val, member) \
+ (((val) >> CMDQ_WQE_HEADER_##member##_SHIFT) \
+ & CMDQ_WQE_HEADER_##member##_MASK)
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_SHIFT 0
+#define CMDQ_CTXT_EQ_ID_SHIFT 56
+#define CMDQ_CTXT_CEQ_ARM_SHIFT 61
+#define CMDQ_CTXT_CEQ_EN_SHIFT 62
+#define CMDQ_CTXT_HW_BUSY_BIT_SHIFT 63
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_EQ_ID_MASK 0x1F
+#define CMDQ_CTXT_CEQ_ARM_MASK 0x1
+#define CMDQ_CTXT_CEQ_EN_MASK 0x1
+#define CMDQ_CTXT_HW_BUSY_BIT_MASK 0x1
+
+#define CMDQ_CTXT_PAGE_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) \
+ << CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_PAGE_INFO_GET(val, member) \
+ (((u64)(val) >> CMDQ_CTXT_##member##_SHIFT) \
+ & CMDQ_CTXT_##member##_MASK)
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_SHIFT 0
+#define CMDQ_CTXT_CI_SHIFT 52
+#define CMDQ_CTXT_WQ_BLOCK_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_CI_MASK 0xFFF
+
+#define CMDQ_CTXT_BLOCK_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) \
+ << CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_BLOCK_INFO_GET(val, member) \
+ (((u64)(val) >> CMDQ_CTXT_##member##_SHIFT) \
+ & CMDQ_CTXT_##member##_MASK)
+
+#define SAVED_DATA_ARM_SHIFT 31
+#define SAVED_DATA_ARM_MASK 0x1U
+
+#define SAVED_DATA_SET(val, member) \
+ (((val) & SAVED_DATA_##member##_MASK) \
+ << SAVED_DATA_##member##_SHIFT)
+
+#define SAVED_DATA_CLEAR(val, member) \
+ ((val) & (~(SAVED_DATA_##member##_MASK \
+ << SAVED_DATA_##member##_SHIFT)))
+
+#define WQE_ERRCODE_VAL_SHIFT 20
+#define WQE_ERRCODE_VAL_MASK 0xF
+
+#define WQE_ERRCODE_GET(val, member) \
+ (((val) >> WQE_ERRCODE_##member##_SHIFT) & \
+ WQE_ERRCODE_##member##_MASK)
+
+#define CEQE_CMDQ_TYPE_SHIFT 0
+#define CEQE_CMDQ_TYPE_MASK 0x7
+
+#define CEQE_CMDQ_GET(val, member) \
+ (((val) >> CEQE_CMDQ_##member##_SHIFT) & CEQE_CMDQ_##member##_MASK)
+
+#define WQE_COMPLETED(ctrl_info) CMDQ_CTRL_GET(ctrl_info, HW_BUSY_BIT)
+
+#define WQE_HEADER(wqe) ((struct hifc_cmdq_header *)(wqe))
+
+#define CMDQ_DB_PI_OFF(pi) (((u16)LOWER_8_BITS(pi)) << 3)
+
+#define CMDQ_DB_ADDR(db_base, pi) \
+ (((u8 *)(db_base) + HIFC_DB_OFF) + CMDQ_DB_PI_OFF(pi))
+
+#define CMDQ_PFN_SHIFT 12
+#define CMDQ_PFN(addr) ((addr) >> CMDQ_PFN_SHIFT)
+
+#define FIRST_DATA_TO_WRITE_LAST sizeof(u64)
+#define WQE_LCMD_SIZE 64
+#define WQE_SCMD_SIZE 64
+#define COMPLETE_LEN 3
+#define CMDQ_WQEBB_SIZE 64
+#define CMDQ_WQE_SIZE 64
+#define CMDQ_WQ_PAGE_SIZE 4096
+
+#define WQE_NUM_WQEBBS(wqe_size, wq) \
+ ((u16)(ALIGN((u32)(wqe_size), (wq)->wqebb_size) / (wq)->wqebb_size))
+
+#define cmdq_to_cmdqs(cmdq) container_of((cmdq) - (cmdq)->cmdq_type, \
+ struct hifc_cmdqs, cmdq[0])
+
+#define CMDQ_SEND_CMPT_CODE 10
+#define CMDQ_COMPLETE_CMPT_CODE 11
+
+#define HIFC_GET_CMDQ_FREE_WQEBBS(cmdq_wq) \
+ atomic_read(&(cmdq_wq)->delta)
+
+enum cmdq_scmd_type {
+ CMDQ_SET_ARM_CMD = 2,
+};
+
+enum cmdq_wqe_type {
+ WQE_LCMD_TYPE,
+ WQE_SCMD_TYPE,
+};
+
+enum ctrl_sect_len {
+ CTRL_SECT_LEN = 1,
+ CTRL_DIRECT_SECT_LEN = 2,
+};
+
+enum bufdesc_len {
+ BUFDESC_LCMD_LEN = 2,
+ BUFDESC_SCMD_LEN = 3,
+};
+
+enum data_format {
+ DATA_SGE,
+ DATA_DIRECT,
+};
+
+enum completion_format {
+ COMPLETE_DIRECT,
+ COMPLETE_SGE,
+};
+
+enum completion_request {
+ CEQ_SET = 1,
+};
+
+enum cmdq_cmd_type {
+ SYNC_CMD_DIRECT_RESP,
+ SYNC_CMD_SGE_RESP,
+ ASYNC_CMD,
+};
+
+bool hifc_cmdq_idle(struct hifc_cmdq *cmdq)
+{
+ struct hifc_wq *wq = cmdq->wq;
+
+ return (atomic_read(&wq->delta) == wq->q_depth ? true : false);
+}
+
+struct hifc_cmd_buf *hifc_alloc_cmd_buf(void *hwdev)
+{
+ struct hifc_cmdqs *cmdqs;
+ struct hifc_cmd_buf *cmd_buf;
+ void *dev;
+
+ if (!hwdev) {
+ pr_err("Failed to alloc cmd buf, Invalid hwdev\n");
+ return NULL;
+ }
+
+ cmdqs = ((struct hifc_hwdev *)hwdev)->cmdqs;
+ dev = ((struct hifc_hwdev *)hwdev)->dev_hdl;
+
+ cmd_buf = kzalloc(sizeof(*cmd_buf), GFP_ATOMIC);
+ if (!cmd_buf)
+ return NULL;
+
+ cmd_buf->buf = pci_pool_alloc(cmdqs->cmd_buf_pool, GFP_ATOMIC,
+ &cmd_buf->dma_addr);
+ if (!cmd_buf->buf) {
+ sdk_err(dev, "Failed to allocate cmdq cmd buf from the pool\n");
+ goto alloc_pci_buf_err;
+ }
+
+ return cmd_buf;
+
+alloc_pci_buf_err:
+ kfree(cmd_buf);
+ return NULL;
+}
+
+void hifc_free_cmd_buf(void *hwdev, struct hifc_cmd_buf *cmd_buf)
+{
+ struct hifc_cmdqs *cmdqs;
+
+ if (!hwdev || !cmd_buf) {
+ pr_err("Failed to free cmd buf\n");
+ return;
+ }
+
+ cmdqs = ((struct hifc_hwdev *)hwdev)->cmdqs;
+
+ pci_pool_free(cmdqs->cmd_buf_pool, cmd_buf->buf, cmd_buf->dma_addr);
+ kfree(cmd_buf);
+}
+
+static int cmdq_wqe_size(enum cmdq_wqe_type wqe_type)
+{
+ int wqe_size = 0;
+
+ switch (wqe_type) {
+ case WQE_LCMD_TYPE:
+ wqe_size = WQE_LCMD_SIZE;
+ break;
+ case WQE_SCMD_TYPE:
+ wqe_size = WQE_SCMD_SIZE;
+ break;
+ }
+
+ return wqe_size;
+}
+
+static int cmdq_get_wqe_size(enum bufdesc_len len)
+{
+ int wqe_size = 0;
+
+ switch (len) {
+ case BUFDESC_LCMD_LEN:
+ wqe_size = WQE_LCMD_SIZE;
+ break;
+ case BUFDESC_SCMD_LEN:
+ wqe_size = WQE_SCMD_SIZE;
+ break;
+ }
+
+ return wqe_size;
+}
+
+static void cmdq_set_completion(struct hifc_cmdq_completion *complete,
+ struct hifc_cmd_buf *buf_out)
+{
+ struct hifc_sge_resp *sge_resp = &complete->sge_resp;
+
+ hifc_set_sge(&sge_resp->sge, buf_out->dma_addr,
+ HIFC_CMDQ_BUF_SIZE);
+}
+
+static void cmdq_set_lcmd_bufdesc(struct hifc_cmdq_wqe_lcmd *wqe,
+ struct hifc_cmd_buf *buf_in)
+{
+ hifc_set_sge(&wqe->buf_desc.sge, buf_in->dma_addr, buf_in->size);
+}
+
+static void cmdq_set_inline_wqe_data(struct hifc_cmdq_inline_wqe *wqe,
+ const void *buf_in, u32 in_size)
+{
+ struct hifc_cmdq_wqe_scmd *wqe_scmd = &wqe->wqe_scmd;
+
+ wqe_scmd->buf_desc.buf_len = in_size;
+ memcpy(wqe_scmd->buf_desc.data, buf_in, in_size);
+}
+
+static void cmdq_fill_db(struct hifc_cmdq_db *db,
+ enum hifc_cmdq_type cmdq_type, u16 prod_idx)
+{
+ db->db_info = CMDQ_DB_INFO_SET(UPPER_8_BITS(prod_idx), HI_PROD_IDX) |
+ CMDQ_DB_INFO_SET(HIFC_DB_CMDQ_TYPE, QUEUE_TYPE) |
+ CMDQ_DB_INFO_SET(cmdq_type, CMDQ_TYPE) |
+ CMDQ_DB_INFO_SET(HIFC_DB_SRC_CMDQ_TYPE, SRC_TYPE);
+}
+
+static void cmdq_set_db(struct hifc_cmdq *cmdq,
+ enum hifc_cmdq_type cmdq_type, u16 prod_idx)
+{
+ struct hifc_cmdq_db db;
+
+ cmdq_fill_db(&db, cmdq_type, prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ db.db_info = cpu_to_be32(db.db_info);
+
+ wmb(); /* write all before the doorbell */
+ writel(db.db_info, CMDQ_DB_ADDR(cmdq->db_base, prod_idx));
+}
+
+static void cmdq_wqe_fill(void *dst, const void *src)
+{
+ memcpy((u8 *)dst + FIRST_DATA_TO_WRITE_LAST,
+ (u8 *)src + FIRST_DATA_TO_WRITE_LAST,
+ CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST);
+
+ wmb(); /* The first 8 bytes should be written last */
+
+ *(u64 *)dst = *(u64 *)src;
+}
+
+static void cmdq_prepare_wqe_ctrl(struct hifc_cmdq_wqe *wqe, int wrapped,
+ enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd, u16 prod_idx,
+ enum completion_format complete_format,
+ enum data_format data_format,
+ enum bufdesc_len buf_len)
+{
+ struct hifc_ctrl *ctrl;
+ enum ctrl_sect_len ctrl_len;
+ struct hifc_cmdq_wqe_lcmd *wqe_lcmd;
+ struct hifc_cmdq_wqe_scmd *wqe_scmd;
+ u32 saved_data = WQE_HEADER(wqe)->saved_data;
+
+ if (data_format == DATA_SGE) {
+ wqe_lcmd = &wqe->wqe_lcmd;
+
+ wqe_lcmd->status.status_info = 0;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_len = CTRL_SECT_LEN;
+ } else {
+ wqe_scmd = &wqe->inline_wqe.wqe_scmd;
+
+ wqe_scmd->status.status_info = 0;
+ ctrl = &wqe_scmd->ctrl;
+ ctrl_len = CTRL_DIRECT_SECT_LEN;
+ }
+
+ ctrl->ctrl_info = CMDQ_CTRL_SET(prod_idx, PI) |
+ CMDQ_CTRL_SET(cmd, CMD) |
+ CMDQ_CTRL_SET(mod, MOD) |
+ CMDQ_CTRL_SET(ack_type, ACK_TYPE);
+
+ WQE_HEADER(wqe)->header_info =
+ CMDQ_WQE_HEADER_SET(buf_len, BUFDESC_LEN) |
+ CMDQ_WQE_HEADER_SET(complete_format, COMPLETE_FMT) |
+ CMDQ_WQE_HEADER_SET(data_format, DATA_FMT) |
+ CMDQ_WQE_HEADER_SET(CEQ_SET, COMPLETE_REQ) |
+ CMDQ_WQE_HEADER_SET(COMPLETE_LEN, COMPLETE_SECT_LEN) |
+ CMDQ_WQE_HEADER_SET(ctrl_len, CTRL_LEN) |
+ CMDQ_WQE_HEADER_SET((u32)wrapped, HW_BUSY_BIT);
+
+ if (cmd == CMDQ_SET_ARM_CMD && mod == HIFC_MOD_COMM) {
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ WQE_HEADER(wqe)->saved_data = saved_data |
+ SAVED_DATA_SET(1, ARM);
+ } else {
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ WQE_HEADER(wqe)->saved_data = saved_data;
+ }
+}
+
+static void cmdq_set_lcmd_wqe(struct hifc_cmdq_wqe *wqe,
+ enum cmdq_cmd_type cmd_type,
+ struct hifc_cmd_buf *buf_in,
+ struct hifc_cmd_buf *buf_out, int wrapped,
+ enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd, u16 prod_idx)
+{
+ struct hifc_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
+ enum completion_format complete_format = COMPLETE_DIRECT;
+
+ switch (cmd_type) {
+ case SYNC_CMD_SGE_RESP:
+ if (buf_out) {
+ complete_format = COMPLETE_SGE;
+ cmdq_set_completion(&wqe_lcmd->completion, buf_out);
+ }
+ break;
+ case SYNC_CMD_DIRECT_RESP:
+ complete_format = COMPLETE_DIRECT;
+ wqe_lcmd->completion.direct_resp = 0;
+ break;
+ case ASYNC_CMD:
+ complete_format = COMPLETE_DIRECT;
+ wqe_lcmd->completion.direct_resp = 0;
+
+ wqe_lcmd->buf_desc.saved_async_buf = (u64)(buf_in);
+ break;
+ }
+
+ cmdq_prepare_wqe_ctrl(wqe, wrapped, ack_type, mod, cmd,
+ prod_idx, complete_format, DATA_SGE,
+ BUFDESC_LCMD_LEN);
+
+ cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in);
+}
+
+static void cmdq_set_inline_wqe(struct hifc_cmdq_wqe *wqe,
+ enum cmdq_cmd_type cmd_type,
+ void *buf_in, u16 in_size,
+ struct hifc_cmd_buf *buf_out, int wrapped,
+ enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd, u16 prod_idx)
+{
+ struct hifc_cmdq_wqe_scmd *wqe_scmd = &wqe->inline_wqe.wqe_scmd;
+ enum completion_format complete_format = COMPLETE_DIRECT;
+
+ switch (cmd_type) {
+ case SYNC_CMD_SGE_RESP:
+ complete_format = COMPLETE_SGE;
+ cmdq_set_completion(&wqe_scmd->completion, buf_out);
+ break;
+ case SYNC_CMD_DIRECT_RESP:
+ complete_format = COMPLETE_DIRECT;
+ wqe_scmd->completion.direct_resp = 0;
+ break;
+ default:
+ break;
+ }
+
+ cmdq_prepare_wqe_ctrl(wqe, wrapped, ack_type, mod, cmd, prod_idx,
+ complete_format, DATA_DIRECT, BUFDESC_SCMD_LEN);
+
+ cmdq_set_inline_wqe_data(&wqe->inline_wqe, buf_in, in_size);
+}
+
+static void cmdq_update_cmd_status(struct hifc_cmdq *cmdq, u16 prod_idx,
+ struct hifc_cmdq_wqe *wqe)
+{
+ struct hifc_cmdq_cmd_info *cmd_info;
+ struct hifc_cmdq_wqe_lcmd *wqe_lcmd;
+ u32 status_info;
+
+ wqe_lcmd = &wqe->wqe_lcmd;
+ cmd_info = &cmdq->cmd_infos[prod_idx];
+
+ if (cmd_info->errcode) {
+ status_info = be32_to_cpu(wqe_lcmd->status.status_info);
+ *cmd_info->errcode = WQE_ERRCODE_GET(status_info, VAL);
+ }
+
+ if (cmd_info->direct_resp &&
+ cmd_info->cmd_type == HIFC_CMD_TYPE_DIRECT_RESP)
+ *cmd_info->direct_resp =
+ cpu_to_be64(wqe_lcmd->completion.direct_resp);
+}
+
+static int hifc_cmdq_sync_timeout_check(struct hifc_cmdq *cmdq,
+ struct hifc_cmdq_wqe *wqe, u16 pi,
+ enum hifc_mod_type mod, u8 cmd)
+{
+ struct hifc_cmdq_wqe_lcmd *wqe_lcmd;
+ struct hifc_ctrl *ctrl;
+ u32 ctrl_info;
+
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_info = be32_to_cpu((ctrl)->ctrl_info);
+ if (!WQE_COMPLETED(ctrl_info)) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq sync command check busy bit not set, mod: %u, cmd: 0x%x\n",
+ mod, cmd);
+ return -EFAULT;
+ }
+
+ cmdq_update_cmd_status(cmdq, pi, wqe);
+
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq sync command check succeed, mod: %u, cmd: 0x%x\n",
+ mod, cmd);
+ return 0;
+}
+
+static void __clear_cmd_info(struct hifc_cmdq_cmd_info *cmd_info,
+ struct hifc_cmdq_cmd_info *saved_cmd_info)
+{
+ if (cmd_info->errcode == saved_cmd_info->errcode)
+ cmd_info->errcode = NULL;
+
+ if (cmd_info->done == saved_cmd_info->done)
+ cmd_info->done = NULL;
+
+ if (cmd_info->direct_resp == saved_cmd_info->direct_resp)
+ cmd_info->direct_resp = NULL;
+}
+
+static int
+cmdq_sync_cmd_timeout_handler(struct hifc_cmdq *cmdq,
+ struct hifc_cmdq_cmd_info *cmd_info,
+ struct hifc_cmdq_cmd_info *saved_cmd_info,
+ struct hifc_cmdq_wqe *curr_wqe,
+ enum hifc_mod_type mod, u8 cmd,
+ u16 curr_prod_idx, u64 curr_msg_id)
+{
+ int err;
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ if (cmd_info->cmpt_code == saved_cmd_info->cmpt_code)
+ cmd_info->cmpt_code = NULL;
+
+ if (*saved_cmd_info->cmpt_code == CMDQ_COMPLETE_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq sync command (mod: %u, cmd: 0x%x)has been completed\n",
+ mod, cmd);
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return 0;
+ }
+
+ if (curr_msg_id == cmd_info->cmdq_msg_id) {
+ err = hifc_cmdq_sync_timeout_check(cmdq, curr_wqe,
+ curr_prod_idx,
+ mod, cmd);
+ if (err)
+ cmd_info->cmd_type = HIFC_CMD_TYPE_TIMEOUT;
+ else
+ cmd_info->cmd_type = HIFC_CMD_TYPE_FAKE_TIMEOUT;
+ } else {
+ err = -ETIMEDOUT;
+ sdk_err(cmdq->hwdev->dev_hdl,
+ "Cmdq sync command current msg id dismatch with cmd_info msg id, mod: %u, cmd: 0x%x\n",
+ mod, cmd);
+ }
+
+ __clear_cmd_info(cmd_info, saved_cmd_info);
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+
+ return err;
+}
+
+static int cmdq_sync_cmd_direct_resp(struct hifc_cmdq *cmdq,
+ enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd,
+ struct hifc_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout)
+{
+ struct hifc_wq *wq = cmdq->wq;
+ struct hifc_cmdq_wqe *curr_wqe, wqe;
+ struct hifc_cmdq_cmd_info *cmd_info, saved_cmd_info;
+ struct completion done;
+ u16 curr_prod_idx, next_prod_idx, num_wqebbs;
+ int wrapped, errcode = 0, wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE);
+ int cmpt_code = CMDQ_SEND_CMPT_CODE;
+ ulong timeo;
+ u64 curr_msg_id;
+ int err;
+
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq);
+
+ /* Keep wrapped and doorbell index correct. bh - for tasklet(ceq) */
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+	/* reserve one wqebb so the arm_bit can still be set even when
+	 * frequent cmdq commands fill the queue
+	 */
+ if (HIFC_GET_CMDQ_FREE_WQEBBS(wq) < num_wqebbs + 1) {
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return -EBUSY;
+ }
+
+ /* WQE_SIZE = WQEBB_SIZE, we will get the wq element and not shadow */
+ curr_wqe = hifc_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx);
+ if (!curr_wqe) {
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ sdk_err(cmdq->hwdev->dev_hdl, "Can not get avalible wqebb, mod: %u, cmd: 0x%x\n",
+ mod, cmd);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + num_wqebbs;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = !cmdq->wrapped;
+ next_prod_idx -= wq->q_depth;
+ }
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+
+ init_completion(&done);
+
+ cmd_info->done = &done;
+ cmd_info->errcode = &errcode;
+ cmd_info->direct_resp = out_param;
+ cmd_info->cmpt_code = &cmpt_code;
+
+ memcpy(&saved_cmd_info, cmd_info, sizeof(*cmd_info));
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL,
+ wrapped, ack_type, mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ hifc_cpu_to_be32(&wqe, wqe_size);
+
+ /* CMDQ WQE is not shadow, therefore wqe will be written to wq */
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmd_info->cmd_type = HIFC_CMD_TYPE_DIRECT_RESP;
+
+ (cmd_info->cmdq_msg_id)++;
+ curr_msg_id = cmd_info->cmdq_msg_id;
+
+ cmdq_set_db(cmdq, HIFC_CMDQ_SYNC, next_prod_idx);
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+
+ timeo = msecs_to_jiffies(timeout ? timeout : CMDQ_CMD_TIMEOUT);
+ if (!wait_for_completion_timeout(&done, timeo)) {
+ err = cmdq_sync_cmd_timeout_handler(cmdq, cmd_info,
+ &saved_cmd_info,
+ curr_wqe, mod, cmd,
+ curr_prod_idx, curr_msg_id);
+
+ if (!err)
+ goto timeout_check_ok;
+
+ sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command timeout, prod idx: 0x%x\n",
+ curr_prod_idx);
+ return -ETIMEDOUT;
+ }
+
+timeout_check_ok:
+ smp_rmb(); /* read error code after completion */
+
+ if (errcode > 1)
+ return errcode;
+
+ return 0;
+}
+
+static int cmdq_sync_cmd_detail_resp(struct hifc_cmdq *cmdq,
+ enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd,
+ struct hifc_cmd_buf *buf_in,
+ struct hifc_cmd_buf *buf_out,
+ u32 timeout)
+{
+ struct hifc_wq *wq = cmdq->wq;
+ struct hifc_cmdq_wqe *curr_wqe, wqe;
+ struct hifc_cmdq_cmd_info *cmd_info, saved_cmd_info;
+ struct completion done;
+ u16 curr_prod_idx, next_prod_idx, num_wqebbs;
+ int wrapped, errcode = 0, wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE);
+ int cmpt_code = CMDQ_SEND_CMPT_CODE;
+ ulong timeo;
+ u64 curr_msg_id;
+ int err;
+
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq);
+
+ /* Keep wrapped and doorbell index correct. bh - for tasklet(ceq) */
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+	/* reserve one wqebb so the arm_bit can still be set even when
+	 * frequent cmdq commands fill the queue
+	 */
+ if (HIFC_GET_CMDQ_FREE_WQEBBS(wq) < num_wqebbs + 1) {
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return -EBUSY;
+ }
+
+	/* WQE_SIZE = WQEBB_SIZE, we will get the wq element and not shadow */
+ curr_wqe = hifc_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx);
+ if (!curr_wqe) {
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ sdk_err(cmdq->hwdev->dev_hdl, "Can not get avalible wqebb, mod: %u, cmd: 0x%x\n",
+ mod, cmd);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + num_wqebbs;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = !cmdq->wrapped;
+ next_prod_idx -= wq->q_depth;
+ }
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+
+ init_completion(&done);
+
+ cmd_info->done = &done;
+ cmd_info->errcode = &errcode;
+ cmd_info->cmpt_code = &cmpt_code;
+
+ memcpy(&saved_cmd_info, cmd_info, sizeof(*cmd_info));
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out,
+ wrapped, ack_type, mod, cmd, curr_prod_idx);
+
+ hifc_cpu_to_be32(&wqe, wqe_size);
+
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmd_info->cmd_type = HIFC_CMD_TYPE_SGE_RESP;
+
+ (cmd_info->cmdq_msg_id)++;
+ curr_msg_id = cmd_info->cmdq_msg_id;
+
+ cmdq_set_db(cmdq, HIFC_CMDQ_SYNC, next_prod_idx);
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+
+ timeo = msecs_to_jiffies(timeout ? timeout : CMDQ_CMD_TIMEOUT);
+ if (!wait_for_completion_timeout(&done, timeo)) {
+ err = cmdq_sync_cmd_timeout_handler(cmdq, cmd_info,
+ &saved_cmd_info,
+ curr_wqe, mod, cmd,
+ curr_prod_idx, curr_msg_id);
+ if (!err)
+ goto timeout_check_ok;
+
+ sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command timeout, prod idx: 0x%x\n",
+ curr_prod_idx);
+ return -ETIMEDOUT;
+ }
+
+timeout_check_ok:
+
+ smp_rmb(); /* read error code after completion */
+
+ if (errcode > 1)
+ return errcode;
+
+ return 0;
+}
+
+static int cmdq_async_cmd(struct hifc_cmdq *cmdq, enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd,
+ struct hifc_cmd_buf *buf_in)
+{
+ struct hifc_wq *wq = cmdq->wq;
+ int wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE);
+ u16 curr_prod_idx, next_prod_idx, num_wqebbs;
+ struct hifc_cmdq_wqe *curr_wqe, wqe;
+ int wrapped;
+
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq);
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+	/* WQE_SIZE = WQEBB_SIZE, we will get the wq element and not shadow */
+ curr_wqe = hifc_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx);
+ if (!curr_wqe) {
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+ next_prod_idx = curr_prod_idx + num_wqebbs;
+ if (next_prod_idx >= cmdq->wq->q_depth) {
+ cmdq->wrapped = !cmdq->wrapped;
+ next_prod_idx -= cmdq->wq->q_depth;
+ }
+
+ cmdq_set_lcmd_wqe(&wqe, ASYNC_CMD, buf_in, NULL, wrapped,
+ ack_type, mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ hifc_cpu_to_be32(&wqe, wqe_size);
+
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmdq->cmd_infos[curr_prod_idx].cmd_type = HIFC_CMD_TYPE_ASYNC;
+
+ cmdq_set_db(cmdq, HIFC_CMDQ_ASYNC, next_prod_idx);
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+
+ return 0;
+}
+
+static int cmdq_set_arm_bit(struct hifc_cmdq *cmdq, void *buf_in, u16 in_size)
+{
+ struct hifc_wq *wq = cmdq->wq;
+ struct hifc_cmdq_wqe *curr_wqe, wqe;
+ u16 curr_prod_idx, next_prod_idx, num_wqebbs;
+ int wrapped, wqe_size = cmdq_wqe_size(WQE_SCMD_TYPE);
+
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq);
+
+ /* Keep wrapped and doorbell index correct. bh - for tasklet(ceq) */
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+	/* WQE_SIZE = WQEBB_SIZE, we will get the wq element and not shadow */
+ curr_wqe = hifc_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx);
+ if (!curr_wqe) {
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ sdk_err(cmdq->hwdev->dev_hdl, "Can not get avalible wqebb setting arm\n");
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + num_wqebbs;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = !cmdq->wrapped;
+ next_prod_idx -= wq->q_depth;
+ }
+
+ cmdq_set_inline_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, in_size, NULL,
+ wrapped, HIFC_ACK_TYPE_CMDQ, HIFC_MOD_COMM,
+ CMDQ_SET_ARM_CMD, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ hifc_cpu_to_be32(&wqe, wqe_size);
+
+ /* cmdq wqe is not shadow, therefore wqe will be written to wq */
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmdq->cmd_infos[curr_prod_idx].cmd_type = HIFC_CMD_TYPE_SET_ARM;
+
+ cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx);
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+
+ return 0;
+}
+
+static int cmdq_params_valid(void *hwdev, struct hifc_cmd_buf *buf_in)
+{
+ if (!buf_in || !hwdev) {
+ pr_err("Invalid CMDQ buffer addr\n");
+ return -EINVAL;
+ }
+
+ if (!buf_in->size || buf_in->size > HIFC_CMDQ_MAX_DATA_SIZE) {
+ pr_err("Invalid CMDQ buffer size: 0x%x\n", buf_in->size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define WAIT_CMDQ_ENABLE_TIMEOUT 300
+
+static int wait_cmdqs_enable(struct hifc_cmdqs *cmdqs)
+{
+ unsigned long end;
+
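+ /* Busy-poll until the cmdqs are enabled, the chip disappears, or the
+ * timeout expires; a timeout marks the cmdqs as permanently disabled
+ * via disable_flag below.
+ */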
+ end = jiffies + msecs_to_jiffies(WAIT_CMDQ_ENABLE_TIMEOUT);
+ do {
+ if (cmdqs->status & HIFC_CMDQ_ENABLE)
+ return 0;
+ } while (time_before(jiffies, end) && cmdqs->hwdev->chip_present_flag &&
+ !cmdqs->disable_flag);
+
+ cmdqs->disable_flag = 1;
+
+ return -EBUSY;
+}
+
+int hifc_cmdq_direct_resp(void *hwdev, enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd,
+ struct hifc_cmd_buf *buf_in, u64 *out_param,
+ u32 timeout)
+{
+ struct hifc_cmdqs *cmdqs;
+ int err = cmdq_params_valid(hwdev, buf_in);
+
+ if (err) {
+ pr_err("Invalid CMDQ parameters\n");
+ return err;
+ }
+
+ cmdqs = ((struct hifc_hwdev *)hwdev)->cmdqs;
+
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag) ||
+ !hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_CMDQ_INITED))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disable\n");
+ return err;
+ }
+
+ err = cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HIFC_CMDQ_SYNC], ack_type,
+ mod, cmd, buf_in, out_param, timeout);
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag))
+ return -ETIMEDOUT;
+ else
+ return err;
+}
+
+int hifc_cmdq_detail_resp(void *hwdev,
+ enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd,
+ struct hifc_cmd_buf *buf_in,
+ struct hifc_cmd_buf *buf_out,
+ u32 timeout)
+{
+ struct hifc_cmdqs *cmdqs;
+ int err = cmdq_params_valid(hwdev, buf_in);
+
+ if (err)
+ return err;
+
+ cmdqs = ((struct hifc_hwdev *)hwdev)->cmdqs;
+
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag) ||
+ !hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_CMDQ_INITED))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disable\n");
+ return err;
+ }
+
+ err = cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HIFC_CMDQ_SYNC], ack_type,
+ mod, cmd, buf_in, buf_out, timeout);
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag))
+ return -ETIMEDOUT;
+ else
+ return err;
+}
+
+int hifc_cmdq_async(void *hwdev, enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd,
+ struct hifc_cmd_buf *buf_in)
+{
+ struct hifc_cmdqs *cmdqs;
+ int err = cmdq_params_valid(hwdev, buf_in);
+
+ if (err)
+ return err;
+
+ cmdqs = ((struct hifc_hwdev *)hwdev)->cmdqs;
+
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag) ||
+ !hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_CMDQ_INITED))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disable\n");
+ return err;
+ }
+
+ return cmdq_async_cmd(&cmdqs->cmdq[HIFC_CMDQ_ASYNC], ack_type, mod,
+ cmd, buf_in);
+}
+
+int hifc_set_arm_bit(void *hwdev, enum hifc_set_arm_type q_type, u16 q_id)
+{
+ struct hifc_cmdqs *cmdqs;
+ struct hifc_cmdq *cmdq;
+ struct hifc_cmdq_arm_bit arm_bit;
+ enum hifc_cmdq_type cmdq_type = HIFC_CMDQ_SYNC;
+ u16 in_size;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag) ||
+ !hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_CMDQ_INITED))
+ return -EPERM;
+
+ cmdqs = ((struct hifc_hwdev *)hwdev)->cmdqs;
+
+ if (!(cmdqs->status & HIFC_CMDQ_ENABLE))
+ return -EBUSY;
+
+ if (q_type == HIFC_SET_ARM_CMDQ) {
+ if (q_id >= HIFC_MAX_CMDQ_TYPES)
+ return -EFAULT;
+
+ cmdq_type = q_id;
+ }
+ /* sq uses interrupts now, so the arm bit only needs to be set for cmdq;
+ * uncomment the else branch below if the sq arm bit ever needs to be set:
+ * else
+ * cmdq_type = HIFC_CMDQ_SYNC;
+ */
+
+ cmdq = &cmdqs->cmdq[cmdq_type];
+
+ arm_bit.q_type = q_type;
+ arm_bit.q_id = q_id;
+ in_size = sizeof(arm_bit);
+
+ err = cmdq_set_arm_bit(cmdq, &arm_bit, in_size);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl,
+ "Failed to set arm for q_type: %d, qid %d\n",
+ q_type, q_id);
+ return err;
+ }
+
+ return 0;
+}
+
+static void clear_wqe_complete_bit(struct hifc_cmdq *cmdq,
+ struct hifc_cmdq_wqe *wqe, u16 ci)
+{
+ struct hifc_cmdq_wqe_lcmd *wqe_lcmd;
+ struct hifc_cmdq_inline_wqe *inline_wqe;
+ struct hifc_cmdq_wqe_scmd *wqe_scmd;
+ struct hifc_ctrl *ctrl;
+ u32 header_info = be32_to_cpu(WQE_HEADER(wqe)->header_info);
+ int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN);
+ int wqe_size = cmdq_get_wqe_size(buf_len);
+ u16 num_wqebbs;
+
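+ /* The ctrl field sits at a different offset in lcmd and scmd wqes;
+ * use the buffer-descriptor length from the header to tell which
+ * layout this wqe uses.
+ */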
+ if (wqe_size == WQE_LCMD_SIZE) {
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ } else {
+ inline_wqe = &wqe->inline_wqe;
+ wqe_scmd = &inline_wqe->wqe_scmd;
+ ctrl = &wqe_scmd->ctrl;
+ }
+
+ /* clear HW busy bit */
+ ctrl->ctrl_info = 0;
+ cmdq->cmd_infos[ci].cmd_type = HIFC_CMD_TYPE_NONE;
+
+ wmb(); /* make sure the wqe clear is visible before the wqebbs are freed */
+
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq);
+ hifc_put_wqe(cmdq->wq, num_wqebbs);
+}
+
+static void cmdq_sync_cmd_handler(struct hifc_cmdq *cmdq,
+ struct hifc_cmdq_wqe *wqe, u16 cons_idx)
+{
+ u16 prod_idx = cons_idx;
+
+ spin_lock(&cmdq->cmdq_lock);
+
+ cmdq_update_cmd_status(cmdq, prod_idx, wqe);
+
+ if (cmdq->cmd_infos[prod_idx].cmpt_code) {
+ *cmdq->cmd_infos[prod_idx].cmpt_code =
+ CMDQ_COMPLETE_CMPT_CODE;
+ cmdq->cmd_infos[prod_idx].cmpt_code = NULL;
+ }
+
+ /* make sure the cmpt_code update is visible before the done completion */
+ smp_rmb();
+
+ if (cmdq->cmd_infos[prod_idx].done) {
+ complete(cmdq->cmd_infos[prod_idx].done);
+ cmdq->cmd_infos[prod_idx].done = NULL;
+ }
+
+ spin_unlock(&cmdq->cmdq_lock);
+
+ clear_wqe_complete_bit(cmdq, wqe, cons_idx);
+}
+
+static void cmdq_async_cmd_handler(struct hifc_hwdev *hwdev,
+ struct hifc_cmdq *cmdq,
+ struct hifc_cmdq_wqe *wqe, u16 ci)
+{
+ u64 buf = wqe->wqe_lcmd.buf_desc.saved_async_buf;
+ int addr_sz = sizeof(u64);
+
+ hifc_be32_to_cpu((void *)&buf, addr_sz);
+ if (buf)
+ hifc_free_cmd_buf(hwdev, (struct hifc_cmd_buf *)buf);
+
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+}
+
+static int cmdq_arm_ceq_handler(struct hifc_cmdq *cmdq,
+ struct hifc_cmdq_wqe *wqe, u16 ci)
+{
+ struct hifc_cmdq_inline_wqe *inline_wqe = &wqe->inline_wqe;
+ struct hifc_cmdq_wqe_scmd *wqe_scmd = &inline_wqe->wqe_scmd;
+ struct hifc_ctrl *ctrl = &wqe_scmd->ctrl;
+ u32 ctrl_info = be32_to_cpu((ctrl)->ctrl_info);
+
+ if (!WQE_COMPLETED(ctrl_info))
+ return -EBUSY;
+
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+
+ return 0;
+}
+
+#define HIFC_CMDQ_WQE_HEAD_LEN 32
+static void hifc_dump_cmdq_wqe_head(struct hifc_hwdev *hwdev,
+ struct hifc_cmdq_wqe *wqe)
+{
+ u32 i;
+ u32 *data = (u32 *)wqe;
+
+ for (i = 0; i < (HIFC_CMDQ_WQE_HEAD_LEN / sizeof(u32)); i += 4) {
+ sdk_info(hwdev->dev_hdl, "wqe data: 0x%08x, 0x%08x, 0x%08x, 0x%08x\n",
+ data[i], data[i + 1], data[i + 2],
+ data[i + 3]);/*lint !e679*/
+ }
+}
+
+#define CMDQ_CMD_TYPE_TIMEOUT(cmd_type) \
+ ((cmd_type) == HIFC_CMD_TYPE_TIMEOUT || \
+ (cmd_type) == HIFC_CMD_TYPE_FAKE_TIMEOUT)
+
+static inline void cmdq_response_handle(struct hifc_hwdev *hwdev,
+ struct hifc_cmdq *cmdq,
+ struct hifc_cmdq_wqe *wqe,
+ enum hifc_cmdq_type cmdq_type, u16 ci)
+{
+ if (cmdq_type == HIFC_CMDQ_ASYNC)
+ cmdq_async_cmd_handler(hwdev, cmdq, wqe, ci);
+ else
+ cmdq_sync_cmd_handler(cmdq, wqe, ci);
+}
+
+static inline void set_arm_bit(struct hifc_hwdev *hwdev, int set_arm,
+ enum hifc_cmdq_type cmdq_type)
+{
+ if (set_arm)
+ hifc_set_arm_bit(hwdev, HIFC_SET_ARM_CMDQ, cmdq_type);
+}
+
+void hifc_cmdq_ceq_handler(void *handle, u32 ceqe_data)
+{
+ struct hifc_cmdqs *cmdqs = ((struct hifc_hwdev *)handle)->cmdqs;
+ enum hifc_cmdq_type cmdq_type = CEQE_CMDQ_GET(ceqe_data, TYPE);
+ struct hifc_cmdq *cmdq = &cmdqs->cmdq[cmdq_type];
+ struct hifc_hwdev *hwdev = cmdqs->hwdev;
+ struct hifc_cmdq_wqe *wqe;
+ struct hifc_cmdq_wqe_lcmd *wqe_lcmd;
+ struct hifc_ctrl *ctrl;
+ struct hifc_cmdq_cmd_info *cmd_info;
+ u32 ctrl_info;
+ u16 ci;
+ int set_arm = 1;
+
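+ /* Drain the cmdq: process each completed wqe in order and stop at the
+ * first entry that is unused (HIFC_CMD_TYPE_NONE) or still owned by
+ * hardware.
+ */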
+ while ((wqe = hifc_read_wqe(cmdq->wq, 1, &ci)) != NULL) {
+ cmd_info = &cmdq->cmd_infos[ci];
+
+ if (cmd_info->cmd_type == HIFC_CMD_TYPE_NONE) {
+ set_arm = 1;
+ break;
+ } else if (CMDQ_CMD_TYPE_TIMEOUT(cmd_info->cmd_type)) {
+ if (cmd_info->cmd_type == HIFC_CMD_TYPE_TIMEOUT) {
+ sdk_info(hwdev->dev_hdl, "Cmdq timeout, q_id: %u, ci: %u\n",
+ cmdq_type, ci);
+ hifc_dump_cmdq_wqe_head(hwdev, wqe);
+ }
+
+ set_arm = 1;
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+ } else if (cmd_info->cmd_type == HIFC_CMD_TYPE_SET_ARM) {
+ /* arm_bit was set until here */
+ set_arm = 0;
+
+ if (cmdq_arm_ceq_handler(cmdq, wqe, ci))
+ break;
+ } else {
+ set_arm = 1;
+
+ /* only the set-arm command uses an scmd wqe; this wqe is an lcmd */
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_info = be32_to_cpu((ctrl)->ctrl_info);
+
+ if (!WQE_COMPLETED(ctrl_info))
+ break;
+
+ /* This memory barrier is needed to keep us from reading
+ * any other fields out of the cmdq wqe until we have
+ * verified the command has been processed and
+ * written back.
+ */
+ dma_rmb();
+
+ cmdq_response_handle(hwdev, cmdq, wqe, cmdq_type, ci);
+ }
+ }
+
+ set_arm_bit(hwdev, set_arm, cmdq_type);
+}
+
+static void cmdq_init_queue_ctxt(struct hifc_cmdq *cmdq,
+ struct hifc_cmdq_pages *cmdq_pages,
+ struct hifc_cmdq_ctxt *cmdq_ctxt)
+{
+ struct hifc_cmdqs *cmdqs = cmdq_to_cmdqs(cmdq);
+ struct hifc_hwdev *hwdev = cmdqs->hwdev;
+ struct hifc_wq *wq = cmdq->wq;
+ struct hifc_cmdq_ctxt_info *ctxt_info = &cmdq_ctxt->ctxt_info;
+ u64 wq_first_page_paddr, cmdq_first_block_paddr, pfn;
+ u16 start_ci = (u16)wq->cons_idx;
+
+ /* The data in the HW is in Big Endian Format */
+ wq_first_page_paddr = be64_to_cpu(*wq->block_vaddr);
+
+ pfn = CMDQ_PFN(wq_first_page_paddr);
+
+ ctxt_info->curr_wqe_page_pfn =
+ CMDQ_CTXT_PAGE_INFO_SET(1, HW_BUSY_BIT) |
+ CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_EN) |
+ CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_ARM) |
+ CMDQ_CTXT_PAGE_INFO_SET(HIFC_CEQ_ID_CMDQ, EQ_ID) |
+ CMDQ_CTXT_PAGE_INFO_SET(pfn, CURR_WQE_PAGE_PFN);
+
+ /* If only one page is used, use 0-level CLA */
+ if (cmdq->wq->num_q_pages != 1) {
+ cmdq_first_block_paddr = cmdq_pages->cmdq_page_paddr;
+ pfn = CMDQ_PFN(cmdq_first_block_paddr);
+ }
+
+ ctxt_info->wq_block_pfn = CMDQ_CTXT_BLOCK_INFO_SET(start_ci, CI) |
+ CMDQ_CTXT_BLOCK_INFO_SET(pfn, WQ_BLOCK_PFN);
+
+ cmdq_ctxt->func_idx = hifc_global_func_id_hw(hwdev);
+ cmdq_ctxt->ppf_idx = HIFC_HWIF_PPF_IDX(hwdev->hwif);
+ cmdq_ctxt->cmdq_id = cmdq->cmdq_type;
+}
+
+static int init_cmdq(struct hifc_cmdq *cmdq, struct hifc_hwdev *hwdev,
+ struct hifc_wq *wq, enum hifc_cmdq_type q_type)
+{
+ void __iomem *db_base;
+ int err = 0;
+
+ cmdq->wq = wq;
+ cmdq->cmdq_type = q_type;
+ cmdq->wrapped = 1;
+ cmdq->hwdev = hwdev;
+
+ spin_lock_init(&cmdq->cmdq_lock);
+
+ cmdq->cmd_infos = kcalloc(wq->q_depth, sizeof(*cmdq->cmd_infos),
+ GFP_KERNEL);
+ if (!cmdq->cmd_infos) {
+ err = -ENOMEM;
+ goto cmd_infos_err;
+ }
+
+ err = hifc_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err)
+ goto alloc_db_err;
+
+ cmdq->db_base = (u8 *)db_base;
+ return 0;
+
+alloc_db_err:
+ kfree(cmdq->cmd_infos);
+
+cmd_infos_err:
+
+ return err;
+}
+
+static void free_cmdq(struct hifc_hwdev *hwdev, struct hifc_cmdq *cmdq)
+{
+ hifc_free_db_addr(hwdev, cmdq->db_base, NULL);
+ kfree(cmdq->cmd_infos);
+}
+
+int hifc_set_cmdq_ctxts(struct hifc_hwdev *hwdev)
+{
+ struct hifc_cmdqs *cmdqs = hwdev->cmdqs;
+ struct hifc_cmdq_ctxt *cmdq_ctxt, cmdq_ctxt_out = {0};
+ enum hifc_cmdq_type cmdq_type;
+ u16 in_size;
+ u16 out_size = sizeof(*cmdq_ctxt);
+ int err;
+
+ cmdq_type = HIFC_CMDQ_SYNC;
+ for (; cmdq_type < HIFC_MAX_CMDQ_TYPES; cmdq_type++) {
+ cmdq_ctxt = &cmdqs->cmdq[cmdq_type].cmdq_ctxt;
+ cmdq_ctxt->func_idx = hifc_global_func_id_hw(hwdev);
+ in_size = sizeof(*cmdq_ctxt);
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_CMDQ_CTXT_SET,
+ cmdq_ctxt, in_size,
+ &cmdq_ctxt_out, &out_size, 0);
+ if (err || !out_size || cmdq_ctxt_out.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set cmdq ctxt, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cmdq_ctxt_out.status, out_size);
+ return -EFAULT;
+ }
+ }
+
+ cmdqs->status |= HIFC_CMDQ_ENABLE;
+ cmdqs->disable_flag = 0;
+
+ return 0;
+}
+
+void hifc_cmdq_flush_cmd(struct hifc_hwdev *hwdev,
+ struct hifc_cmdq *cmdq)
+{
+ struct hifc_cmdq_wqe *wqe;
+ struct hifc_cmdq_cmd_info *cmdq_info;
+ u16 ci, wqe_left, i;
+ u64 buf;
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+ wqe_left = cmdq->wq->q_depth - (u16)atomic_read(&cmdq->wq->delta);
+ ci = MASKED_WQE_IDX(cmdq->wq, cmdq->wq->cons_idx);
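+ /* Walk every outstanding wqe: free the saved buffer of async commands
+ * and complete pending sync commands so their waiters can return.
+ */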
+ for (i = 0; i < wqe_left; i++, ci++) {
+ ci = MASKED_WQE_IDX(cmdq->wq, ci);
+ cmdq_info = &cmdq->cmd_infos[ci];
+
+ if (cmdq_info->cmd_type == HIFC_CMD_TYPE_SET_ARM)
+ continue;
+
+ if (cmdq->cmdq_type == HIFC_CMDQ_ASYNC) {
+ wqe = hifc_get_wqebb_addr(cmdq->wq, ci);
+ buf = wqe->wqe_lcmd.buf_desc.saved_async_buf;
+ wqe->wqe_lcmd.buf_desc.saved_async_buf = 0;
+
+ hifc_be32_to_cpu((void *)&buf, sizeof(u64));
+ if (buf)
+ hifc_free_cmd_buf(hwdev,
+ (struct hifc_cmd_buf *)buf);
+ } else {
+ if (cmdq_info->done) {
+ complete(cmdq_info->done);
+ cmdq_info->done = NULL;
+ cmdq_info->cmpt_code = NULL;
+ cmdq_info->direct_resp = NULL;
+ cmdq_info->errcode = NULL;
+ }
+ }
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+int hifc_reinit_cmdq_ctxts(struct hifc_hwdev *hwdev)
+{
+ struct hifc_cmdqs *cmdqs = hwdev->cmdqs;
+ enum hifc_cmdq_type cmdq_type;
+
+ cmdq_type = HIFC_CMDQ_SYNC;
+ for (; cmdq_type < HIFC_MAX_CMDQ_TYPES; cmdq_type++) {
+ hifc_cmdq_flush_cmd(hwdev, &cmdqs->cmdq[cmdq_type]);
+ cmdqs->cmdq[cmdq_type].wrapped = 1;
+ hifc_wq_wqe_pg_clear(cmdqs->cmdq[cmdq_type].wq);
+ }
+
+ return hifc_set_cmdq_ctxts(hwdev);
+}
+
+int hifc_cmdqs_init(struct hifc_hwdev *hwdev)
+{
+ struct hifc_cmdqs *cmdqs;
+ struct hifc_cmdq_ctxt *cmdq_ctxt;
+ enum hifc_cmdq_type type, cmdq_type;
+ size_t saved_wqs_size;
+ u32 max_wqe_size;
+ int err;
+
+ cmdqs = kzalloc(sizeof(*cmdqs), GFP_KERNEL);
+ if (!cmdqs)
+ return -ENOMEM;
+
+ hwdev->cmdqs = cmdqs;
+ cmdqs->hwdev = hwdev;
+
+ saved_wqs_size = HIFC_MAX_CMDQ_TYPES * sizeof(struct hifc_wq);
+ cmdqs->saved_wqs = kzalloc(saved_wqs_size, GFP_KERNEL);
+ if (!cmdqs->saved_wqs) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate saved wqs\n");
+ err = -ENOMEM;
+ goto alloc_wqs_err;
+ }
+
+ cmdqs->cmd_buf_pool = dma_pool_create("hifc_cmdq", hwdev->dev_hdl,
+ HIFC_CMDQ_BUF_SIZE,
+ HIFC_CMDQ_BUF_SIZE, 0ULL);
+ if (!cmdqs->cmd_buf_pool) {
+ sdk_err(hwdev->dev_hdl, "Failed to create cmdq buffer pool\n");
+ err = -ENOMEM;
+ goto pool_create_err;
+ }
+
+ max_wqe_size = (u32)cmdq_wqe_size(WQE_LCMD_TYPE);
+ err = hifc_cmdq_alloc(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
+ hwdev->dev_hdl, HIFC_MAX_CMDQ_TYPES,
+ hwdev->wq_page_size, CMDQ_WQEBB_SIZE,
+ HIFC_CMDQ_DEPTH, max_wqe_size);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate cmdq\n");
+ goto cmdq_alloc_err;
+ }
+
+ cmdq_type = HIFC_CMDQ_SYNC;
+ for (; cmdq_type < HIFC_MAX_CMDQ_TYPES; cmdq_type++) {
+ err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev,
+ &cmdqs->saved_wqs[cmdq_type], cmdq_type);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize cmdq type :%d\n",
+ cmdq_type);
+ goto init_cmdq_err;
+ }
+
+ cmdq_ctxt = &cmdqs->cmdq[cmdq_type].cmdq_ctxt;
+ cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type],
+ &cmdqs->cmdq_pages, cmdq_ctxt);
+ }
+
+ err = hifc_set_cmdq_ctxts(hwdev);
+ if (err)
+ goto init_cmdq_err;
+
+ return 0;
+
+init_cmdq_err:
+ type = HIFC_CMDQ_SYNC;
+ for (; type < cmdq_type; type++)
+ free_cmdq(hwdev, &cmdqs->cmdq[type]);
+
+ hifc_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
+ HIFC_MAX_CMDQ_TYPES);
+
+cmdq_alloc_err:
+ dma_pool_destroy(cmdqs->cmd_buf_pool);
+
+pool_create_err:
+ kfree(cmdqs->saved_wqs);
+
+alloc_wqs_err:
+ kfree(cmdqs);
+
+ return err;
+}
+
+void hifc_cmdqs_free(struct hifc_hwdev *hwdev)
+{
+ struct hifc_cmdqs *cmdqs = hwdev->cmdqs;
+ enum hifc_cmdq_type cmdq_type = HIFC_CMDQ_SYNC;
+
+ cmdqs->status &= ~HIFC_CMDQ_ENABLE;
+
+ for (; cmdq_type < HIFC_MAX_CMDQ_TYPES; cmdq_type++) {
+ hifc_cmdq_flush_cmd(hwdev, &cmdqs->cmdq[cmdq_type]);
+ free_cmdq(cmdqs->hwdev, &cmdqs->cmdq[cmdq_type]);
+ }
+
+ hifc_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
+ HIFC_MAX_CMDQ_TYPES);
+
+ dma_pool_destroy(cmdqs->cmd_buf_pool);
+
+ kfree(cmdqs->saved_wqs);
+
+ kfree(cmdqs);
+}
diff --git a/drivers/scsi/huawei/hifc/hifc_cmdq.h b/drivers/scsi/huawei/hifc/hifc_cmdq.h
new file mode 100644
index 000000000000..cb2ac81c5edc
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_cmdq.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef HIFC_CMDQ_H_
+#define HIFC_CMDQ_H_
+
+#define HIFC_DB_OFF 0x00000800
+
+#define HIFC_SCMD_DATA_LEN 16
+
+#define HIFC_CMDQ_DEPTH 4096
+
+#define HIFC_CMDQ_BUF_SIZE 2048U
+#define HIFC_CMDQ_BUF_HW_RSVD 8
+#define HIFC_CMDQ_MAX_DATA_SIZE \
+ (HIFC_CMDQ_BUF_SIZE - HIFC_CMDQ_BUF_HW_RSVD)
+#define WQ_PAGE_PFN_SHIFT 12
+#define WQ_BLOCK_PFN_SHIFT 9
+
+#define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT)
+#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT)
+
+enum hifc_cmdq_type {
+ HIFC_CMDQ_SYNC,
+ HIFC_CMDQ_ASYNC,
+ HIFC_MAX_CMDQ_TYPES,
+};
+
+enum hifc_db_src_type {
+ HIFC_DB_SRC_CMDQ_TYPE,
+ HIFC_DB_SRC_L2NIC_SQ_TYPE,
+};
+
+enum hifc_cmdq_db_type {
+ HIFC_DB_SQ_RQ_TYPE,
+ HIFC_DB_CMDQ_TYPE,
+};
+
+/* CMDQ WQE CTRLS */
+struct hifc_cmdq_header {
+ u32 header_info;
+ u32 saved_data;
+};
+
+struct hifc_scmd_bufdesc {
+ u32 buf_len;
+ u32 rsvd;
+ u8 data[HIFC_SCMD_DATA_LEN];
+};
+
+struct hifc_lcmd_bufdesc {
+ struct hifc_sge sge;
+ u32 rsvd1;
+ u64 saved_async_buf;
+ u64 rsvd3;
+};
+
+struct hifc_cmdq_db {
+ u32 db_info;
+ u32 rsvd;
+};
+
+struct hifc_status {
+ u32 status_info;
+};
+
+struct hifc_ctrl {
+ u32 ctrl_info;
+};
+
+struct hifc_sge_resp {
+ struct hifc_sge sge;
+ u32 rsvd;
+};
+
+struct hifc_cmdq_completion {
+ /* HW Format */
+ union {
+ struct hifc_sge_resp sge_resp;
+ u64 direct_resp;
+ };
+};
+
+struct hifc_cmdq_wqe_scmd {
+ struct hifc_cmdq_header header;
+ struct hifc_cmdq_db db;
+ struct hifc_status status;
+ struct hifc_ctrl ctrl;
+ struct hifc_cmdq_completion completion;
+ struct hifc_scmd_bufdesc buf_desc;
+};
+
+struct hifc_cmdq_wqe_lcmd {
+ struct hifc_cmdq_header header;
+ struct hifc_status status;
+ struct hifc_ctrl ctrl;
+ struct hifc_cmdq_completion completion;
+ struct hifc_lcmd_bufdesc buf_desc;
+};
+
+struct hifc_cmdq_inline_wqe {
+ struct hifc_cmdq_wqe_scmd wqe_scmd;
+};
+
+struct hifc_cmdq_wqe {
+ /* HW Format */
+ union {
+ struct hifc_cmdq_inline_wqe inline_wqe;
+ struct hifc_cmdq_wqe_lcmd wqe_lcmd;
+ };
+};
+
+struct hifc_cmdq_arm_bit {
+ u32 q_type;
+ u32 q_id;
+};
+
+struct hifc_cmdq_ctxt_info {
+ u64 curr_wqe_page_pfn;
+ u64 wq_block_pfn;
+};
+
+struct hifc_cmdq_ctxt {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 cmdq_id;
+ u8 ppf_idx;
+
+ u8 rsvd1[4];
+
+ struct hifc_cmdq_ctxt_info ctxt_info;
+};
+
+enum hifc_cmdq_status {
+ HIFC_CMDQ_ENABLE = BIT(0),
+};
+
+enum hifc_cmdq_cmd_type {
+ HIFC_CMD_TYPE_NONE,
+ HIFC_CMD_TYPE_SET_ARM,
+ HIFC_CMD_TYPE_DIRECT_RESP,
+ HIFC_CMD_TYPE_SGE_RESP,
+ HIFC_CMD_TYPE_ASYNC,
+ HIFC_CMD_TYPE_TIMEOUT,
+ HIFC_CMD_TYPE_FAKE_TIMEOUT,
+};
+
+struct hifc_cmdq_cmd_info {
+ enum hifc_cmdq_cmd_type cmd_type;
+
+ struct completion *done;
+ int *errcode;
+ int *cmpt_code;
+ u64 *direct_resp;
+ u64 cmdq_msg_id;
+};
+
+struct hifc_cmdq {
+ struct hifc_wq *wq;
+
+ enum hifc_cmdq_type cmdq_type;
+ int wrapped;
+
+ /* spinlock for send cmdq commands */
+ spinlock_t cmdq_lock;
+
+ /* doorbell area */
+ u8 __iomem *db_base;
+
+ struct hifc_cmdq_ctxt cmdq_ctxt;
+
+ struct hifc_cmdq_cmd_info *cmd_infos;
+
+ struct hifc_hwdev *hwdev;
+};
+
+struct hifc_cmdqs {
+ struct hifc_hwdev *hwdev;
+
+ struct dma_pool *cmd_buf_pool;
+
+ struct hifc_wq *saved_wqs;
+
+ struct hifc_cmdq_pages cmdq_pages;
+ struct hifc_cmdq cmdq[HIFC_MAX_CMDQ_TYPES];
+
+ u32 status;
+ u32 disable_flag;
+};
+
+void hifc_cmdq_ceq_handler(void *hwdev, u32 ceqe_data);
+
+int hifc_reinit_cmdq_ctxts(struct hifc_hwdev *hwdev);
+
+bool hifc_cmdq_idle(struct hifc_cmdq *cmdq);
+
+int hifc_cmdqs_init(struct hifc_hwdev *hwdev);
+
+void hifc_cmdqs_free(struct hifc_hwdev *hwdev);
+
+void hifc_cmdq_flush_cmd(struct hifc_hwdev *hwdev,
+ struct hifc_cmdq *cmdq);
+
+#endif
diff --git a/drivers/scsi/huawei/hifc/hifc_cqm_main.c b/drivers/scsi/huawei/hifc/hifc_cqm_main.c
new file mode 100644
index 000000000000..4cd048f1e662
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_cqm_main.c
@@ -0,0 +1,694 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/vmalloc.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwdev.h"
+#include "hifc_hwif.h"
+#include "hifc_api_cmd.h"
+#include "hifc_mgmt.h"
+#include "hifc_cfg.h"
+#include "hifc_cqm_object.h"
+#include "hifc_cqm_main.h"
+
+#define GET_MAX(a, b) (((a) > (b)) ? (a) : (b))
+#define GET_MIN(a, b) (((a) < (b)) ? (a) : (b))
+
+static void cqm_capability_init_check_ppf(void *ex_handle,
+ u32 *total_function_num)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ (handle->cqm_hdl);
+
+ if (cqm_handle->func_attribute.func_type == CQM_PPF) {
+ *total_function_num = service_capability->host_total_function;
+ cqm_handle->func_capability.timer_enable =
+ service_capability->timer_en;
+
+ cqm_info(handle->dev_hdl, "Cap init: total function num 0x%x\n",
+ *total_function_num);
+ cqm_info(handle->dev_hdl, "Cap init: timer_enable %d (1: enable; 0: disable)\n",
+ cqm_handle->func_capability.timer_enable);
+ }
+}
+
+void cqm_test_mode_init(struct cqm_handle_s *cqm_handle,
+ struct service_cap *service_capability)
+{
+ cqm_handle->func_capability.xid_alloc_mode =
+ service_capability->test_xid_alloc_mode;
+ cqm_handle->func_capability.gpa_check_enable =
+ service_capability->test_gpa_check_enable;
+}
+
+static s32 cqm_service_capability_init_for_each(
+ struct cqm_handle_s *cqm_handle,
+ struct service_cap *service_capability)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: fc is valid\n");
+ cqm_handle->func_capability.hash_number +=
+ service_capability->fc_cap.dev_fc_cap.max_parent_qpc_num;
+ cqm_handle->func_capability.hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ cqm_handle->func_capability.qpc_number +=
+ service_capability->fc_cap.dev_fc_cap.max_parent_qpc_num;
+ cqm_handle->func_capability.qpc_basic_size =
+ GET_MAX(service_capability->fc_cap.parent_qpc_size,
+ cqm_handle->func_capability.qpc_basic_size);
+ cqm_handle->func_capability.qpc_alloc_static = true;
+ cqm_handle->func_capability.scqc_number +=
+ service_capability->fc_cap.dev_fc_cap.scq_num;
+ cqm_handle->func_capability.scqc_basic_size =
+ GET_MAX(service_capability->fc_cap.scqc_size,
+ cqm_handle->func_capability.scqc_basic_size);
+ cqm_handle->func_capability.srqc_number +=
+ service_capability->fc_cap.dev_fc_cap.srq_num;
+ cqm_handle->func_capability.srqc_basic_size =
+ GET_MAX(service_capability->fc_cap.srqc_size,
+ cqm_handle->func_capability.srqc_basic_size);
+ cqm_handle->func_capability.lun_number = CQM_LUN_FC_NUM;
+ cqm_handle->func_capability.lun_basic_size = CQM_LUN_SIZE_8;
+ cqm_handle->func_capability.taskmap_number = CQM_TASKMAP_FC_NUM;
+ cqm_handle->func_capability.taskmap_basic_size = PAGE_SIZE;
+ cqm_handle->func_capability.childc_number +=
+ service_capability->fc_cap.dev_fc_cap.max_child_qpc_num;
+ cqm_handle->func_capability.childc_basic_size =
+ GET_MAX(service_capability->fc_cap.child_qpc_size,
+ cqm_handle->func_capability.childc_basic_size);
+ cqm_handle->func_capability.pagesize_reorder = CQM_FC_PAGESIZE_ORDER;
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_service_capability_init(struct cqm_handle_s *cqm_handle,
+ struct service_cap *service_capability)
+{
+ cqm_handle->service.has_register = false;
+ cqm_handle->service.buf_order = 0;
+
+ if (cqm_service_capability_init_for_each(
+ cqm_handle,
+ service_capability) == CQM_FAIL)
+ return CQM_FAIL;
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_capability_init - Initialize the capability of the cqm function and
+ * service by reading information from the configuration management module
+ * @ex_handle: handle of hwdev
+ */
+s32 cqm_capability_init(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ (handle->cqm_hdl);
+ u32 total_function_num = 0;
+ int err = 0;
+
+ cqm_capability_init_check_ppf(ex_handle, &total_function_num);
+
+ cqm_handle->func_capability.flow_table_based_conn_number =
+ service_capability->max_connect_num;
+ cqm_handle->func_capability.flow_table_based_conn_cache_number =
+ service_capability->max_stick2cache_num;
+ cqm_info(handle->dev_hdl, "Cap init: cfg max_conn_num 0x%x, max_cache_conn_num 0x%x\n",
+ cqm_handle->func_capability.flow_table_based_conn_number,
+ cqm_handle->func_capability.flow_table_based_conn_cache_number);
+
+ cqm_handle->func_capability.qpc_reserved = 0;
+ cqm_handle->func_capability.mpt_reserved = 0;
+ cqm_handle->func_capability.qpc_alloc_static = false;
+ cqm_handle->func_capability.scqc_alloc_static = false;
+
+ cqm_handle->func_capability.l3i_number = CQM_L3I_COMM_NUM;
+ cqm_handle->func_capability.l3i_basic_size = CQM_L3I_SIZE_8;
+
+ cqm_handle->func_capability.timer_number = CQM_TIMER_ALIGN_SCALE_NUM *
+ total_function_num;
+ cqm_handle->func_capability.timer_basic_size = CQM_TIMER_SIZE_32;
+
+ if (cqm_service_capability_init(cqm_handle, service_capability) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_service_capability_init));
+ err = CQM_FAIL;
+ goto out;
+ }
+
+ cqm_test_mode_init(cqm_handle, service_capability);
+
+ cqm_info(handle->dev_hdl, "Cap init: pagesize_reorder %d\n",
+ cqm_handle->func_capability.pagesize_reorder);
+ cqm_info(handle->dev_hdl, "Cap init: xid_alloc_mode %d, gpa_check_enable %d\n",
+ cqm_handle->func_capability.xid_alloc_mode,
+ cqm_handle->func_capability.gpa_check_enable);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_alloc_mode %d, scqc_alloc_mode %d\n",
+ cqm_handle->func_capability.qpc_alloc_static,
+ cqm_handle->func_capability.scqc_alloc_static);
+ cqm_info(handle->dev_hdl, "Cap init: hash_number 0x%x\n",
+ cqm_handle->func_capability.hash_number);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_number 0x%x, qpc_reserved 0x%x\n",
+ cqm_handle->func_capability.qpc_number,
+ cqm_handle->func_capability.qpc_reserved);
+ cqm_info(handle->dev_hdl, "Cap init: scqc_number 0x%x scqc_reserved 0x%x\n",
+ cqm_handle->func_capability.scqc_number,
+ cqm_handle->func_capability.scq_reserved);
+ cqm_info(handle->dev_hdl, "Cap init: srqc_number 0x%x\n",
+ cqm_handle->func_capability.srqc_number);
+ cqm_info(handle->dev_hdl, "Cap init: mpt_number 0x%x, mpt_reserved 0x%x\n",
+ cqm_handle->func_capability.mpt_number,
+ cqm_handle->func_capability.mpt_reserved);
+ cqm_info(handle->dev_hdl, "Cap init: gid_number 0x%x, lun_number 0x%x\n",
+ cqm_handle->func_capability.gid_number,
+ cqm_handle->func_capability.lun_number);
+ cqm_info(handle->dev_hdl, "Cap init: taskmap_number 0x%x, l3i_number 0x%x\n",
+ cqm_handle->func_capability.taskmap_number,
+ cqm_handle->func_capability.l3i_number);
+ cqm_info(handle->dev_hdl, "Cap init: timer_number 0x%x\n",
+ cqm_handle->func_capability.timer_number);
+ cqm_info(handle->dev_hdl, "Cap init: xid2cid_number 0x%x, reorder_number 0x%x\n",
+ cqm_handle->func_capability.xid2cid_number,
+ cqm_handle->func_capability.reorder_number);
+
+ return CQM_SUCCESS;
+
+out:
+ if (cqm_handle->func_attribute.func_type == CQM_PPF)
+ cqm_handle->func_capability.timer_enable = 0;
+
+ return err;
+}
+
+/**
+ * cqm_init - Initialize cqm
+ * @ex_handle: handle of hwdev
+ */
+s32 cqm_init(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ s32 ret = CQM_FAIL;
+
+ CQM_PTR_CHECK_RET(ex_handle, return CQM_FAIL, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = (struct cqm_handle_s *)kmalloc(sizeof(struct cqm_handle_s),
+ GFP_KERNEL | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(cqm_handle, return CQM_FAIL,
+ CQM_ALLOC_FAIL(cqm_handle));
+ /* Clear the memory so the handle starts from a known-zero state */
+ memset(cqm_handle, 0, sizeof(struct cqm_handle_s));
+
+ cqm_handle->ex_handle = handle;
+ cqm_handle->dev = (struct pci_dev *)(handle->pcidev_hdl);
+
+ handle->cqm_hdl = (void *)cqm_handle;
+
+ /* Clear statistics */
+ memset(&handle->hw_stats.cqm_stats, 0, sizeof(struct hifc_cqm_stats));
+
+ /* Read information of vf or pf */
+ cqm_handle->func_attribute = handle->hwif->attr;
+ cqm_info(handle->dev_hdl, "Func init: function type %d\n",
+ cqm_handle->func_attribute.func_type);
+
+ /* Read ability from configuration management module */
+ ret = cqm_capability_init(ex_handle);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_capability_init));
+ goto err1;
+ }
+
+ /* Initialize entries of memory table such as BAT/CLA/bitmap */
+ if (cqm_mem_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_mem_init));
+ goto err1;
+ }
+
+ /* Initialize event callback */
+ if (cqm_event_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_event_init));
+ goto err2;
+ }
+
+ /* Initialize doorbell */
+ if (cqm_db_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_init));
+ goto err3;
+ }
+
+ /* The timer bitmap is set directly by CQM from the beginning; it is
+ * no longer set/cleared through ifconfig up/down
+ */
+ if (hifc_func_tmr_bitmap_set(ex_handle, 1) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Timer start: enable timer bitmap failed\n");
+ goto err5;
+ }
+
+ return CQM_SUCCESS;
+
+err5:
+ cqm_db_uninit(ex_handle);
+err3:
+ cqm_event_uninit(ex_handle);
+err2:
+ cqm_mem_uninit(ex_handle);
+err1:
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_uninit - Deinitialize cqm; called once when a function is removed
+ * @ex_handle: handle of hwdev
+ */
+void cqm_uninit(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ s32 ret = CQM_FAIL;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle), return);
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_NO_RET(cqm_handle, CQM_PTR_NULL(cqm_handle), return);
+
+ /* The timer bitmap is set directly by CQM from the beginning; it is
+ * no longer set/cleared through ifconfig up/down
+ */
+ cqm_info(handle->dev_hdl, "Timer stop: disable timer\n");
+ if (hifc_func_tmr_bitmap_set(ex_handle, 0) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, "Timer stop: disable timer bitmap failed\n");
+
+ /* Stop the timer, then release the resources after a delay of
+ * about one millisecond
+ */
+ if ((cqm_handle->func_attribute.func_type == CQM_PPF) &&
+ (cqm_handle->func_capability.timer_enable == CQM_TIMER_ENABLE)) {
+ cqm_info(handle->dev_hdl, "Timer stop: hifc ppf timer stop\n");
+ ret = hifc_ppf_tmr_stop(handle);
+
+ if (ret != CQM_SUCCESS) {
+ cqm_info(handle->dev_hdl, "Timer stop: hifc ppf timer stop, ret=%d\n",
+ ret);
+ /* A failure to stop the timer does not affect
+ * the resource release below
+ */
+ }
+ usleep_range(900, 1000);
+ }
+
+ /* Release hardware doorbell */
+ cqm_db_uninit(ex_handle);
+
+ /* Cancel the callback of chipif */
+ cqm_event_uninit(ex_handle);
+
+ /* Release all table items
+ * and require the service to release all objects
+ */
+ cqm_mem_uninit(ex_handle);
+
+ /* Release cqm_handle */
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+}
+
+/**
+ * cqm_mem_init - Initialize related memory of cqm,
+ * including all levels of entries
+ * @ex_handle: handle of hwdev
+ */
+s32 cqm_mem_init(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+
+ if (cqm_bat_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_init));
+ return CQM_FAIL;
+ }
+
+ if (cqm_cla_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init));
+ goto err1;
+ }
+
+ if (cqm_bitmap_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_init));
+ goto err2;
+ }
+
+ if (cqm_object_table_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_init));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ cqm_bitmap_uninit(cqm_handle);
+err2:
+ cqm_cla_uninit(cqm_handle);
+err1:
+ cqm_bat_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_mem_uninit - Deinitialize related memory of cqm,
+ * including all levels of entries
+ * @ex_handle: handle of hwdev
+ */
+void cqm_mem_uninit(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+
+ cqm_object_table_uninit(cqm_handle);
+ cqm_bitmap_uninit(cqm_handle);
+ cqm_cla_uninit(cqm_handle);
+ cqm_bat_uninit(cqm_handle);
+}
+
+/**
+ * cqm_event_init - Initialize the event callback of cqm
+ * @ex_handle: handle of hwdev
+ */
+s32 cqm_event_init(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+
+ /* Register ceq and aeq callbacks with chipif */
+ if (hifc_aeq_register_swe_cb(ex_handle,
+ HIFC_STATEFULL_EVENT,
+ cqm_aeq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register aeq callback\n");
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_event_uninit - Deinitialize the event callback of cqm
+ * @ex_handle: handle of hwdev
+ */
+void cqm_event_uninit(void *ex_handle)
+{
+ (void)hifc_aeq_unregister_swe_cb(ex_handle, HIFC_STATEFULL_EVENT);
+}
+
+/**
+ * cqm_db_addr_alloc - Apply for a page of hardware doorbell and dwqe with the
+ * same index; the obtained addresses are physical addresses, and each
+ * function has up to 1K of them
+ * @ex_handle: handle of hwdev
+ * @db_addr: the address of doorbell
+ * @dwqe_addr: the address of dwqe
+ */
+s32 cqm_db_addr_alloc(void *ex_handle, void __iomem **db_addr,
+ void __iomem **dwqe_addr)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_RET(ex_handle, return CQM_FAIL, CQM_PTR_NULL(ex_handle));
+ CQM_PTR_CHECK_RET(db_addr, return CQM_FAIL, CQM_PTR_NULL(db_addr));
+ CQM_PTR_CHECK_RET(dwqe_addr, return CQM_FAIL, CQM_PTR_NULL(dwqe_addr));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_alloc_cnt);
+
+ return hifc_alloc_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+/**
+ * cqm_db_init - Initialize doorbell of cqm
+ * @ex_handle: handle of hwdev
+ */
+s32 cqm_db_init(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ struct cqm_service_s *service = NULL;
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+
+ /* Assign hardware doorbell for service */
+ service = &cqm_handle->service;
+
+ if (cqm_db_addr_alloc(ex_handle,
+ &service->hardware_db_vaddr,
+ &service->dwqe_vaddr) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_addr_alloc));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_db_addr_free - Release a page of hardware doorbell and dwqe
+ * @ex_handle: handle of hwdev
+ * @db_addr: the address of doorbell
+ * @dwqe_addr: the address of dwqe
+ */
+void cqm_db_addr_free(void *ex_handle, void __iomem *db_addr,
+ void __iomem *dwqe_addr)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle), return);
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_free_cnt);
+
+ hifc_free_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+/**
+ * cqm_db_uninit - Deinitialize doorbell of cqm
+ * @ex_handle: handle of hwdev
+ */
+void cqm_db_uninit(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ struct cqm_service_s *service = NULL;
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+
+ /* Release hardware doorbell */
+ service = &cqm_handle->service;
+
+ cqm_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+}
+
+/**
+ * cqm_aeq_callback - cqm module callback processing of aeq
+ * @ex_handle: handle of hwdev
+ * @event: the input type of event
+ * @data: the input data
+ */
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u64 data)
+{
+#define CQM_AEQ_BASE_T_FC 48
+#define CQM_AEQ_BASE_T_FCOE 56
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ struct cqm_service_s *service = NULL;
+ struct service_register_template_s *service_template = NULL;
+ u8 event_level = FAULT_LEVEL_MAX;
+
+ CQM_PTR_CHECK_RET(ex_handle, return event_level,
+ CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_aeq_callback_cnt[event]);
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, return event_level,
+ CQM_PTR_NULL(cqm_handle));
+
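+ /* Only FC AEQ events (CQM_AEQ_BASE_T_FC .. CQM_AEQ_BASE_T_FCOE - 1)
+ * are dispatched to the registered service callback; any other event
+ * type is reported as a wrong value.
+ */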
+ if (event >= (u8)CQM_AEQ_BASE_T_FC &&
+ (event < (u8)CQM_AEQ_BASE_T_FCOE)) {
+ service = &cqm_handle->service;
+ service_template = &service->service_template;
+
+ if (!service_template->aeq_callback) {
+ cqm_err(handle->dev_hdl, "Event: service aeq_callback unregistered\n");
+ } else {
+ service_template->aeq_callback(
+ service_template->service_handle, event, data);
+ }
+
+ return event_level;
+ }
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(event));
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_service_register - Service driver registers callback template with cqm
+ * @ex_handle: handle of hwdev
+ * @service_template: the template of service registration
+ */
+s32 cqm_service_register(void *ex_handle,
+ struct service_register_template_s *service_template)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ struct cqm_service_s *service = NULL;
+
+ CQM_PTR_CHECK_RET(ex_handle, return CQM_FAIL, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, return CQM_FAIL,
+ CQM_PTR_NULL(cqm_handle));
+ CQM_PTR_CHECK_RET(service_template, return CQM_FAIL,
+ CQM_PTR_NULL(service_template));
+
+ service = &cqm_handle->service;
+
+ if (service->has_register == true) {
+ cqm_err(handle->dev_hdl, "Service register: service has registered\n");
+ return CQM_FAIL;
+ }
+
+ service->has_register = true;
+ (void)memcpy((void *)(&service->service_template),
+ (void *)service_template,
+ sizeof(struct service_register_template_s));
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_service_unregister - Unregister the service driver's callback template
+ * from cqm
+ * @ex_handle: handle of hwdev
+ */
+void cqm_service_unregister(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ struct cqm_service_s *service = NULL;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle), return);
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_NO_RET(cqm_handle, CQM_PTR_NULL(cqm_handle), return);
+
+ service = &cqm_handle->service;
+
+ service->has_register = false;
+ memset(&service->service_template, 0,
+ sizeof(struct service_register_template_s));
+}
+
+/**
+ * cqm_cmd_alloc - Apply for a cmd buffer; the buffer size is fixed at 2K and
+ * the content is not cleared, so the service must clear it if necessary
+ * @ex_handle: handle of hwdev
+ */
+struct cqm_cmd_buf_s *cqm_cmd_alloc(void *ex_handle)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_RET(ex_handle, return NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_alloc_cnt);
+
+ return (struct cqm_cmd_buf_s *)hifc_alloc_cmd_buf(ex_handle);
+}
+
+/**
+ * cqm_cmd_free - Free a cmd buffer
+ * @ex_handle: handle of hwdev
+ * @cmd_buf: the cmd buffer which needs freeing memory for
+ */
+void cqm_cmd_free(void *ex_handle, struct cqm_cmd_buf_s *cmd_buf)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle), return);
+ CQM_PTR_CHECK_NO_RET(cmd_buf, CQM_PTR_NULL(cmd_buf), return);
+ CQM_PTR_CHECK_NO_RET(cmd_buf->buf, CQM_PTR_NULL(buf), return);
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_free_cnt);
+
+ hifc_free_cmd_buf(ex_handle, (struct hifc_cmd_buf *)cmd_buf);
+}
+
+/**
+ * cqm_send_cmd_box - Send a cmd in box (synchronous) mode; the interface
+ * waits on a completion and may sleep
+ * @ex_handle: handle of hwdev
+ * @ack_type: the type of ack
+ * @mod: the mode of cqm send
+ * @cmd: the input cmd
+ * @buf_in: the input buffer of cqm_cmd
+ * @buf_out: the output buffer of cqm_cmd
+ * @timeout: maximum time to wait for the completion, in milliseconds
+ */
+s32 cqm_send_cmd_box(void *ex_handle, u8 ack_type, u8 mod, u8 cmd,
+ struct cqm_cmd_buf_s *buf_in,
+ struct cqm_cmd_buf_s *buf_out, u32 timeout)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_RET(ex_handle, return CQM_FAIL, CQM_PTR_NULL(ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, return CQM_FAIL, CQM_PTR_NULL(buf_in));
+ CQM_PTR_CHECK_RET(buf_in->buf, return CQM_FAIL, CQM_PTR_NULL(buf));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
+
+ return hifc_cmdq_detail_resp(ex_handle, ack_type, mod, cmd,
+ (struct hifc_cmd_buf *)buf_in,
+ (struct hifc_cmd_buf *)buf_out, timeout);
+}
+
+/**
+ * cqm_ring_hardware_db - Knock hardware doorbell
+ * @ex_handle: handle of hwdev
+ * @service_type: each service type is allocated its own page of hardware doorbell
+ * @db_count: PI beyond 64B, carried in doorbell[7:0]
+ * @db: doorbell content, organized by the service; if a little-endian
+ * conversion is needed, the service must perform it beforehand
+ */
+s32 cqm_ring_hardware_db(void *ex_handle, u32 service_type, u8 db_count, u64 db)
+{
+ struct hifc_hwdev *handle;
+ struct cqm_handle_s *cqm_handle;
+ struct cqm_service_s *service;
+
+ handle = (struct hifc_hwdev *)ex_handle;
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+ service = &cqm_handle->service;
+
+ /* Write all before the doorbell */
+ wmb();
+ *((u64 *)service->hardware_db_vaddr + db_count) = db;
+
+ return CQM_SUCCESS;
+}
diff --git a/drivers/scsi/huawei/hifc/hifc_cqm_main.h b/drivers/scsi/huawei/hifc/hifc_cqm_main.h
new file mode 100644
index 000000000000..70b0c9ae0609
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_cqm_main.h
@@ -0,0 +1,366 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+#ifndef __CQM_MAIN_H__
+#define __CQM_MAIN_H__
+
+#define CHIPIF_SUCCESS 0
+#define CQM_TIMER_ENABLE 1
+
+enum cqm_object_type_e {
+ CQM_OBJECT_ROOT_CTX = 0,
+ CQM_OBJECT_SERVICE_CTX,
+ CQM_OBJECT_NONRDMA_EMBEDDED_RQ = 10,
+ CQM_OBJECT_NONRDMA_EMBEDDED_SQ,
+ CQM_OBJECT_NONRDMA_SRQ,
+ CQM_OBJECT_NONRDMA_EMBEDDED_CQ,
+ CQM_OBJECT_NONRDMA_SCQ,
+};
+
+struct service_register_template_s {
+ u32 service_type;
+ u32 srq_ctx_size; /* srq,scq context_size config */
+ u32 scq_ctx_size;
+ void *service_handle; /* ceq/aeq callback fun */
+
+ void (*aeq_callback)(void *service_handle, u8 event_type, u64 val);
+};
+
+struct cqm_service_s {
+ bool has_register;
+ void __iomem *hardware_db_vaddr;
+ void __iomem *dwqe_vaddr;
+ u32 buf_order; /* size of per buf 2^buf_order page */
+ struct service_register_template_s service_template;
+};
+
+struct cqm_func_capability_s {
+ bool qpc_alloc_static; /* Allocate qpc memory dynamically or statically */
+ bool scqc_alloc_static;
+ u8 timer_enable; /* whether the timer is enabled */
+
+ u32 flow_table_based_conn_number;
+ u32 flow_table_based_conn_cache_number; /* Maximum number in cache */
+ u32 bloomfilter_length; /* Bloomfilter table size, aligned by 64B */
+ /* The starting position of the bloomfilter table in the cache */
+ u32 bloomfilter_addr;
+ u32 qpc_reserved; /* Reserved bits in bitmap */
+ u32 mpt_reserved; /* There are also reserved bits in ROCE/IWARP mpt */
+ /* All basic_size must be 2^n aligned */
+ u32 hash_number;
+ /* Number of hash buckets, BAT table fill size is
+ * aligned with 64 buckets, at least 64
+ */
+ u32 hash_basic_size;
+ /* Hash bucket size is 64B, including 5 valid
+ * entries and 1 nxt_entry
+ */
+ u32 qpc_number;
+ u32 qpc_basic_size;
+
+ /* Note: for cqm special test */
+ u32 pagesize_reorder;
+ bool xid_alloc_mode;
+ bool gpa_check_enable;
+ u32 scq_reserved;
+
+ u32 mpt_number;
+ u32 mpt_basic_size;
+ u32 scqc_number;
+ u32 scqc_basic_size;
+ u32 srqc_number;
+ u32 srqc_basic_size;
+
+ u32 gid_number;
+ u32 gid_basic_size;
+ u32 lun_number;
+ u32 lun_basic_size;
+ u32 taskmap_number;
+ u32 taskmap_basic_size;
+ u32 l3i_number;
+ u32 l3i_basic_size;
+ u32 childc_number;
+ u32 childc_basic_size;
+ u32 child_qpc_id_start; /* Child ctx of FC is global addressing */
+ /* The maximum number of child ctx in
+ * chip is 8096
+ */
+ u32 childc_number_all_function;
+
+ u32 timer_number;
+ u32 timer_basic_size;
+ u32 xid2cid_number;
+ u32 xid2cid_basic_size;
+ u32 reorder_number;
+ u32 reorder_basic_size;
+};
+
+#define CQM_PF TYPE_PF
+#define CQM_PPF TYPE_PPF
+#define CQM_BAT_ENTRY_MAX (16)
+#define CQM_BAT_ENTRY_SIZE (16)
+
+struct cqm_buf_list_s {
+ void *va;
+ dma_addr_t pa;
+ u32 refcount;
+};
+
+struct cqm_buf_s {
+ struct cqm_buf_list_s *buf_list;
+ struct cqm_buf_list_s direct;
+ u32 page_number; /* page_number=2^n buf_number */
+ u32 buf_number; /* buf_list node count */
+ u32 buf_size; /* buf_size=2^n PAGE_SIZE */
+};
+
+struct cqm_bitmap_s {
+ ulong *table;
+ u32 max_num;
+ u32 last;
+ /* Indexes that cannot be allocated are reserved at the front */
+ u32 reserved_top;
+ /* Lock for bitmap allocation */
+ spinlock_t lock;
+};
+
+struct completion;
+struct cqm_object_s {
+ u32 service_type;
+ u32 object_type; /* context,queue,mpt,mtt etc */
+ u32 object_size;
+ /* size in bytes, for queue, ctx, MPT */
+ atomic_t refcount;
+ struct completion free;
+ void *cqm_handle;
+};
+
+struct cqm_object_table_s {
+ struct cqm_object_s **table;
+ u32 max_num;
+ rwlock_t lock;
+};
+
+struct cqm_cla_table_s {
+ u32 type;
+ u32 max_buffer_size;
+ u32 obj_num;
+ bool alloc_static; /* Whether the buffer is statically allocated */
+ u32 cla_lvl;
+ /* The value of x calculated by the cacheline, used for chip */
+ u32 cacheline_x;
+ /* The value of y calculated by the cacheline, used for chip */
+ u32 cacheline_y;
+ /* The value of z calculated by the cacheline, used for chip */
+ u32 cacheline_z;
+ /* The value of x calculated by the obj_size, used for software */
+ u32 x;
+ /* The value of y calculated by the obj_size, used for software */
+ u32 y;
+ /* The value of z calculated by the obj_size, used for software */
+ u32 z;
+ struct cqm_buf_s cla_x_buf;
+ struct cqm_buf_s cla_y_buf;
+ struct cqm_buf_s cla_z_buf;
+ u32 trunk_order;/* A continuous physical page contains 2^order pages */
+ u32 obj_size;
+ /* Lock for cla buffer allocation and free */
+ struct mutex lock;
+ struct cqm_bitmap_s bitmap;
+ /* The association mapping table of index and object */
+ struct cqm_object_table_s obj_table;
+};
+
+typedef void (*init_handler)(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap);
+
+struct cqm_cla_entry_init_s {
+ u32 type;
+ init_handler cqm_cla_init_handler;
+};
+
+struct cqm_bat_table_s {
+ u32 bat_entry_type[CQM_BAT_ENTRY_MAX];
+ u8 bat[CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE];
+ struct cqm_cla_table_s entry[CQM_BAT_ENTRY_MAX];
+ u32 bat_size;
+};
+
+struct cqm_handle_s {
+ struct hifc_hwdev *ex_handle;
+ struct pci_dev *dev;
+ struct hifc_func_attr func_attribute; /* vf or pf */
+ struct cqm_func_capability_s func_capability;
+ struct cqm_service_s service;
+ struct cqm_bat_table_s bat_table;
+
+ struct list_head node;
+};
+
+struct cqm_cmd_buf_s {
+ void *buf;
+ dma_addr_t dma;
+ u16 size;
+};
+
+struct cqm_queue_header_s {
+ u64 doorbell_record;
+ u64 ci_record;
+ u64 rsv1; /* the shared area between driver and ucode */
+ u64 rsv2; /* the shared area between driver and ucode */
+};
+
+struct cqm_queue_s {
+ struct cqm_object_s object;
+ u32 index; /* an embedded queue QP has no index; SRQ and SCQ do */
+ void *priv; /* service driver private info */
+ u32 current_q_doorbell;
+ u32 current_q_room;
+ /* nonrdma: only select q_room_buf_1 for q_room_buf */
+ struct cqm_buf_s q_room_buf_1;
+ struct cqm_buf_s q_room_buf_2;
+ struct cqm_queue_header_s *q_header_vaddr;
+ dma_addr_t q_header_paddr;
+ u8 *q_ctx_vaddr; /* SRQ and SCQ ctx space */
+ dma_addr_t q_ctx_paddr;
+ u32 valid_wqe_num;
+ /* added for srq */
+ u8 *tail_container;
+ u8 *head_container;
+ u8 queue_link_mode; /* link, ring */
+};
+
+struct cqm_nonrdma_qinfo_s {
+ struct cqm_queue_s common;
+ u32 wqe_size;
+ /* The number of wqe contained in each buf (excluding link wqe),
+ * For srq, it is the number of wqe contained in 1 container
+ */
+ u32 wqe_per_buf;
+ u32 q_ctx_size;
+ /* When different services use different sizes of ctx, a large ctx will
+ * occupy multiple consecutive indexes of the bitmap
+ */
+ u32 index_count;
+ u32 container_size;
+};
+
+/* service context, QPC, mpt */
+struct cqm_qpc_mpt_s {
+ struct cqm_object_s object;
+ u32 xid;
+ dma_addr_t paddr;
+ void *priv; /* service driver private info */
+ u8 *vaddr;
+};
+
+struct cqm_qpc_mpt_info_s {
+ struct cqm_qpc_mpt_s common;
+ /* When different services use different sizes of QPC, large QPC/mpt
+ * will occupy multiple consecutive indexes of the bitmap
+ */
+ u32 index_count;
+};
+
+#define CQM_ADDR_COMBINE(high_addr, low_addr) \
+ ((((dma_addr_t)(high_addr)) << 32) + ((dma_addr_t)(low_addr)))
+#define CQM_ADDR_HI(addr) ((u32)((u64)(addr) >> 32))
+#define CQM_ADDR_LW(addr) ((u32)((u64)(addr) & 0xffffffff))
+#define CQM_HASH_BUCKET_SIZE_64 (64)
+#define CQM_LUN_SIZE_8 (8)
+#define CQM_L3I_SIZE_8 (8)
+#define CQM_TIMER_SIZE_32 (32)
+#define CQM_LUN_FC_NUM (64)
+#define CQM_TASKMAP_FC_NUM (4)
+#define CQM_L3I_COMM_NUM (64)
+#define CQM_TIMER_SCALE_NUM (2 * 1024)
+#define CQM_TIMER_ALIGN_WHEEL_NUM (8)
+#define CQM_TIMER_ALIGN_SCALE_NUM \
+ (CQM_TIMER_SCALE_NUM * CQM_TIMER_ALIGN_WHEEL_NUM)
+#define CQM_FC_PAGESIZE_ORDER (0)
+#define CQM_QHEAD_ALIGN_ORDER (6)
+
+s32 cqm_mem_init(void *ex_handle);
+void cqm_mem_uninit(void *ex_handle);
+s32 cqm_event_init(void *ex_handle);
+void cqm_event_uninit(void *ex_handle);
+s32 cqm_db_init(void *ex_handle);
+void cqm_db_uninit(void *ex_handle);
+s32 cqm_init(void *ex_handle);
+void cqm_uninit(void *ex_handle);
+s32 cqm_service_register(void *ex_handle,
+ struct service_register_template_s *service_template);
+void cqm_service_unregister(void *ex_handle);
+s32 cqm_ring_hardware_db(void *ex_handle,
+ u32 service_type,
+ u8 db_count, u64 db);
+s32 cqm_send_cmd_box(void *ex_handle, u8 ack_type, u8 mod, u8 cmd,
+ struct cqm_cmd_buf_s *buf_in,
+ struct cqm_cmd_buf_s *buf_out,
+ u32 timeout);
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u64 data);
+void cqm_object_delete(struct cqm_object_s *object);
+struct cqm_cmd_buf_s *cqm_cmd_alloc(void *ex_handle);
+void cqm_cmd_free(void *ex_handle, struct cqm_cmd_buf_s *cmd_buf);
+struct cqm_queue_s *cqm_object_fc_srq_create(
+ void *ex_handle,
+ enum cqm_object_type_e object_type,
+ u32 wqe_number,
+ u32 wqe_size,
+ void *object_priv);
+struct cqm_qpc_mpt_s *cqm_object_qpc_mpt_create(
+ void *ex_handle,
+ enum cqm_object_type_e object_type,
+ u32 object_size,
+ void *object_priv,
+ u32 index);
+struct cqm_queue_s *cqm_object_nonrdma_queue_create(
+ void *ex_handle,
+ enum cqm_object_type_e object_type,
+ u32 wqe_number,
+ u32 wqe_size,
+ void *object_priv);
+
+#define CQM_PTR_NULL(x) "%s: "#x" is null\n", __func__
+#define CQM_ALLOC_FAIL(x) "%s: "#x" alloc fail\n", __func__
+#define CQM_MAP_FAIL(x) "%s: "#x" map fail\n", __func__
+#define CQM_FUNCTION_FAIL(x) "%s: "#x" return failure\n", __func__
+#define CQM_WRONG_VALUE(x) "%s: "#x" %u is wrong\n", __func__, (u32)(x)
+
+#define cqm_err(dev, format, ...) \
+ dev_err(dev, "[CQM]"format, ##__VA_ARGS__)
+#define cqm_warn(dev, format, ...) \
+ dev_warn(dev, "[CQM]"format, ##__VA_ARGS__)
+#define cqm_notice(dev, format, ...) \
+ dev_notice(dev, "[CQM]"format, ##__VA_ARGS__)
+#define cqm_info(dev, format, ...) \
+ dev_info(dev, "[CQM]"format, ##__VA_ARGS__)
+#define cqm_dbg(format, ...)
+
+#define CQM_PTR_CHECK_RET(ptr, ret, desc) \
+ do {\
+ if (unlikely((ptr) == NULL)) {\
+ pr_err("[CQM]"desc);\
+ ret; \
+ } \
+ } while (0)
+
+#define CQM_PTR_CHECK_NO_RET(ptr, desc, ret) \
+ do {\
+ if (unlikely((ptr) == NULL)) {\
+ pr_err("[CQM]"desc);\
+ ret; \
+ } \
+ } while (0)
+#define CQM_CHECK_EQUAL_RET(dev_hdl, actual, expect, ret, desc) \
+ do {\
+ if (unlikely((expect) != (actual))) {\
+ cqm_err(dev_hdl, desc);\
+ ret; \
+ } \
+ } while (0)
+
+#endif /* __CQM_MAIN_H__ */
diff --git a/drivers/scsi/huawei/hifc/hifc_cqm_object.c b/drivers/scsi/huawei/hifc/hifc_cqm_object.c
new file mode 100644
index 000000000000..406b13f92e64
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_cqm_object.c
@@ -0,0 +1,3599 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwdev.h"
+#include "hifc_hwif.h"
+#include "hifc_api_cmd.h"
+#include "hifc_mgmt.h"
+#include "hifc_cfg.h"
+#include "hifc_cqm_object.h"
+#include "hifc_cqm_main.h"
+#define common_section
+
+#define CQM_MOD_CQM 8
+#define CQM_HARDWARE_DOORBELL 1
+/**
+ * cqm_swab64 - Swap the endianness of a memory block in 8-byte units
+ * @addr: start address of the memory block
+ * @cnt: the number of 8-byte units in the memory block
+ */
+void cqm_swab64(u8 *addr, u32 cnt)
+{
+ u32 i = 0;
+ u64 *temp = (u64 *)addr;
+ u64 value = 0;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab64(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
+/**
+ * cqm_swab32 - Swap the endianness of a memory block in 4-byte units
+ * @addr: start address of the memory block
+ * @cnt: the number of 4-byte units in the memory block
+ */
+void cqm_swab32(u8 *addr, u32 cnt)
+{
+ u32 i = 0;
+ u32 *temp = (u32 *)addr;
+ u32 value = 0;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab32(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
+/**
+ * cqm_shift - Compute the base-2 logarithm of the input data
+ * @data: the input data
+ */
+s32 cqm_shift(u32 data)
+{
+ s32 shift = -1;
+
+ do {
+ data >>= 1;
+ shift++;
+ } while (data);
+
+ return shift;
+}
+
+/**
+ * cqm_check_align - Check whether the data is a power of two
+ * @data: the input data
+ */
+bool cqm_check_align(u32 data)
+{
+ if (data == 0)
+ return false;
+
+ do {
+ /* If data is divisible by 2,
+ * shift it right by one bit
+ */
+ if ((data & 0x1) == 0) {
+ data >>= 1;
+ } else {
+ /* If data is not divisible by 2,
+ * it is not a power of two, return false
+ */
+ return false;
+ }
+ } while (data != 1);
+
+ return true;
+}
+
+/**
+ * cqm_kmalloc_align - Allocate memory whose start address is aligned to
+ * 2^align_order bytes
+ * @size: the size of memory allocated
+ * @flags: the type of memory allocated
+ * @align_order: the basis for aligning
+ */
+static void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order)
+{
+ void *orig_addr = NULL;
+ void *align_addr = NULL;
+ void *index_addr = NULL;
+
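+ /* Over-allocate by 2^align_order bytes plus one pointer: the original
+ * address is stored in the pointer slot immediately before the aligned
+ * block so cqm_kfree_align() can recover it.
+ */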
+ orig_addr = kmalloc(size + ((u64)1 << align_order) + sizeof(void *),
+ flags);
+ if (!orig_addr)
+ return NULL;
+
+ index_addr = (void *)((char *)orig_addr + sizeof(void *));
+ align_addr = (void *)((((u64)index_addr +
+ ((u64)1 << align_order) - 1) >> align_order) << align_order);
+
+ /* Record the original memory address for memory release. */
+ index_addr = (void *)((char *)align_addr - sizeof(void *));
+ *(void **)index_addr = orig_addr;
+
+ cqm_dbg("allocate %lu bytes aligned address: %p, original address: %p\n",
+ size, align_addr, orig_addr);
+
+ return align_addr;
+}
+
+/**
+ * cqm_kfree_align - Free memory that was allocated with cqm_kmalloc_align,
+ * whose start address is aligned to 2^n bytes
+ * @addr: aligned address which would be free
+ */
+static void cqm_kfree_align(void *addr)
+{
+ void *index_addr = NULL;
+
+ /* Release original memory address */
+ index_addr = (void *)((char *)addr - sizeof(void *));
+
+ cqm_dbg("free aligned address: %p, original address: %p\n",
+ addr, *(void **)index_addr);
+
+ kfree(*(void **)index_addr);
+}
+
+/**
+ * cqm_buf_alloc_page - Allocate pages for every buffer in the buffer list
+ * @cqm_handle: handle of cqm
+ * @buf: the buffer that needs its pages allocated
+ */
+s32 cqm_buf_alloc_page(struct cqm_handle_s *cqm_handle, struct cqm_buf_s *buf)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ u32 order = 0;
+ void *va = NULL;
+ s32 i = 0;
+
+ order = get_order(buf->buf_size);
+
+ /* Allocate pages for every buffer (non-OVS case) */
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
+ if (!va) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(buf_page));
+ break;
+ }
+ /* Pages must be zeroed after allocation, especially those
+ * backing the hash table
+ */
+ memset(va, 0, buf->buf_size);
+ buf->buf_list[i].va = va;
+ }
+
+ if (i != buf->buf_number) {
+ i--;
+ for (; i >= 0; i--) {
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_buf_alloc_map - Create the PCI/DMA mappings for a buffer
+ * @cqm_handle: handle of cqm
+ * @buf: the buffer to be mapped
+ */
+s32 cqm_buf_alloc_map(struct cqm_handle_s *cqm_handle, struct cqm_buf_s *buf)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ s32 i = 0;
+ void *va = NULL;
+
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = buf->buf_list[i].va;
+ buf->buf_list[i].pa =
+ pci_map_single(dev, va, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(dev, buf->buf_list[i].pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_list));
+ break;
+ }
+ }
+
+ if (i != buf->buf_number) {
+ i--;
+ for (; i >= 0; i--) {
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size, PCI_DMA_BIDIRECTIONAL);
+ }
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_buf_alloc_direct - Remap a buffer into one contiguous virtual range
+ * @cqm_handle: handle of cqm
+ * @buf: the buffer to be remapped
+ * @direct: only remap when true, otherwise direct.va is cleared
+ */
+s32 cqm_buf_alloc_direct(struct cqm_handle_s *cqm_handle,
+ struct cqm_buf_s *buf, bool direct)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct page **pages = NULL;
+ u32 order = 0;
+ u32 i = 0;
+ u32 j = 0;
+
+ order = get_order(buf->buf_size);
+
+ if (direct == true) {
+ pages = (struct page **)
+ vmalloc(sizeof(struct page *) * buf->page_number);
+ if (!pages) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(pages));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < buf->buf_number; i++) {
+ for (j = 0; j < ((u32)1 << order); j++) {
+ pages[(i << order) + j] = (struct page *)
+ (void *)virt_to_page(
+ (u8 *)(buf->buf_list[i].va) +
+ (PAGE_SIZE * j));
+ }
+ }
+
+ /*lint -save -e648
+ * Suppress the lint warning for the kernel vmap() call
+ */
+ buf->direct.va = vmap(pages, buf->page_number,
+ VM_MAP, PAGE_KERNEL);
+ /*lint -restore*/
+ vfree(pages);
+ if (!buf->direct.va) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf->direct.va));
+ return CQM_FAIL;
+ }
+ } else {
+ buf->direct.va = NULL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_buf_alloc - Allocate the memory and DMA mappings for a struct cqm_buf_s
+ * @cqm_handle: handle of cqm
+ * @buf: the buffer that needs memory and DMA mappings
+ * @direct: whether a contiguous virtual (vmap) remapping is also required
+ */
+s32 cqm_buf_alloc(struct cqm_handle_s *cqm_handle,
+ struct cqm_buf_s *buf, bool direct)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ u32 order = 0;
+ s32 i = 0;
+
+ order = get_order(buf->buf_size);
+
+ /* Allocate the buffer list descriptor array */
+ buf->buf_list = (struct cqm_buf_list_s *)
+ vmalloc(buf->buf_number *
+ sizeof(struct cqm_buf_list_s));
+
+ CQM_PTR_CHECK_RET(buf->buf_list, return CQM_FAIL,
+ CQM_ALLOC_FAIL(buf_list));
+ memset(buf->buf_list, 0,
+ buf->buf_number * sizeof(struct cqm_buf_list_s));
+
+ /* Allocate for every buffer's page */
+ if (cqm_buf_alloc_page(cqm_handle, buf) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc_page));
+ goto err1;
+ }
+
+ /* Create the PCI/DMA mappings */
+ if (cqm_buf_alloc_map(cqm_handle, buf) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc_map));
+ goto err2;
+ }
+
+ /* Optional direct (vmap) remapping */
+ if (cqm_buf_alloc_direct(cqm_handle, buf, direct) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc_direct));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ pci_unmap_single(dev, buf->buf_list[i].pa, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ }
+err2:
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+err1:
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+ return CQM_FAIL;
+}
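+
+/*
+ * Illustrative walk-through (hypothetical values, assuming 4 KiB pages):
+ * for buf_size = 8 KiB and buf_number = 4, cqm_buf_alloc() performs:
+ *
+ *	1. vmalloc() of a 4-entry buf_list descriptor array;
+ *	2. __get_free_pages(order = get_order(8 KiB) = 1) for each entry;
+ *	3. pci_map_single() of each block, plus an optional vmap() of all
+ *	   blocks into one contiguous virtual range when 'direct' is true.
+ *
+ * Any failure unwinds the earlier stages through the err labels above.
+ */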
+
+/**
+ * cqm_cla_cache_invalid - Invalidate the chip logical address (CLA) cache
+ * for a GPA range
+ * @cqm_handle: handle of cqm
+ * @gpa: global physical address
+ * @cache_size: size of the chip cache range to invalidate
+ */
+s32 cqm_cla_cache_invalid(struct cqm_handle_s *cqm_handle, dma_addr_t gpa,
+ u32 cache_size)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmd_buf_s *buf_in = NULL;
+ struct cqm_cla_cache_invalid_cmd_s *cmd = NULL;
+ s32 ret = CQM_FAIL;
+
+ buf_in = cqm_cmd_alloc((void *)(cqm_handle->ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, return CQM_FAIL,
+ CQM_ALLOC_FAIL(buf_in));
+ buf_in->size = sizeof(struct cqm_cla_cache_invalid_cmd_s);
+
+ /* Fill command format, and turn into big endian */
+ cmd = (struct cqm_cla_cache_invalid_cmd_s *)(buf_in->buf);
+ cmd->cache_size = cache_size;
+ cmd->gpa_h = CQM_ADDR_HI(gpa);
+ cmd->gpa_l = CQM_ADDR_LW(gpa);
+
+ cqm_swab32((u8 *)cmd,
+ (sizeof(struct cqm_cla_cache_invalid_cmd_s) >> 2));
+
+ /* cmdq send a cmd */
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle),
+ CQM_CMD_ACK_TYPE_CMDQ,
+ CQM_MOD_CQM, CQM_CMD_T_CLA_CACHE_INVALID,
+ buf_in, NULL, CQM_CMD_TIMEOUT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Cla cache invalid: cqm_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl, "Cla cache invalid: cla_cache_invalid_cmd: 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->cache_size);
+ }
+
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+/**
+ * cqm_buf_free - Free the memory and DMA mappings of a struct cqm_buf_s
+ * @buf: the buffer to be freed
+ * @dev: the pci device that owns the DMA mappings
+ */
+void cqm_buf_free(struct cqm_buf_s *buf, struct pci_dev *dev)
+{
+ u32 order = 0;
+ s32 i = 0;
+
+ order = get_order(buf->buf_size);
+
+ if (buf->direct.va) {
+ vunmap(buf->direct.va);
+ buf->direct.va = NULL;
+ }
+
+ if (buf->buf_list) {
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (buf->buf_list[i].va) {
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+ }
+
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+ }
+}
+
+/**
+ * __free_cache_inv - Invalidate the chip cache for one buffer and free it
+ * @cqm_handle: handle of cqm
+ * @buf: the buffer being freed
+ * @inv_flag: returns the result of the cache invalidation
+ * @order: page allocation order of each buffer
+ * @buf_idx: index of the buffer to free
+ */
+static void __free_cache_inv(struct cqm_handle_s *cqm_handle,
+ struct cqm_buf_s *buf, s32 *inv_flag,
+ u32 order, s32 buf_idx)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+
+ if (handle->chip_present_flag) {
+ *inv_flag = cqm_cla_cache_invalid(cqm_handle,
+ buf->buf_list[buf_idx].pa,
+ PAGE_SIZE << order);
+ if (*inv_flag != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Buffer free: fail to invalid buf_list pa cache, inv_flag=%d\n",
+ *inv_flag);
+ }
+ }
+
+ pci_unmap_single(cqm_handle->dev, buf->buf_list[buf_idx].pa,
+ buf->buf_size, PCI_DMA_BIDIRECTIONAL);
+
+ free_pages((unsigned long)(buf->buf_list[buf_idx].va), order);
+
+ buf->buf_list[buf_idx].va = NULL;
+}
+
+/**
+ * cqm_buf_free_cache_inv - Invalidate the chip cache and free the buffer list
+ * @cqm_handle: handle of cqm
+ * @buf: the buffer to be freed
+ * @inv_flag: returns the result of the cache invalidation
+ */
+void cqm_buf_free_cache_inv(struct cqm_handle_s *cqm_handle,
+ struct cqm_buf_s *buf, s32 *inv_flag)
+{
+ u32 order = 0;
+ s32 i = 0;
+
+ order = get_order(buf->buf_size);
+
+ if (buf->direct.va) {
+ vunmap(buf->direct.va);
+ buf->direct.va = NULL;
+ }
+
+ if (buf->buf_list) {
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (buf->buf_list[i].va) {
+ __free_cache_inv(cqm_handle, buf,
+ inv_flag, order, i);
+ }
+ }
+
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+ }
+}
+
+#define bat_cla_section
+
+/**
+ * cqm_bat_update - Send a command through the cmdq to make the chip update
+ * its BAT table
+ * @cqm_handle: cqm handle
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_bat_update(struct cqm_handle_s *cqm_handle)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmd_buf_s *buf_in = NULL;
+ s32 ret = CQM_FAIL;
+ struct cqm_bat_update_cmd_s *bat_update_cmd = NULL;
+
+ /* Allocate a cmd and fill */
+ buf_in = cqm_cmd_alloc((void *)(cqm_handle->ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, return CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+ buf_in->size = sizeof(struct cqm_bat_update_cmd_s);
+
+ bat_update_cmd = (struct cqm_bat_update_cmd_s *)(buf_in->buf);
+ bat_update_cmd->byte_len = cqm_handle->bat_table.bat_size;
+ bat_update_cmd->offset = 0;
+ memcpy(bat_update_cmd->data, cqm_handle->bat_table.bat,
+ bat_update_cmd->byte_len);
+
+ /* Big-endian conversion */
+ cqm_swab32((u8 *)bat_update_cmd,
+ sizeof(struct cqm_bat_update_cmd_s) >> 2);
+
+ /* send a cmd */
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle),
+ CQM_CMD_ACK_TYPE_CMDQ, CQM_MOD_CQM,
+ CQM_CMD_T_BAT_UPDATE, buf_in,
+ NULL, CQM_CMD_TIMEOUT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Bat update: send_cmd_box ret=%d\n",
+ ret);
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_FAIL;
+ }
+
+ /* Free a cmd */
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_bat_init_ft(struct cqm_handle_s *cqm_handle,
+ struct cqm_bat_table_s *bat_table,
+ enum func_type function_type)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+
+ if (function_type == CQM_PF || function_type == CQM_PPF) {
+ bat_table->bat_entry_type[0] = CQM_BAT_ENTRY_T_CFG;
+ bat_table->bat_entry_type[1] = CQM_BAT_ENTRY_T_HASH;
+ bat_table->bat_entry_type[2] = CQM_BAT_ENTRY_T_QPC;
+ bat_table->bat_entry_type[3] = CQM_BAT_ENTRY_T_SCQC;
+ bat_table->bat_entry_type[4] = CQM_BAT_ENTRY_T_LUN;
+ bat_table->bat_entry_type[5] = CQM_BAT_ENTRY_T_TASKMAP;
+ bat_table->bat_entry_type[6] = CQM_BAT_ENTRY_T_L3I;
+ bat_table->bat_entry_type[7] = CQM_BAT_ENTRY_T_CHILDC;
+ bat_table->bat_entry_type[8] = CQM_BAT_ENTRY_T_TIMER;
+ bat_table->bat_entry_type[9] = CQM_BAT_ENTRY_T_XID2CID;
+ bat_table->bat_entry_type[10] = CQM_BAT_ENTRY_T_REORDER;
+ bat_table->bat_size = CQM_BAT_SIZE_FT_PF;
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_bat_init - Initialize the BAT table: select the entries to be used and
+ * arrange their order; the entry contents are filled in after CLA allocation
+ * @cqm_handle: cqm handle
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_bat_init(struct cqm_handle_s *cqm_handle)
+{
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ u32 i = 0;
+
+ memset(bat_table, 0, sizeof(struct cqm_bat_table_s));
+
+ /* Initialize the type of each bat entry */
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ if (cqm_bat_init_ft(cqm_handle, bat_table,
+ cqm_handle->func_attribute.func_type) == CQM_FAIL) {
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_bat_uninit - Deinitialize BAT table
+ * @cqm_handle: cqm handle
+ */
+void cqm_bat_uninit(struct cqm_handle_s *cqm_handle)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ memset(bat_table->bat, 0, CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE);
+
+ /* Notify the chip to refresh the BAT table */
+ if (cqm_bat_update(cqm_handle) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+}
+
+static void cqm_bat_config_entry_size(
+ struct cqm_cla_table_s *cla_table,
+ struct cqm_bat_entry_standerd_s *bat_entry_standerd)
+{
+ /* Except for QPC entries of 256/512/1024 bytes, all entries use the
+ * 256B cacheline size; the conversion is done inside the chip
+ */
+ if (cla_table->obj_size > CQM_CHIP_CACHELINE) {
+ if (cla_table->obj_size == 512) {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
+ } else {
+ bat_entry_standerd->entry_size =
+ CQM_BAT_ENTRY_SIZE_1024;
+ }
+ bat_entry_standerd->max_number =
+ cla_table->max_buffer_size / cla_table->obj_size;
+ } else {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_256;
+ bat_entry_standerd->max_number =
+ cla_table->max_buffer_size / CQM_CHIP_CACHELINE;
+ }
+}
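+
+/*
+ * Worked example (assuming CQM_CHIP_CACHELINE is 256 bytes, as noted
+ * above): obj_size = 1024 with max_buffer_size = 1 MiB selects
+ * CQM_BAT_ENTRY_SIZE_1024 and max_number = 1 MiB / 1024 = 1024, while
+ * obj_size = 128 keeps the 256-byte entry size and gives
+ * max_number = 1 MiB / 256 = 4096.
+ */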
+
+void cqm_bat_fill_cla_std_entry(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ u8 *entry_base_addr, u32 entry_type,
+ u8 gpa_check_enable)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_bat_entry_standerd_s *bat_entry_standerd = NULL;
+ dma_addr_t pa = 0;
+
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl, "Cla alloc: cla_type %u, obj_num=0, don't init bat entry\n",
+ cla_table->type);
+ return;
+ }
+
+ bat_entry_standerd = (struct cqm_bat_entry_standerd_s *)entry_base_addr;
+ cqm_bat_config_entry_size(cla_table, bat_entry_standerd);
+ bat_entry_standerd->max_number = bat_entry_standerd->max_number - 1;
+
+ bat_entry_standerd->bypass = CQM_BAT_NO_BYPASS_CACHE;
+ bat_entry_standerd->z = cla_table->cacheline_z;
+ bat_entry_standerd->y = cla_table->cacheline_y;
+ bat_entry_standerd->x = cla_table->cacheline_x;
+ bat_entry_standerd->cla_level = cla_table->cla_lvl;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0)
+ pa = cla_table->cla_z_buf.buf_list[0].pa;
+ else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
+ pa = cla_table->cla_y_buf.buf_list[0].pa;
+ else
+ pa = cla_table->cla_x_buf.buf_list[0].pa;
+
+ bat_entry_standerd->cla_gpa_h = CQM_ADDR_HI(pa);
+ if (entry_type == CQM_BAT_ENTRY_T_REORDER) {
+ /* Reorder does not support GPA validity check */
+ bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa);
+ } else {
+ /* GPA is valid when gpa[0]=1 */
+ bat_entry_standerd->cla_gpa_l =
+ CQM_ADDR_LW(pa) | gpa_check_enable;
+ }
+}
+
+static void cqm_bat_fill_cla_cfg(struct cqm_handle_s *cqm_handle,
+ u8 *entry_base_addr)
+{
+ struct cqm_bat_entry_cfg_s *bat_entry_cfg =
+ (struct cqm_bat_entry_cfg_s *)entry_base_addr;
+
+ bat_entry_cfg->cur_conn_cache = 0;
+ bat_entry_cfg->max_conn_cache =
+ cqm_handle->func_capability.flow_table_based_conn_cache_number;
+ bat_entry_cfg->cur_conn_num_h_4 = 0;
+ bat_entry_cfg->cur_conn_num_l_16 = 0;
+ bat_entry_cfg->max_conn_num =
+ cqm_handle->func_capability.flow_table_based_conn_number;
+ /* Buckets hold 64 entries each, so shift right by 6 bits */
+ if ((cqm_handle->func_capability.hash_number >> 6) != 0) {
+ /* After shifting right by 6 bits, subtract 1 to get the
+ * bucket count value the hardware expects
+ */
+ bat_entry_cfg->bucket_num =
+ ((cqm_handle->func_capability.hash_number >> 6) - 1);
+ }
+ if (cqm_handle->func_capability.bloomfilter_length != 0) {
+ bat_entry_cfg->bloom_filter_len =
+ cqm_handle->func_capability.bloomfilter_length - 1;
+ bat_entry_cfg->bloom_filter_addr =
+ cqm_handle->func_capability.bloomfilter_addr;
+ }
+}
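+
+/*
+ * Worked example (illustrative hash_number): with hash_number = 0x10000,
+ * the programmed bucket_num is (0x10000 >> 6) - 1 = 1023, i.e. 1024
+ * buckets of 64 hash entries each.
+ */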
+
+static void cqm_bat_fill_cla_taskmap(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ u8 *entry_base_addr)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_bat_entry_taskmap_s *bat_entry_taskmap =
+ (struct cqm_bat_entry_taskmap_s *)entry_base_addr;
+
+ if (cqm_handle->func_capability.taskmap_number != 0) {
+ bat_entry_taskmap->gpa0_h =
+ (u32)(cla_table->cla_z_buf.buf_list[0].pa >> 32);
+ bat_entry_taskmap->gpa0_l =
+ (u32)(cla_table->cla_z_buf.buf_list[0].pa & 0xffffffff);
+
+ bat_entry_taskmap->gpa1_h =
+ (u32)(cla_table->cla_z_buf.buf_list[1].pa >> 32);
+ bat_entry_taskmap->gpa1_l =
+ (u32)(cla_table->cla_z_buf.buf_list[1].pa & 0xffffffff);
+
+ bat_entry_taskmap->gpa2_h =
+ (u32)(cla_table->cla_z_buf.buf_list[2].pa >> 32);
+ bat_entry_taskmap->gpa2_l =
+ (u32)(cla_table->cla_z_buf.buf_list[2].pa & 0xffffffff);
+
+ bat_entry_taskmap->gpa3_h =
+ (u32)(cla_table->cla_z_buf.buf_list[3].pa >> 32);
+ bat_entry_taskmap->gpa3_l =
+ (u32)(cla_table->cla_z_buf.buf_list[3].pa & 0xffffffff);
+
+ cqm_info(handle->dev_hdl, "Cla alloc: taskmap bat entry: 0x%x 0x%x, 0x%x 0x%x, 0x%x 0x%x, 0x%x 0x%x\n",
+ bat_entry_taskmap->gpa0_h, bat_entry_taskmap->gpa0_l,
+ bat_entry_taskmap->gpa1_h, bat_entry_taskmap->gpa1_l,
+ bat_entry_taskmap->gpa2_h, bat_entry_taskmap->gpa2_l,
+ bat_entry_taskmap->gpa3_h, bat_entry_taskmap->gpa3_l);
+ }
+}
+
+/**
+ * cqm_bat_fill_cla - Fill the base address of the cla table into the bat table
+ * @cqm_handle: cqm handle
+ */
+void cqm_bat_fill_cla(struct cqm_handle_s *cqm_handle)
+{
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
+ u8 *entry_base_addr = NULL;
+ u32 i = 0;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+
+ /* Fill each item according to the arranged BAT table */
+ entry_base_addr = bat_table->bat;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+ if (entry_type == CQM_BAT_ENTRY_T_CFG) {
+ cqm_bat_fill_cla_cfg(cqm_handle, entry_base_addr);
+ entry_base_addr += sizeof(struct cqm_bat_entry_cfg_s);
+ } else if (entry_type == CQM_BAT_ENTRY_T_TASKMAP) {
+ cqm_bat_fill_cla_taskmap(cqm_handle,
+ &bat_table->entry[i],
+ entry_base_addr);
+ entry_base_addr +=
+ sizeof(struct cqm_bat_entry_taskmap_s);
+ } else if ((entry_type == CQM_BAT_ENTRY_T_INVALID) ||
+ ((entry_type == CQM_BAT_ENTRY_T_TIMER) &&
+ (cqm_handle->func_attribute.func_type != CQM_PPF))) {
+ /* If entry_type is invalid, or this is the timer entry on a
+ * non-PPF function, no memory is needed and the BAT entry is
+ * simply skipped over
+ */
+ entry_base_addr += CQM_BAT_ENTRY_SIZE;
+ } else {
+ cla_table = &bat_table->entry[i];
+ cqm_bat_fill_cla_std_entry(cqm_handle, cla_table,
+ entry_base_addr, entry_type,
+ gpa_check_enable);
+ entry_base_addr +=
+ sizeof(struct cqm_bat_entry_standerd_s);
+ }
+ /* Checks if entry_base_addr is out of bounds */
+ if (entry_base_addr >=
+ (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
+ break;
+ }
+}
+
+static void cqm_cla_xyz_cacheline_lvl1(struct cqm_cla_table_s *cla_table,
+ u32 trunk_size)
+{
+ s32 shift = 0;
+
+ if (cla_table->obj_size >= CQM_CHIP_CACHELINE) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / CQM_CHIP_CACHELINE);
+ cla_table->cacheline_z = shift ? (shift - 1) : (shift);
+ cla_table->cacheline_y = CQM_MAX_INDEX_BIT;
+ cla_table->cacheline_x = 0;
+ }
+}
+
+s32 cqm_cla_xyz_lvl1(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ u32 trunk_size)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_s *cla_y_buf = NULL;
+ struct cqm_buf_s *cla_z_buf = NULL;
+ dma_addr_t *base = NULL;
+ s32 shift = 0;
+ u32 i = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_1;
+
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = shift ? (shift - 1) : (shift);
+ cla_table->y = CQM_MAX_INDEX_BIT;
+ cla_table->x = 0;
+ cqm_cla_xyz_cacheline_lvl1(cla_table, trunk_size);
+
+ /* Allocate y buf space */
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number = 1;
+ cla_y_buf->page_number = cla_y_buf->buf_number <<
+ cla_table->trunk_order;
+ ret = cqm_buf_alloc(cqm_handle, cla_y_buf, false);
+
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, return CQM_FAIL,
+ CQM_ALLOC_FAIL(lvl_1_y_buf));
+
+ /* Allocate z buf space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number = ALIGN(cla_table->max_buffer_size, trunk_size) /
+ trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number <<
+ cla_table->trunk_order;
+ /* Requires static allocation of all buffer space */
+ if (cla_table->alloc_static == true) {
+ if (cqm_buf_alloc(cqm_handle, cla_z_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_1_z_buf));
+ cqm_buf_free(cla_y_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+
+ /* Fill gpa of z buf list into y buf */
+ base = (dma_addr_t *)(cla_y_buf->buf_list->va);
+ for (i = 0; i < cla_z_buf->buf_number; i++) {
+ /* gpa[0]=1 means this GPA is valid */
+ *base = (cla_z_buf->buf_list[i].pa | gpa_check_enable);
+ base++;
+ }
+
+ /* big-endian conversion */
+ cqm_swab64((u8 *)(cla_y_buf->buf_list->va),
+ cla_z_buf->buf_number);
+ } else {
+ /* Only initialize and allocate buf list space, buffer spaces
+ * are dynamically allocated in service
+ */
+ cla_z_buf->buf_list = (struct cqm_buf_list_s *)
+ vmalloc(cla_z_buf->buf_number *
+ sizeof(struct cqm_buf_list_s));
+
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_1_z_buf));
+ cqm_buf_free(cla_y_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ memset(cla_z_buf->buf_list, 0,
+ cla_z_buf->buf_number * sizeof(struct cqm_buf_list_s));
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_yz_lvl2_static(struct cqm_handle_s *cqm_handle,
+ struct cqm_buf_s *cla_y_buf,
+ struct cqm_buf_s *cla_z_buf,
+ u8 gpa_check_enable)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ dma_addr_t *base = NULL;
+ u32 i = 0;
+
+ if (cqm_buf_alloc(cqm_handle, cla_z_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_z_buf));
+ return CQM_FAIL;
+ }
+
+ /* The virtual address of y buf is remapped for software access */
+ if (cqm_buf_alloc(cqm_handle, cla_y_buf, true) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_y_buf));
+ cqm_buf_free(cla_z_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+
+ /* Fill gpa of z buf list into y buf */
+ base = (dma_addr_t *)(cla_y_buf->direct.va);
+ for (i = 0; i < cla_z_buf->buf_number; i++) {
+ /* gpa[0]=1 means this GPA is valid */
+ *base = (cla_z_buf->buf_list[i].pa | gpa_check_enable);
+ base++;
+ }
+
+ /* big-endian conversion */
+ cqm_swab64((u8 *)(cla_y_buf->direct.va), cla_z_buf->buf_number);
+
+ return CQM_SUCCESS;
+}
+
+static void cqm_cla_yz_lvl2_init_cacheline(struct cqm_cla_table_s *cla_table,
+ u32 trunk_size)
+{
+ s32 shift = 0;
+
+ if (cla_table->obj_size >= CQM_CHIP_CACHELINE) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / CQM_CHIP_CACHELINE);
+ cla_table->cacheline_z = shift ? (shift - 1) : (shift);
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->cacheline_y = cla_table->cacheline_z + shift;
+ cla_table->cacheline_x = CQM_MAX_INDEX_BIT;
+ }
+}
+
+s32 cqm_cla_xyz_lvl2(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ u32 trunk_size)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_s *cla_x_buf = NULL;
+ struct cqm_buf_s *cla_y_buf = NULL;
+ struct cqm_buf_s *cla_z_buf = NULL;
+ dma_addr_t *base = NULL;
+ s32 shift = 0;
+ u32 i = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_2;
+
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = shift ? (shift - 1) : (shift);
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->y = cla_table->z + shift;
+ cla_table->x = CQM_MAX_INDEX_BIT;
+
+ cqm_cla_yz_lvl2_init_cacheline(cla_table, trunk_size);
+
+ /* Allocate x buf space */
+ cla_x_buf = &cla_table->cla_x_buf;
+ cla_x_buf->buf_size = trunk_size;
+ cla_x_buf->buf_number = 1;
+ cla_x_buf->page_number = cla_x_buf->buf_number <<
+ cla_table->trunk_order;
+
+ ret = cqm_buf_alloc(cqm_handle, cla_x_buf, false);
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, return CQM_FAIL,
+ CQM_ALLOC_FAIL(lvl_2_x_buf));
+
+ /* Allocate y buf and z buf space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number = ALIGN(cla_table->max_buffer_size, trunk_size) /
+ trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number <<
+ cla_table->trunk_order;
+
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number =
+ (ALIGN(cla_z_buf->buf_number * sizeof(dma_addr_t),
+ trunk_size)) / trunk_size;
+
+ cla_y_buf->page_number = cla_y_buf->buf_number <<
+ cla_table->trunk_order;
+
+ /* Requires static allocation of all buffer space */
+ if (cla_table->alloc_static == true) {
+ if (cqm_cla_yz_lvl2_static(cqm_handle,
+ cla_y_buf,
+ cla_z_buf,
+ gpa_check_enable) == CQM_FAIL) {
+ cqm_buf_free(cla_x_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ /* Fill gpa of y buf list into x buf */
+ base = (dma_addr_t *)(cla_x_buf->buf_list->va);
+ for (i = 0; i < cla_y_buf->buf_number; i++) {
+ /* gpa[0]=1 means this GPA is valid */
+ *base = (cla_y_buf->buf_list[i].pa | gpa_check_enable);
+ base++;
+ }
+
+ /* big-endian conversion */
+ cqm_swab64((u8 *)(cla_x_buf->buf_list->va),
+ cla_y_buf->buf_number);
+ } else {
+ /* Only initialize and allocate buf list space, buffer spaces
+ * are allocated in service
+ */
+ cla_z_buf->buf_list = (struct cqm_buf_list_s *)
+ vmalloc(cla_z_buf->buf_number *
+ sizeof(struct cqm_buf_list_s));
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_z_buf));
+ cqm_buf_free(cla_x_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ memset(cla_z_buf->buf_list, 0,
+ cla_z_buf->buf_number * sizeof(struct cqm_buf_list_s));
+
+ cla_y_buf->buf_list = (struct cqm_buf_list_s *)
+ vmalloc(cla_y_buf->buf_number *
+ sizeof(struct cqm_buf_list_s));
+
+ if (!cla_y_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_y_buf));
+ cqm_buf_free(cla_z_buf, cqm_handle->dev);
+ cqm_buf_free(cla_x_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ memset(cla_y_buf->buf_list, 0,
+ cla_y_buf->buf_number * sizeof(struct cqm_buf_list_s));
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_xyz_check(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+
+ if (cla_table->obj_num == 0) {
+ /* If the capability is set to 0, this CLA table does not need
+ * to be initialized, so return directly
+ */
+ cqm_info(handle->dev_hdl, "Cla alloc: cla_type %u, obj_num=0, don't alloc buffer\n",
+ cla_table->type);
+ return CQM_SUCCESS;
+ }
+
+ /* Check whether obj_size is a power of 2; 0 and 1 are treated
+ * as errors
+ */
+ if (cqm_check_align(cla_table->obj_size) == false) {
+ cqm_err(handle->dev_hdl, "Cla alloc: cla_type %u, obj_size 0x%x is not align on 2^n\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_cla_xyz - Determine how many levels of cla tables are needed and
+ * allocate space for each level
+ * @cqm_handle: cqm handle
+ * @cla_table: cla table
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_cla_xyz(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_s *cla_z_buf = NULL;
+ u32 trunk_size = 0;
+ s32 ret = CQM_FAIL;
+
+ if (cqm_cla_xyz_check(cqm_handle, cla_table) == CQM_FAIL)
+ return CQM_FAIL;
+
+ trunk_size = PAGE_SIZE << cla_table->trunk_order;
+
+ if (trunk_size < cla_table->obj_size) {
+ cqm_err(handle->dev_hdl, "Cla alloc: cla type %u, obj_size 0x%x is out of trunk size\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ /* Level 0 CLA: the buffer is small enough to be assigned to
+ * cla_z_buf entirely during initialization
+ */
+ if (cla_table->max_buffer_size <= trunk_size) {
+ cla_table->cla_lvl = CQM_CLA_LVL_0;
+
+ cla_table->z = CQM_MAX_INDEX_BIT;
+ cla_table->y = 0;
+ cla_table->x = 0;
+
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+
+ /* Allocate z buf space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number = 1;
+ cla_z_buf->page_number =
+ cla_z_buf->buf_number << cla_table->trunk_order;
+ ret = cqm_buf_alloc(cqm_handle, cla_z_buf, false);
+ CQM_CHECK_EQUAL_RET(
+ handle->dev_hdl, ret, CQM_SUCCESS,
+ return CQM_FAIL, CQM_ALLOC_FAIL(lvl_0_z_buf));
+
+ } else if (cla_table->max_buffer_size <=
+ (trunk_size * (trunk_size / sizeof(dma_addr_t)))) {
+ /* Level 1 CLA: Cla_y_buf is allocated during initialization,
+ * and cla_z_buf can be allocated dynamically
+ */
+ if (cqm_cla_xyz_lvl1(cqm_handle,
+ cla_table, trunk_size) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl1));
+ return CQM_FAIL;
+ }
+ } else if (cla_table->max_buffer_size <=
+ (trunk_size * (trunk_size / sizeof(dma_addr_t)) *
+ (trunk_size / sizeof(dma_addr_t)))) {
+ /* Level 2 CLA: Cla_x_buf is allocated during initialization,
+ * and cla_y_buf and cla_z_buf can be dynamically allocated
+ */
+ if (cqm_cla_xyz_lvl2(cqm_handle, cla_table, trunk_size) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl2));
+ return CQM_FAIL;
+ }
+ } else {
+ cqm_err(handle->dev_hdl, "Cla alloc: cla max_buffer_size 0x%x exceeds support range\n",
+ cla_table->max_buffer_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
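+
+/*
+ * Worked example of the level thresholds (assuming 4 KiB pages,
+ * trunk_order = 0 and 8-byte GPA entries): trunk_size = 4 KiB, so one
+ * trunk holds 512 GPA pointers and the table fits in
+ *
+ *	level 0 when max_buffer_size <= 4 KiB,
+ *	level 1 when max_buffer_size <= 4 KiB * 512 = 2 MiB,
+ *	level 2 when max_buffer_size <= 2 MiB * 512 = 1 GiB;
+ *
+ * anything larger is rejected with CQM_FAIL.
+ */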
+
+static void cqm_bat_entry_hash_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->hash_number *
+ capability->hash_basic_size;
+ cla_table->obj_size = capability->hash_basic_size;
+ cla_table->obj_num = capability->hash_number;
+ cla_table->alloc_static = true;
+}
+
+static void cqm_bat_entry_qpc_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_handle_s *handle = (struct cqm_handle_s *)cqm_handle;
+ struct hifc_hwdev *hwdev_handle = handle->ex_handle;
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->qpc_number *
+ capability->qpc_basic_size;
+ cla_table->obj_size = capability->qpc_basic_size;
+ cla_table->obj_num = capability->qpc_number;
+ cla_table->alloc_static = capability->qpc_alloc_static;
+ cqm_info(hwdev_handle->dev_hdl, "Cla alloc: qpc alloc_static=%d\n",
+ cla_table->alloc_static);
+}
+
+static void cqm_bat_entry_mpt_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->mpt_number *
+ capability->mpt_basic_size;
+ cla_table->obj_size = capability->mpt_basic_size;
+ cla_table->obj_num = capability->mpt_number;
+ cla_table->alloc_static = true;
+}
+
+static void cqm_bat_entry_scqc_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_handle_s *handle = (struct cqm_handle_s *)cqm_handle;
+ struct hifc_hwdev *hwdev_handle = handle->ex_handle;
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->scqc_number *
+ capability->scqc_basic_size;
+ cla_table->obj_size = capability->scqc_basic_size;
+ cla_table->obj_num = capability->scqc_number;
+ cla_table->alloc_static = capability->scqc_alloc_static;
+ cqm_info(hwdev_handle->dev_hdl, "Cla alloc: scqc alloc_static=%d\n",
+ cla_table->alloc_static);
+}
+
+static void cqm_bat_entry_srqc_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->srqc_number *
+ capability->srqc_basic_size;
+ cla_table->obj_size = capability->srqc_basic_size;
+ cla_table->obj_num = capability->srqc_number;
+ cla_table->alloc_static = false;
+}
+
+static void cqm_bat_entry_gid_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->max_buffer_size = capability->gid_number *
+ capability->gid_basic_size;
+ cla_table->trunk_order = (u32)cqm_shift(
+ ALIGN(
+ cla_table->max_buffer_size,
+ PAGE_SIZE) / PAGE_SIZE);
+ cla_table->obj_size = capability->gid_basic_size;
+ cla_table->obj_num = capability->gid_number;
+ cla_table->alloc_static = true;
+}
+
+static void cqm_bat_entry_lun_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->lun_number *
+ capability->lun_basic_size;
+ cla_table->obj_size = capability->lun_basic_size;
+ cla_table->obj_num = capability->lun_number;
+ cla_table->alloc_static = true;
+}
+
+static void cqm_bat_entry_taskmap_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = CQM_4K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->taskmap_number *
+ capability->taskmap_basic_size;
+ cla_table->obj_size = capability->taskmap_basic_size;
+ cla_table->obj_num = capability->taskmap_number;
+ cla_table->alloc_static = true;
+}
+
+static void cqm_bat_entry_l3i_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->l3i_number *
+ capability->l3i_basic_size;
+ cla_table->obj_size = capability->l3i_basic_size;
+ cla_table->obj_num = capability->l3i_number;
+ cla_table->alloc_static = true;
+}
+
+static void cqm_bat_entry_childc_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->childc_number *
+ capability->childc_basic_size;
+ cla_table->obj_size = capability->childc_basic_size;
+ cla_table->obj_num = capability->childc_number;
+ cla_table->alloc_static = true;
+}
+
+static void cqm_bat_entry_timer_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = CQM_4K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->timer_number *
+ capability->timer_basic_size;
+ cla_table->obj_size = capability->timer_basic_size;
+ cla_table->obj_num = capability->timer_number;
+ cla_table->alloc_static = true;
+}
+
+static void cqm_bat_entry_xid2cid_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->xid2cid_number *
+ capability->xid2cid_basic_size;
+ cla_table->obj_size = capability->xid2cid_basic_size;
+ cla_table->obj_num = capability->xid2cid_number;
+ cla_table->alloc_static = true;
+}
+
+static void cqm_bat_entry_reoder_init(void *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ void *cap)
+{
+ struct cqm_func_capability_s *capability =
+ (struct cqm_func_capability_s *)cap;
+
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->reorder_number *
+ capability->reorder_basic_size;
+ cla_table->obj_size = capability->reorder_basic_size;
+ cla_table->obj_num = capability->reorder_number;
+ cla_table->alloc_static = true;
+}
+
+struct cqm_cla_entry_init_s cqm_cla_entry_init_tbl[] = {
+ {CQM_BAT_ENTRY_T_HASH, cqm_bat_entry_hash_init},
+ {CQM_BAT_ENTRY_T_QPC, cqm_bat_entry_qpc_init},
+ {CQM_BAT_ENTRY_T_MPT, cqm_bat_entry_mpt_init},
+ {CQM_BAT_ENTRY_T_SCQC, cqm_bat_entry_scqc_init},
+ {CQM_BAT_ENTRY_T_SRQC, cqm_bat_entry_srqc_init},
+ {CQM_BAT_ENTRY_T_GID, cqm_bat_entry_gid_init},
+ {CQM_BAT_ENTRY_T_LUN, cqm_bat_entry_lun_init},
+ {CQM_BAT_ENTRY_T_TASKMAP, cqm_bat_entry_taskmap_init},
+ {CQM_BAT_ENTRY_T_L3I, cqm_bat_entry_l3i_init},
+ {CQM_BAT_ENTRY_T_CHILDC, cqm_bat_entry_childc_init},
+ {CQM_BAT_ENTRY_T_TIMER, cqm_bat_entry_timer_init},
+ {CQM_BAT_ENTRY_T_XID2CID, cqm_bat_entry_xid2cid_init},
+ {CQM_BAT_ENTRY_T_REORDER, cqm_bat_entry_reoder_init},
+};
+
+static struct cqm_cla_entry_init_s *cqm_get_cla_init_entry(
+ struct cqm_handle_s *cqm_handle,
+ u32 type)
+{
+ int i;
+ struct cqm_cla_entry_init_s *entry = NULL;
+
+ for (i = 0;
+ i < (sizeof(cqm_cla_entry_init_tbl) /
+ sizeof(struct cqm_cla_entry_init_s)); i++) {
+ entry = &cqm_cla_entry_init_tbl[i];
+ if (entry->type == type)
+ return entry;
+ }
+
+ return NULL;
+}
+
+void cqm_cla_init_entry(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ struct cqm_func_capability_s *capability)
+{
+ struct cqm_cla_entry_init_s *entry;
+
+ entry = cqm_get_cla_init_entry(cqm_handle, cla_table->type);
+ if (entry && entry->cqm_cla_init_handler)
+ entry->cqm_cla_init_handler(cqm_handle, cla_table, capability);
+}
+
+static s32 cqm_cla_fill_entry(struct cqm_handle_s *cqm_handle)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret = CQM_FAIL;
+
+ /* After the allocation of CLA entry, fill in the BAT table */
+ cqm_bat_fill_cla(cqm_handle);
+
+ /* Notify the chip to refresh the BAT table */
+ ret = cqm_bat_update(cqm_handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+ return CQM_FAIL;
+ }
+
+ cqm_info(handle->dev_hdl, "Timer start: func_type=%d, timer_enable=%u\n",
+ cqm_handle->func_attribute.func_type,
+ cqm_handle->func_capability.timer_enable);
+
+ if ((cqm_handle->func_attribute.func_type == CQM_PPF) &&
+ (cqm_handle->func_capability.timer_enable == CQM_TIMER_ENABLE)) {
+ /* After the timer resource is allocated,
+ * the timer needs to be enabled
+ */
+ cqm_info(handle->dev_hdl, "Timer start: hifc ppf timer start\n");
+ ret = hifc_ppf_tmr_start((void *)(cqm_handle->ex_handle));
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Timer start: hifc ppf timer start, ret=%d\n",
+ ret);
+ return CQM_FAIL;
+ }
+ }
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_init(struct cqm_handle_s *cqm_handle)
+{
+ struct cqm_func_capability_s *capability = &cqm_handle->func_capability;
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ s32 inv_flag = 0;
+ u32 i = 0;
+ u32 j = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ cla_table->type = bat_table->bat_entry_type[i];
+
+ cqm_cla_init_entry(cqm_handle, cla_table, capability);
+
+ /* Allocate CLA entry space of all levels */
+ if ((cla_table->type >= CQM_BAT_ENTRY_T_HASH) &&
+ (cla_table->type <= CQM_BAT_ENTRY_T_REORDER)) {
+ /* Only needs to allocate timer resources for PPF,
+ * 8 wheels * 2k scales * 32B * func_num, for PF, there
+ * is no need to allocate resources for the timer, nor
+ * to fill in the structure of the timer entry in the
+ * BAT table.
+ */
+ if (!((cla_table->type == CQM_BAT_ENTRY_T_TIMER) &&
+ (cqm_handle->func_attribute.func_type
+ != CQM_PPF))) {
+ if (cqm_cla_xyz(cqm_handle, cla_table) ==
+ CQM_FAIL)
+ goto err;
+ }
+ }
+ mutex_init(&cla_table->lock);
+ }
+ if (cqm_cla_fill_entry(cqm_handle) == CQM_FAIL)
+ goto err;
+
+ return CQM_SUCCESS;
+
+err:
+ for (j = 0; j < i; j++) {
+ cla_table = &bat_table->entry[j];
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_x_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_y_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_z_buf,
+ &inv_flag);
+ }
+ }
+
+ return CQM_FAIL;
+}
+
+void cqm_cla_uninit(struct cqm_handle_s *cqm_handle)
+{
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ s32 inv_flag = 0;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_x_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_y_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_z_buf,
+ &inv_flag);
+ }
+ }
+}
+
+s32 cqm_cla_update(struct cqm_handle_s *cqm_handle,
+ struct cqm_buf_list_s *buf_node_parent,
+ struct cqm_buf_list_s *buf_node_child,
+ u32 child_index, u8 cla_update_mode)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmd_buf_s *buf_in = NULL;
+ struct cqm_cla_update_cmd_s *cmd = NULL;
+ dma_addr_t pa = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+
+ buf_in = cqm_cmd_alloc((void *)(cqm_handle->ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, return CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+ buf_in->size = sizeof(struct cqm_cla_update_cmd_s);
+
+ /* Fill the command format and convert to big endian */
+ cmd = (struct cqm_cla_update_cmd_s *)(buf_in->buf);
+
+ pa = buf_node_parent->pa + (child_index * sizeof(dma_addr_t));
+ cmd->gpa_h = CQM_ADDR_HI(pa);
+ cmd->gpa_l = CQM_ADDR_LW(pa);
+
+ pa = buf_node_child->pa;
+ cmd->value_h = CQM_ADDR_HI(pa);
+ cmd->value_l = CQM_ADDR_LW(pa);
+
+ cqm_dbg("Cla alloc: cqm_cla_update, gpa=0x%x 0x%x, value=0x%x 0x%x, cla_update_mode=0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->value_h, cmd->value_l,
+ cla_update_mode);
+
+ /* CLA GPA check */
+ if (gpa_check_enable) {
+ switch (cla_update_mode) {
+ /* gpa[0]=1 means this GPA is valid */
+ case CQM_CLA_RECORD_NEW_GPA:
+ cmd->value_l |= 1;
+ break;
+ /* gpa[0]=0 marks this GPA as invalid */
+ case CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID:
+ case CQM_CLA_DEL_GPA_WITH_CACHE_INVALID:
+ cmd->value_l &= (~1);
+ break;
+ default:
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cqm_cla_update, cqm_cla_update, wrong cla_update_mode=%u\n",
+ cla_update_mode);
+ break;
+ }
+ }
+
+ cqm_swab32((u8 *)cmd, (sizeof(struct cqm_cla_update_cmd_s) >> 2));
+
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle),
+ CQM_CMD_ACK_TYPE_CMDQ,
+ CQM_MOD_CQM, CQM_CMD_T_CLA_UPDATE,
+ buf_in, NULL, CQM_CMD_TIMEOUT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cqm_cla_update, cqm_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl, "Cla alloc: cqm_cla_update, cla_update_cmd: 0x%x 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->value_h, cmd->value_l);
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_FAIL;
+ }
+
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_SUCCESS;
+}
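+
+/*
+ * Example (illustrative GPA value): recording a new child trunk at
+ * 0x12345000 with gpa_check_enable set programs value_l = 0x12345001,
+ * because bit 0 marks the GPA as valid; the delete modes clear bit 0
+ * instead.
+ */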
+
+/**
+ * cqm_cla_alloc - Allocate a CLA trunk page
+ * @cqm_handle: cqm handle
+ * @cla_table: cla table
+ * @buf_node_parent: the parent node whose content is to be updated
+ * @buf_node_child: the child node whose content is to be allocated
+ * @child_index: child index
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_cla_alloc(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ struct cqm_buf_list_s *buf_node_parent,
+ struct cqm_buf_list_s *buf_node_child, u32 child_index)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret = CQM_FAIL;
+
+ /* Allocate trunk page */
+ buf_node_child->va = (u8 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ cla_table->trunk_order);
+ CQM_PTR_CHECK_RET(buf_node_child->va, return CQM_FAIL,
+ CQM_ALLOC_FAIL(va));
+
+ /* pci mapping */
+ buf_node_child->pa =
+ pci_map_single(cqm_handle->dev, buf_node_child->va,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, buf_node_child->pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_node_child->pa));
+ goto err1;
+ }
+
+ /* Notify the chip of trunk_pa and
+ * let it fill in the cla table entry
+ */
+ ret = cqm_cla_update(cqm_handle, buf_node_parent,
+ buf_node_child, child_index,
+ CQM_CLA_RECORD_NEW_GPA);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ goto err2;
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+err1:
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+ return CQM_FAIL;
+}
+
+void cqm_cla_free(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ struct cqm_buf_list_s *buf_node_parent,
+ struct cqm_buf_list_s *buf_node_child,
+ u32 child_index, u8 cla_update_mode)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_dbg("Cla free: cla_update_mode=%u\n", cla_update_mode);
+
+ if (cqm_cla_update(cqm_handle, buf_node_parent,
+ buf_node_child, child_index,
+ cla_update_mode) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ return;
+ }
+
+ if (cla_update_mode == CQM_CLA_DEL_GPA_WITH_CACHE_INVALID) {
+ if (cqm_cla_cache_invalid(
+ cqm_handle, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_cache_invalid));
+ return;
+ }
+ }
+
+ /* Remove the PCI mapping of the trunk page */
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* Free trunk page */
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+}
+
+/**
+ * cqm_static_qpc_cla_get - For statically allocated QPC, get the count of
+ * buffers starting at the index position in the cla table, without locking
+ * @cqm_handle: cqm handle
+ * @cla_table: cla table
+ * @index: the index into the table
+ * @count: the number of buffers
+ * @pa: returns the physical address
+ * Return: the virtual address
+ */
+u8 *cqm_static_qpc_cla_get(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_s *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf_s *cla_z_buf = &cla_table->cla_z_buf;
+ struct cqm_buf_list_s *buf_node_z = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u32 trunk_size = PAGE_SIZE << cla_table->trunk_order;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0) {
+ offset = index * cla_table->obj_size;
+ ret_addr = (u8 *)(cla_z_buf->buf_list->va) + offset;
+ *pa = cla_z_buf->buf_list->pa + offset;
+ } else if (cla_table->cla_lvl == CQM_CLA_LVL_1) {
+ z_index = index & ((1 << (cla_table->z + 1)) - 1);
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Static qpc cla get: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ return NULL;
+ }
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ if (!buf_node_z->va) {
+ cqm_err(handle->dev_hdl, "Cla get: static qpc cla_z_buf[%u].va=NULL\n",
+ y_index);
+ return NULL;
+ }
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+ } else {
+ z_index = index & ((1 << (cla_table->z + 1)) - 1);
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1 << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+
+ if ((x_index >= cla_y_buf->buf_number) ||
+ ((x_index * (trunk_size / sizeof(dma_addr_t)) + y_index) >=
+ cla_z_buf->buf_number)) {
+ cqm_err(handle->dev_hdl,
+ "Static qpc cla get: index exceeds buf_number,x_index %u, y_index %u, y_buf_number %u, z_buf_number %u\n ",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ return NULL;
+ }
+
+ buf_node_z = &(cla_z_buf->buf_list[x_index *
+ (trunk_size / sizeof(dma_addr_t)) + y_index]);
+ if (!buf_node_z->va) {
+ cqm_err(handle->dev_hdl, "Cla get: static qpc cla_z_buf.va=NULL\n");
+ return NULL;
+ }
+
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+ }
+
+ return ret_addr;
+}
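+
+/*
+ * Index decomposition example (level 1, illustrative values): with
+ * obj_size = 256 and trunk_size = 4 KiB, cla_table->z = 3, so index 0x25
+ * splits into z_index = 0x25 & 0xf = 5 (the object within the trunk) and
+ * y_index = 0x25 >> 4 = 2 (which z-level trunk holds it).
+ */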
+
+static s32 cqm_cla_get_level_two(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ u32 index, u32 count,
+ dma_addr_t *pa, u8 **ret_addr)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_s *cla_x_buf = &cla_table->cla_x_buf;
+ struct cqm_buf_s *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf_s *cla_z_buf = &cla_table->cla_z_buf;
+ struct cqm_buf_list_s *buf_node_x = NULL;
+ struct cqm_buf_list_s *buf_node_y = NULL;
+ struct cqm_buf_list_s *buf_node_z = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u32 trunk_size = PAGE_SIZE << cla_table->trunk_order;
+ u32 offset = 0;
+
+ z_index = index & ((1 << (cla_table->z + 1)) - 1);
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1 << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+
+ if ((x_index >= cla_y_buf->buf_number) ||
+ ((x_index * (trunk_size / sizeof(dma_addr_t)) + y_index) >=
+ cla_z_buf->buf_number)) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, x_index %u, y_index %u, y_buf_number %u, z_buf_number %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ return CQM_FAIL;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &(cla_z_buf->buf_list[x_index *
+ (trunk_size / sizeof(dma_addr_t)) + y_index]);
+
+ /* Y buf node does not exist, allocates y node page */
+ if (!buf_node_y->va) {
+ if (cqm_cla_alloc(
+ cqm_handle, cla_table,
+ buf_node_x, buf_node_y, x_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ return CQM_FAIL;
+ }
+ }
+
+ /* Z buf node does not exist, allocates z node page */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle,
+ cla_table,
+ buf_node_y,
+ buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ if (buf_node_y->refcount == 0) {
+ /* Free y node, needs cache_invalid */
+ cqm_cla_free(
+ cqm_handle, cla_table,
+ buf_node_x, buf_node_y, x_index,
+ CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ }
+ return CQM_FAIL;
+ }
+
+ cqm_dbg("Cla get: 2L: y_refcount=0x%x\n", buf_node_y->refcount);
+ /* Y buf node's reference count should be +1 */
+ buf_node_y->refcount++;
+ }
+
+ cqm_dbg("Cla get: 2L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ *ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_get_level_one(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ u32 index, u32 count, dma_addr_t *pa,
+ u8 **ret_addr)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_s *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf_s *cla_z_buf = &cla_table->cla_z_buf;
+ struct cqm_buf_list_s *buf_node_y = NULL;
+ struct cqm_buf_list_s *buf_node_z = NULL;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u32 offset = 0;
+
+ z_index = index & ((1 << (cla_table->z + 1)) - 1);
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ return CQM_FAIL;
+ }
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+ /* Z buf node does not exist, first allocate the page */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle,
+ cla_table,
+ buf_node_y,
+ buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ cqm_err(handle->dev_hdl,
+ "Cla get: cla_table->type=%u\n",
+ cla_table->type);
+ return CQM_FAIL;
+ }
+ }
+
+ cqm_dbg("Cla get: 1L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ *ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_cla_get - Get the count of buffers starting at the index position in
+ * the cla table, allocating CLA trunk pages on demand
+ * @cqm_handle: cqm handle
+ * @cla_table: cla table
+ * @index: the index into the table
+ * @count: the number of buffers
+ * @pa: returns the physical address
+ * Return: the virtual address
+ */
+u8 *cqm_cla_get(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table, u32 index,
+ u32 count, dma_addr_t *pa)
+{
+ struct cqm_buf_s *cla_z_buf = &cla_table->cla_z_buf;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
+ mutex_lock(&cla_table->lock);
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0) {
+ /* Level 0 CLA pages are statically allocated */
+ offset = index * cla_table->obj_size;
+ ret_addr = (u8 *)(cla_z_buf->buf_list->va) + offset;
+ *pa = cla_z_buf->buf_list->pa + offset;
+ } else if (cla_table->cla_lvl == CQM_CLA_LVL_1) {
+ if (cqm_cla_get_level_one(cqm_handle, cla_table,
+ index, count,
+ pa, &ret_addr) == CQM_FAIL) {
+ mutex_unlock(&cla_table->lock);
+ return NULL;
+ }
+ } else {
+ if (cqm_cla_get_level_two(cqm_handle,
+ cla_table,
+ index,
+ count,
+ pa,
+ &ret_addr) == CQM_FAIL) {
+ mutex_unlock(&cla_table->lock);
+ return NULL;
+ }
+ }
+
+ mutex_unlock(&cla_table->lock);
+ return ret_addr;
+}
+
+/**
+ * cqm_cla_put - Decrease the reference count of a trunk page; when it reaches
+ * 0, release the trunk page
+ * @cqm_handle: cqm handle
+ * @cla_table: cla table
+ * @index: the index into the table
+ * @count: the number of buffers being released
+ */
+void cqm_cla_put(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ u32 index, u32 count)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_s *cla_x_buf = &cla_table->cla_x_buf;
+ struct cqm_buf_s *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf_s *cla_z_buf = &cla_table->cla_z_buf;
+ struct cqm_buf_list_s *buf_node_x = NULL;
+ struct cqm_buf_list_s *buf_node_y = NULL;
+ struct cqm_buf_list_s *buf_node_z = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 trunk_size = PAGE_SIZE << cla_table->trunk_order;
+
+ /* Buffer is statically allocated,
+ * no need to control the reference count
+ */
+ if (cla_table->alloc_static == true)
+ return;
+
+ mutex_lock(&cla_table->lock);
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_1) {
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ cqm_err(handle->dev_hdl,
+ "Cla put: cla_table->type=%u\n",
+ cla_table->type);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+ /* When the z node page reference count is 0,
+ * release the z node page
+ */
+ cqm_dbg("Cla put: 1L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0) {
+ /* Z node does not need cache invalid */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+ }
+ } else if (cla_table->cla_lvl == CQM_CLA_LVL_2) {
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1 << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+
+ if ((x_index >= cla_y_buf->buf_number) ||
+ ((x_index * (trunk_size / sizeof(dma_addr_t)) + y_index) >=
+ cla_z_buf->buf_number)) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf_number, x_index %u, y_index %u, y_buf_number %u, z_buf_number %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &(cla_z_buf->buf_list[x_index *
+ (trunk_size / sizeof(dma_addr_t)) + y_index]);
+ cqm_dbg("Cla put: 2L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+
+ /* When the z node page reference count is 0,
+ * release the z node page
+ */
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0) {
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+
+ /* When the y node page reference count is 0,
+ * release the y node page
+ */
+ cqm_dbg("Cla put: 2L: y_refcount=0x%x\n",
+ buf_node_y->refcount);
+ buf_node_y->refcount--;
+ if (buf_node_y->refcount == 0) {
+ /* Y node needs cache invalid */
+ cqm_cla_free(
+ cqm_handle, cla_table, buf_node_x,
+ buf_node_y, x_index,
+ CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ }
+ }
+ }
+
+ mutex_unlock(&cla_table->lock);
+}
+
+/**
+ * cqm_cla_table_get - Find the CLA table structure corresponding to a BAT entry
+ * @bat_table: bat table
+ * @entry_type: entry type
+ * Return: the CLA table
+ */
+struct cqm_cla_table_s *cqm_cla_table_get(struct cqm_bat_table_s *bat_table,
+ u32 entry_type)
+{
+ struct cqm_cla_table_s *cla_table = NULL;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (entry_type == cla_table->type)
+ return cla_table;
+ }
+
+ return NULL;
+}
+
+#define bitmap_section
+
+/**
+ * __cqm_bitmap_init - Initialize a bitmap
+ * @bitmap: cqm bitmap table
+ * Return: 0 - success, negative - failure
+ */
+s32 __cqm_bitmap_init(struct cqm_bitmap_s *bitmap)
+{
+ spin_lock_init(&bitmap->lock);
+
+	/* The max_num of the bitmap is rounded up to a multiple of 8 and then
+	 * shifted right by 3 bits to get the number of bytes needed
+ */
+ bitmap->table =
+ (ulong *)vmalloc((ALIGN(bitmap->max_num, 8) >> 3));
+ CQM_PTR_CHECK_RET(bitmap->table, return CQM_FAIL,
+ CQM_ALLOC_FAIL(bitmap->table));
+ memset(bitmap->table, 0, (ALIGN(bitmap->max_num, 8) >> 3));
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_bitmap_init_by_type(struct cqm_handle_s *cqm_handle,
+ struct cqm_cla_table_s *cla_table,
+ struct cqm_bitmap_s *bitmap)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_func_capability_s *capability = &cqm_handle->func_capability;
+ s32 ret = CQM_SUCCESS;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ bitmap->max_num = capability->qpc_number;
+ bitmap->reserved_top = capability->qpc_reserved;
+ bitmap->last = capability->qpc_reserved;
+ cqm_info(handle->dev_hdl, "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = __cqm_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ bitmap->max_num = capability->mpt_number;
+ bitmap->reserved_top = capability->mpt_reserved;
+ bitmap->last = capability->mpt_reserved;
+ cqm_info(handle->dev_hdl, "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = __cqm_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ bitmap->max_num = capability->scqc_number;
+ bitmap->reserved_top = capability->scq_reserved;
+ bitmap->last = capability->scq_reserved;
+ cqm_info(handle->dev_hdl, "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = __cqm_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ bitmap->max_num = capability->srqc_number;
+ bitmap->reserved_top = 0;
+ bitmap->last = 0;
+ cqm_info(handle->dev_hdl, "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = __cqm_bitmap_init(bitmap);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+/**
+ * cqm_bitmap_init - Initialize a bitmap
+ * @cqm_handle: cqm handle
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_bitmap_init(struct cqm_handle_s *cqm_handle)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ struct cqm_bitmap_s *bitmap = NULL;
+ u32 i = 0;
+ s32 ret = CQM_SUCCESS;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl, "Cla alloc: cla_type %u, obj_num=0, don't init bitmap\n",
+ cla_table->type);
+ continue;
+ }
+
+ bitmap = &cla_table->bitmap;
+ ret = cqm_bitmap_init_by_type(cqm_handle, cla_table, bitmap);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Bitmap init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_bitmap_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_bitmap_uninit - Uninitialize a bitmap
+ * @cqm_handle: cqm handle
+ */
+void cqm_bitmap_uninit(struct cqm_handle_s *cqm_handle)
+{
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ struct cqm_bitmap_s *bitmap = NULL;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ bitmap = &cla_table->bitmap;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ if (bitmap->table) {
+ vfree(bitmap->table);
+ bitmap->table = NULL;
+ }
+ }
+ }
+}
+
+/**
+ * cqm_bitmap_check_range - Starting from begin, check whether count bits in
+ * the table are free. Two conditions must hold: 1. the bits must not cross a
+ * step boundary; 2. all of the bits must be 0
+ * @table: bitmap table
+ * @step: steps
+ * @max_num: max num
+ * @begin: begin position
+ * @count: the count of bit to check
+ * Return: begin if the check passes, otherwise the next position to try
+ */
+u32 cqm_bitmap_check_range(const ulong *table, u32 step,
+ u32 max_num, u32 begin, u32 count)
+{
+ u32 i = 0;
+ u32 end = (begin + (count - 1));
+
+ /* Single bit is not checked */
+ if (count == 1)
+ return begin;
+
+ /* End is out of bounds */
+ if (end >= max_num)
+ return max_num;
+
+ /* Bit check, if there is a bit other than 0, return next bit */
+ for (i = (begin + 1); i <= end; i++) {
+ if (test_bit((s32)i, table))
+ return i + 1;
+ }
+
+ /* Check if it is in a different step */
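+	/* e.g. with step = 256, begin = 250, count = 10: the range 250..259
+	 * would span two steps, so the start of the next step (256) is
+	 * returned.
+	 */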
+ if ((begin & (~(step - 1))) != (end & (~(step - 1))))
+ return (end & (~(step - 1)));
+
+ /* If check pass, return begin position */
+ return begin;
+}
+
+static void cqm_bitmap_set_bit(struct cqm_bitmap_s *bitmap, u32 index,
+ u32 max_num, u32 count, bool update_last,
+ ulong *table)
+{
+ u32 i;
+
+ /* Set 1 to the found bit and reset last */
+ if (index < max_num) {
+ for (i = index; i < (index + count); i++)
+ set_bit(i, table);
+
+ if (update_last) {
+ bitmap->last = (index + count);
+ if (bitmap->last >= bitmap->max_num)
+ bitmap->last = bitmap->reserved_top;
+ }
+ }
+}
+
+/**
+ * cqm_bitmap_alloc - Allocate a bitmap index. Indexes 0 and 1 should not be
+ * used. The scan starts from the position of the last allocation, a series of
+ * consecutive indexes can be allocated, and the range must not cross a trunk
+ * page.
+ * @bitmap: cqm bitmap table
+ * @step: steps
+ * @count: the count of bit to check
+ * @update_last: update last
+ * Return: Success - return the index, failure - return the max
+ */
+u32 cqm_bitmap_alloc(struct cqm_bitmap_s *bitmap, u32 step, u32 count,
+ bool update_last)
+{
+ u32 index = 0;
+ u32 max_num = bitmap->max_num;
+ u32 last = bitmap->last;
+ ulong *table = bitmap->table;
+
+ spin_lock(&bitmap->lock);
+
+ /* Search for a free bit from the last position */
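+	/* cqm_bitmap_check_range() returns index itself when count consecutive
+	 * bits starting there are free and do not cross a step; otherwise it
+	 * returns the next position to retry from, so the loop repeats until
+	 * both values agree or the table is exhausted.
+	 */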
+ do {
+ index = find_next_zero_bit(table, max_num, last);
+ if (index < max_num) {
+ last = cqm_bitmap_check_range(table, step,
+ max_num, index, count);
+ } else {
+ break;
+ }
+ } while (last != index);
+
+ /* The above search failed, search for a free bit from the beginning */
+ if (index >= max_num) {
+ last = bitmap->reserved_top;
+ do {
+ index = find_next_zero_bit(table, max_num, last);
+ if (index < max_num) {
+ last = cqm_bitmap_check_range(table, step,
+ max_num,
+ index, count);
+ } else {
+ break;
+ }
+ } while (last != index);
+ }
+ cqm_bitmap_set_bit(bitmap, index, max_num, count, update_last, table);
+ spin_unlock(&bitmap->lock);
+ return index;
+}
+
+/**
+ * cqm_bitmap_alloc_reserved - Allocate the reserved bit specified by index
+ * @bitmap: bitmap table
+ * @count: count
+ * @index: the index of bitmap
+ * Return: Success - return the index, failure - return the max
+ */
+u32 cqm_bitmap_alloc_reserved(struct cqm_bitmap_s *bitmap, u32 count, u32 index)
+{
+ ulong *table = bitmap->table;
+ u32 ret_index = CQM_INDEX_INVALID;
+
+ if ((index >= bitmap->reserved_top) || (index >= bitmap->max_num) ||
+ (count != 1)) {
+ return CQM_INDEX_INVALID;
+ }
+
+ spin_lock(&bitmap->lock);
+
+ if (test_bit(index, table)) {
+ ret_index = CQM_INDEX_INVALID;
+ } else {
+ set_bit(index, table);
+ ret_index = index;
+ }
+
+ spin_unlock(&bitmap->lock);
+ return ret_index;
+}
+
+/**
+ * cqm_bitmap_free - Release a bitmap index
+ * @bitmap: bitmap table
+ * @index: the index of bitmap
+ * @count: count
+ */
+void cqm_bitmap_free(struct cqm_bitmap_s *bitmap, u32 index, u32 count)
+{
+ ulong i = 0;
+
+ spin_lock(&bitmap->lock);
+
+ for (i = index; i < (index + count); i++)
+ clear_bit((s32)i, bitmap->table);
+
+ spin_unlock(&bitmap->lock);
+}
+
+#define obj_table_section
+
+/**
+ * __cqm_object_table_init - Initialize a table associating objects with indexes
+ * @obj_table: object table
+ * Return: 0 - success, negative - failure
+ */
+s32 __cqm_object_table_init(struct cqm_object_table_s *obj_table)
+{
+ rwlock_init(&obj_table->lock);
+
+ obj_table->table = (struct cqm_object_s **)vmalloc(obj_table->max_num *
+ sizeof(void *));
+ CQM_PTR_CHECK_RET(obj_table->table, return CQM_FAIL,
+ CQM_ALLOC_FAIL(table));
+ memset(obj_table->table, 0, obj_table->max_num * sizeof(void *));
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_object_table_init - Initialize the association table of object and index
+ * @cqm_handle: cqm handle
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_object_table_init(struct cqm_handle_s *cqm_handle)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_func_capability_s *capability = &cqm_handle->func_capability;
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ struct cqm_object_table_s *obj_table = NULL;
+ s32 ret = CQM_SUCCESS;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Obj table init: cla_table_type %u, obj_num=0, don't init obj table\n",
+ cla_table->type);
+ continue;
+ }
+
+ obj_table = &cla_table->obj_table;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ obj_table->max_num = capability->qpc_number;
+ ret = __cqm_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ obj_table->max_num = capability->mpt_number;
+ ret = __cqm_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ obj_table->max_num = capability->scqc_number;
+ ret = __cqm_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ obj_table->max_num = capability->srqc_number;
+ ret = __cqm_object_table_init(obj_table);
+ break;
+ default:
+ break;
+ }
+
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Obj table init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_object_table_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_object_table_uninit - Deinitialize the association table of object and
+ * index
+ * @cqm_handle: cqm handle
+ */
+void cqm_object_table_uninit(struct cqm_handle_s *cqm_handle)
+{
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ struct cqm_object_table_s *obj_table = NULL;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ obj_table = &cla_table->obj_table;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ if (obj_table->table) {
+ vfree(obj_table->table);
+ obj_table->table = NULL;
+ }
+ }
+ }
+}
+
+/**
+ * cqm_object_table_insert - Insert an object, turn off the soft interrupt
+ * @cqm_handle: cqm handle
+ * @object_table: object table
+ * @index: the index of table
+ * @obj: the object to insert
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_object_table_insert(struct cqm_handle_s *cqm_handle,
+ struct cqm_object_table_s *object_table, u32 index,
+ struct cqm_object_s *obj)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl, "Obj table insert: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return CQM_FAIL;
+ }
+
+ write_lock(&object_table->lock);
+
+ if (!object_table->table[index]) {
+ object_table->table[index] = obj;
+ write_unlock(&object_table->lock);
+ return CQM_SUCCESS;
+ }
+ write_unlock(&object_table->lock);
+ cqm_err(handle->dev_hdl, "Obj table insert: object_table->table[0x%x] has been inserted\n",
+ index);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_object_table_remove - remove an object
+ * @cqm_handle: cqm handle
+ * @object_table: object table
+ * @index: the index of table
+ * @obj: the object to remove
+ */
+void cqm_object_table_remove(struct cqm_handle_s *cqm_handle,
+ struct cqm_object_table_s *object_table,
+ u32 index, const struct cqm_object_s *obj)
+{
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl, "Obj table remove: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return;
+ }
+
+ write_lock(&object_table->lock);
+
+ if ((object_table->table[index]) &&
+ (object_table->table[index] == obj)) {
+ object_table->table[index] = NULL;
+ } else {
+ cqm_err(handle->dev_hdl, "Obj table remove: object_table->table[0x%x] has been removed\n",
+ index);
+ }
+
+ write_unlock(&object_table->lock);
+}
+
+/**
+ * cqm_srq_used_rq_delete - Delete rq in TOE SRQ mode
+ * @object: cqm object
+ */
+void cqm_srq_used_rq_delete(struct cqm_object_s *object)
+{
+ struct cqm_queue_s *common = container_of(object, struct cqm_queue_s,
+ object);
+ struct cqm_nonrdma_qinfo_s *qinfo = container_of(
+ common,
+ struct cqm_nonrdma_qinfo_s,
+ common);
+ u32 link_wqe_offset = qinfo->wqe_per_buf * qinfo->wqe_size;
+ struct cqm_srq_linkwqe_s *srq_link_wqe = NULL;
+ dma_addr_t addr;
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ (common->object.cqm_handle);
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+
+	/* The current SRQ solution does not support initializing an RQ
+	 * without a container, which may cause errors when RQ resources are
+	 * released. So the RQ is initialized with exactly one container,
+	 * and only one container is released when the resources are freed.
+ */
+ CQM_PTR_CHECK_NO_RET(
+		common->head_container, "Rq del: rq has no container to release\n",
+ return);
+
+	/* Get current container pa from link wqe, and unmap it */
+ srq_link_wqe = (struct cqm_srq_linkwqe_s *)(common->head_container +
+ link_wqe_offset);
+ /* Convert only the big endian of the wqe part of the link */
+ cqm_swab32((u8 *)(srq_link_wqe), sizeof(struct cqm_linkwqe_s) >> 2);
+
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_gpa_h,
+ srq_link_wqe->current_buffer_gpa_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq del: buffer physical addr is null\n");
+ return;
+ }
+ pci_unmap_single(cqm_handle->dev, addr, qinfo->container_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* Get current container va from link wqe, and free it */
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_addr_h,
+ srq_link_wqe->current_buffer_addr_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq del: buffer virtual addr is null\n");
+ return;
+ }
+ kfree((void *)addr);
+}
+
+#define obj_intern_if_section
+
+/**
+ * cqm_qpc_mpt_bitmap_alloc - Apply index from bitmap when creating qpc or mpt
+ * @object: cqm object
+ * @cla_table: cla table
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_qpc_mpt_bitmap_alloc(struct cqm_object_s *object,
+ struct cqm_cla_table_s *cla_table)
+{
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ object->cqm_handle;
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_qpc_mpt_s *common = container_of(object,
+ struct cqm_qpc_mpt_s,
+ object);
+ struct cqm_qpc_mpt_info_s *qpc_mpt_info =
+ container_of(
+ common,
+ struct cqm_qpc_mpt_info_s,
+ common);
+ struct cqm_bitmap_s *bitmap = &cla_table->bitmap;
+ u32 index = 0;
+ u32 count = 0;
+
+ count = (ALIGN(object->object_size, cla_table->obj_size)) /
+ cla_table->obj_size;
+ qpc_mpt_info->index_count = count;
+
+ if (qpc_mpt_info->common.xid == CQM_INDEX_INVALID) {
+ /* Allocate index normally */
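+		/* The step of 1 << (z + 1) keeps the allocated range inside a
+		 * single trunk page, so the buffer never crosses trunk pages.
+		 */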
+ index = cqm_bitmap_alloc(
+ bitmap,
+ 1 << (cla_table->z + 1),
+ count,
+ cqm_handle->func_capability.xid_alloc_mode);
+ if (index < bitmap->max_num) {
+ qpc_mpt_info->common.xid = index;
+ } else {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+ } else {
+ /* Allocate reserved index */
+ index = cqm_bitmap_alloc_reserved(
+ bitmap, count,
+ qpc_mpt_info->common.xid);
+ if (index != qpc_mpt_info->common.xid) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_alloc_reserved));
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+static struct cqm_cla_table_s *cqm_qpc_mpt_prepare_cla_table(
+ struct cqm_object_s *object)
+{
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ object->cqm_handle;
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+
+ struct cqm_cla_table_s *cla_table = NULL;
+
+ /* Get the corresponding cla table */
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return NULL;
+ }
+
+ CQM_PTR_CHECK_RET(cla_table, return NULL,
+ CQM_FUNCTION_FAIL(cqm_cla_table_get));
+
+ /* Allocate index for bitmap */
+ if (cqm_qpc_mpt_bitmap_alloc(object, cla_table) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_qpc_mpt_bitmap_alloc));
+ return NULL;
+ }
+
+ return cla_table;
+}
+
+/**
+ * cqm_qpc_mpt_create - Create qpc or mpt
+ * @object: cqm object
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_qpc_mpt_create(struct cqm_object_s *object)
+{
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ object->cqm_handle;
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_qpc_mpt_s *common =
+ container_of(object, struct cqm_qpc_mpt_s, object);
+ struct cqm_qpc_mpt_info_s *qpc_mpt_info =
+ container_of(common, struct cqm_qpc_mpt_info_s, common);
+ struct cqm_cla_table_s *cla_table = NULL;
+ struct cqm_bitmap_s *bitmap = NULL;
+ struct cqm_object_table_s *object_table = NULL;
+ u32 index = 0;
+ u32 count = 0;
+
+ cla_table = cqm_qpc_mpt_prepare_cla_table(object);
+ CQM_PTR_CHECK_RET(cla_table, return CQM_FAIL,
+ CQM_FUNCTION_FAIL(cqm_qpc_mpt_prepare_cla_table));
+
+ bitmap = &cla_table->bitmap;
+ index = qpc_mpt_info->common.xid;
+ count = qpc_mpt_info->index_count;
+
+	/* Find the trunk page from BAT/CLA and allocate the buffer; the
+	 * service must ensure that a released buffer has been cleared
+ */
+ if (cla_table->alloc_static == true) {
+ qpc_mpt_info->common.vaddr =
+ cqm_static_qpc_cla_get(cqm_handle, cla_table,
+ index, count, &common->paddr);
+ } else {
+ qpc_mpt_info->common.vaddr =
+ cqm_cla_get(cqm_handle, cla_table,
+ index, count, &common->paddr);
+ }
+ if (!qpc_mpt_info->common.vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_get));
+ cqm_err(handle->dev_hdl,
+ "Qpc mpt init: qpc mpt vaddr is null, cla_table->alloc_static=%d\n",
+ cla_table->alloc_static);
+ goto err1;
+ }
+
+ /* Associate index with object, FC executes in interrupt context */
+ object_table = &cla_table->obj_table;
+
+ if (cqm_object_table_insert(cqm_handle, object_table, index, object) !=
+ CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ goto err2;
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+err1:
+ cqm_bitmap_free(bitmap, index, count);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_qpc_mpt_delete - Delete qpc or mpt
+ * @object: cqm object
+ */
+void cqm_qpc_mpt_delete(struct cqm_object_s *object)
+{
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ object->cqm_handle;
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_qpc_mpt_s *common = container_of(object,
+ struct cqm_qpc_mpt_s,
+ object);
+ struct cqm_qpc_mpt_info_s *qpc_mpt_info = container_of(
+ common,
+ struct cqm_qpc_mpt_info_s,
+ common);
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ struct cqm_bitmap_s *bitmap = NULL;
+ struct cqm_object_table_s *object_table = NULL;
+ u32 index = qpc_mpt_info->common.xid;
+ u32 count = qpc_mpt_info->index_count;
+
+	/* Find the corresponding cla table */
+ atomic_inc(&cqm_handle->ex_handle->hw_stats.cqm_stats.cqm_qpc_mpt_delete_cnt);
+
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return;
+ }
+
+ CQM_PTR_CHECK_NO_RET(
+ cla_table, CQM_FUNCTION_FAIL(cqm_cla_table_get), return);
+
+ /* Disassociate index with object */
+ object_table = &cla_table->obj_table;
+
+ cqm_object_table_remove(cqm_handle, object_table, index, object);
+
+ /* Wait for the completion and ensure that all references to the QPC
+ * are completed
+ */
+ if (atomic_dec_and_test(&object->refcount)) {
+ complete(&object->free);
+ } else {
+ cqm_err(handle->dev_hdl,
+ "Qpc mpt del: object is referred by others, has to wait for completion\n");
+ }
+
+ /* The QPC static allocation needs to be non-blocking, and the service
+ * guarantees that the QPC is completed when the QPC is deleted
+ */
+ if (cla_table->alloc_static == false)
+ wait_for_completion(&object->free);
+ /* Free qpc buffer */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* Free index into bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+}
+
+/**
+ * cqm_linkwqe_fill - Fill link wqe for non RDMA queue buffer
+ * @buf: cqm buffer
+ * @wqe_per_buf: not include link wqe
+ * @wqe_size: wqe size
+ * @wqe_number: not include link wqe
+ * @tail: true - the link wqe must be at the end of the page; false - it may not be
+ * @link_mode: link wqe mode
+ */
+void cqm_linkwqe_fill(struct cqm_buf_s *buf,
+ u32 wqe_per_buf,
+ u32 wqe_size,
+ u32 wqe_number,
+ bool tail,
+ u8 link_mode)
+{
+ struct cqm_linkwqe_s *wqe = NULL;
+ struct cqm_linkwqe_128b_s *linkwqe = NULL;
+ u8 *va = NULL;
+ u32 i = 0;
+ dma_addr_t addr;
+
+	/* For every buffer except the last one, the link wqe is placed
+	 * directly at the tail of the buffer
+ */
+ for (i = 0; i < buf->buf_number; i++) {
+ va = (u8 *)(buf->buf_list[i].va);
+
+ if (i != (buf->buf_number - 1)) {
+ wqe = (struct cqm_linkwqe_s *)(va + (u32)(wqe_size *
+ wqe_per_buf));
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+ wqe->lp = CQM_LINK_WQE_LP_INVALID;
+ /* The Obit of valid link wqe needs to be set to 1, and
+ * each service needs to confirm that o-bit=1 means
+ * valid, o-bit=0 means invalid
+ */
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[(u32)(i + 1)].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ } else {
+			/* The link wqe of the last buffer needs special handling */
+ if (tail == true) {
+ /* Must be filled at the end of the page */
+ wqe = (struct cqm_linkwqe_s *)(va +
+ (u32)(wqe_size * wqe_per_buf));
+ } else {
+ /* The last linkwqe is filled immediately after
+ * the last wqe
+ */
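+				/* (wqe_number - wqe_per_buf * (buf_number - 1))
+				 * is the number of wqes that land in this last,
+				 * partially filled buffer
+				 */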
+ wqe = (struct cqm_linkwqe_s *)
+ (va + (u32)(wqe_size *
+ (wqe_number - wqe_per_buf *
+ (buf->buf_number - 1))));
+ }
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+
+			/* In link mode the last link wqe is invalid; in ring
+			 * mode it is valid, points back to the first page,
+			 * and lp is set
+ */
+ if (link_mode == CQM_QUEUE_LINK_MODE) {
+ wqe->o = CQM_LINK_WQE_OWNER_INVALID;
+ } else {
+ /* The lp field of the last link_wqe is filled
+				 * with 1, indicating that the o-bit is flipped
+ * from here
+ */
+ wqe->lp = CQM_LINK_WQE_LP_VALID;
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[0].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ }
+ }
+
+ if (wqe_size == 128) {
+			/* Both 64B halves of a 128B wqe carry an o-bit that
+			 * must be set: for TOE it sits in the third 16B of the
+			 * second 64B, for IFOE in the fourth 16B
+ */
+ linkwqe = (struct cqm_linkwqe_128b_s *)wqe;
+ linkwqe->second_64b.third_16B.bs.toe_o =
+ CQM_LINK_WQE_OWNER_VALID;
+ linkwqe->second_64b.forth_16B.bs.ifoe_o =
+ CQM_LINK_WQE_OWNER_VALID;
+
+ /* big endian conversion */
+ cqm_swab32((u8 *)wqe,
+ sizeof(struct cqm_linkwqe_128b_s) >> 2);
+ } else {
+ /* big endian conversion */
+ cqm_swab32((u8 *)wqe,
+ sizeof(struct cqm_linkwqe_s) >> 2);
+ }
+ }
+}
+
+static s32 cqm_nonrdma_queue_ctx_create_srq(struct cqm_object_s *object)
+{
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ object->cqm_handle;
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_queue_s *common = container_of(object,
+ struct cqm_queue_s, object);
+ struct cqm_nonrdma_qinfo_s *qinfo = container_of(
+ common,
+ struct cqm_nonrdma_qinfo_s,
+ common);
+ s32 shift = 0;
+
+ shift = cqm_shift(qinfo->q_ctx_size);
+ common->q_ctx_vaddr = (u8 *)cqm_kmalloc_align(
+ qinfo->q_ctx_size,
+ GFP_KERNEL | __GFP_ZERO,
+ (u16)shift);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_ctx_vaddr));
+ return CQM_FAIL;
+ }
+
+ common->q_ctx_paddr =
+ pci_map_single(cqm_handle->dev, common->q_ctx_vaddr,
+ qinfo->q_ctx_size, PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, common->q_ctx_paddr)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_ctx_vaddr));
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_nonrdma_queue_ctx_create_scq(struct cqm_object_s *object)
+{
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ object->cqm_handle;
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_queue_s *common = container_of(object,
+ struct cqm_queue_s,
+ object);
+ struct cqm_nonrdma_qinfo_s *qinfo = container_of(
+ common,
+ struct cqm_nonrdma_qinfo_s,
+ common);
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ struct cqm_bitmap_s *bitmap = NULL;
+ struct cqm_object_table_s *object_table = NULL;
+
+ /* Find the corresponding cla table */
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_table_get));
+ return CQM_FAIL;
+ }
+
+ /* Allocate index for bitmap */
+ bitmap = &cla_table->bitmap;
+ qinfo->index_count = (ALIGN(qinfo->q_ctx_size, cla_table->obj_size)) /
+ cla_table->obj_size;
+ qinfo->common.index = cqm_bitmap_alloc(bitmap, 1 << (cla_table->z + 1),
+ qinfo->index_count, cqm_handle->func_capability.xid_alloc_mode);
+ if (qinfo->common.index >= bitmap->max_num) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+
+ /* Find the trunk page from BAT/CLA and allocate buffer */
+ common->q_ctx_vaddr = cqm_cla_get(cqm_handle, cla_table,
+ qinfo->common.index,
+ qinfo->index_count,
+ &common->q_ctx_paddr);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_get));
+ cqm_bitmap_free(bitmap, qinfo->common.index,
+ qinfo->index_count);
+ return CQM_FAIL;
+ }
+
+ /* Associate index with object */
+ object_table = &cla_table->obj_table;
+
+ if (cqm_object_table_insert(
+ cqm_handle, object_table,
+ qinfo->common.index, object) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ cqm_cla_put(cqm_handle, cla_table, qinfo->common.index,
+ qinfo->index_count);
+ cqm_bitmap_free(bitmap, qinfo->common.index,
+ qinfo->index_count);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_nonrdma_queue_ctx_create(struct cqm_object_s *object)
+{
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ)
+ return cqm_nonrdma_queue_ctx_create_srq(object);
+ else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ)
+ return cqm_nonrdma_queue_ctx_create_scq(object);
+
+ return CQM_SUCCESS;
+}
+
+#define CQM_NORDMA_CHECK_WEQ_NUMBER(number) \
+ (((number) < CQM_CQ_DEPTH_MIN) || ((number) > CQM_CQ_DEPTH_MAX))
+
+/**
+ * cqm_nonrdma_queue_create - Create queue for non RDMA service
+ * @object: cqm object
+ * Return: 0 - success, negative - failure
+ */
+s32 cqm_nonrdma_queue_create(struct cqm_object_s *object)
+{
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ object->cqm_handle;
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_service_s *service = &cqm_handle->service;
+ struct cqm_queue_s *common = container_of(object,
+ struct cqm_queue_s,
+ object);
+ struct cqm_nonrdma_qinfo_s *qinfo = container_of(
+ common,
+ struct cqm_nonrdma_qinfo_s,
+ common);
+ struct cqm_buf_s *q_room_buf = &common->q_room_buf_1;
+ u32 wqe_number = qinfo->common.object.object_size;
+ u32 wqe_size = qinfo->wqe_size;
+ u32 order = service->buf_order;
+ u32 buf_number = 0;
+ u32 buf_size = 0;
+ bool tail = false; /* Whether linkwqe is at the end of the page */
+
+ /* When creating CQ/SCQ queue, the page size is 4k, linkwqe must be at
+ * the end of the page
+ */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* Depth must be 2^n alignment, depth range is 256~32K */
+ if (CQM_NORDMA_CHECK_WEQ_NUMBER(wqe_number)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_number));
+ return CQM_FAIL;
+ }
+ if (cqm_check_align(wqe_number) == false) {
+ cqm_err(handle->dev_hdl, "Nonrdma queue alloc: wqe_number is not align on 2^n\n");
+ return CQM_FAIL;
+ }
+
+ order = CQM_4K_PAGE_ORDER; /* wqe page is 4k */
+ tail = true; /* linkwqe must be at the end of the page */
+ buf_size = CQM_4K_PAGE_SIZE;
+ } else {
+ buf_size = PAGE_SIZE << order;
+ }
+
+	/* Calculate how many buffers are required; the -1 deducts the link
+	 * wqe that occupies one slot in each buffer
+ */
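+	/* e.g. assuming a 4 KB buffer and 128 B wqes: 4096 / 128 - 1 = 31
+	 * normal wqes fit per buffer, the last slot holds the link wqe.
+	 */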
+ qinfo->wqe_per_buf = (buf_size / wqe_size) - 1;
+ /* The depth from service includes the number of linkwqe */
+ buf_number = ALIGN((wqe_size * wqe_number), buf_size) / buf_size;
+ /* Allocate cqm buffer */
+ q_room_buf->buf_number = buf_number;
+ q_room_buf->buf_size = buf_size;
+ q_room_buf->page_number = (buf_number << order);
+ if (cqm_buf_alloc(cqm_handle, q_room_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+ /* Fill link wqe, (wqe_number - buf_number) is the number of wqe without
+ * linkwqe
+ */
+ cqm_linkwqe_fill(q_room_buf, qinfo->wqe_per_buf, wqe_size,
+ wqe_number - buf_number, tail,
+ common->queue_link_mode);
+
+ /* Create queue header */
+ qinfo->common.q_header_vaddr =
+ (struct cqm_queue_header_s *)cqm_kmalloc_align(
+ sizeof(struct cqm_queue_header_s),
+ GFP_KERNEL | __GFP_ZERO, CQM_QHEAD_ALIGN_ORDER);
+ if (!qinfo->common.q_header_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_header_vaddr));
+ goto err1;
+ }
+
+ common->q_header_paddr =
+ pci_map_single(cqm_handle->dev,
+ qinfo->common.q_header_vaddr,
+ sizeof(struct cqm_queue_header_s),
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, common->q_header_paddr)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
+ goto err2;
+ }
+
+ /* Create queue ctx */
+ if (cqm_nonrdma_queue_ctx_create(object) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_nonrdma_queue_ctx_create));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct cqm_queue_header_s),
+ PCI_DMA_BIDIRECTIONAL);
+err2:
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+err1:
+ cqm_buf_free(q_room_buf, cqm_handle->dev);
+ return CQM_FAIL;
+}
+
+static void cqm_nonrdma_queue_free_scq_srq(struct cqm_object_s *object,
+ struct cqm_cla_table_s *cla_table)
+{
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ object->cqm_handle;
+ struct cqm_queue_s *common = container_of(object,
+ struct cqm_queue_s,
+ object);
+ struct cqm_nonrdma_qinfo_s *qinfo =
+ container_of(common, struct cqm_nonrdma_qinfo_s, common);
+ struct cqm_buf_s *q_room_buf = &common->q_room_buf_1;
+ u32 index = qinfo->common.index;
+ u32 count = qinfo->index_count;
+ struct cqm_bitmap_s *bitmap = NULL;
+
+ /* If it is in TOE SRQ mode, delete the RQ */
+ if (common->queue_link_mode == CQM_QUEUE_TOE_SRQ_LINK_MODE) {
+ cqm_dbg("Nonrdma queue del: delete srq used rq\n");
+ cqm_srq_used_rq_delete(&common->object);
+ } else {
+ /* Free it if exists q room */
+ cqm_buf_free(q_room_buf, cqm_handle->dev);
+ }
+ /* Free SRQ or SCQ ctx */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+ /* ctx of nonrdma's SRQ is applied independently */
+ if (common->q_ctx_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_ctx_paddr,
+ qinfo->q_ctx_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ }
+ } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* The ctx of SCQ of nonrdma is managed by BAT/CLA */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* Release index into bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+ }
+}
+
+/**
+ * cqm_nonrdma_queue_delete - Free queue for non RDMA service
+ * @object: cqm object
+ */
+void cqm_nonrdma_queue_delete(struct cqm_object_s *object)
+{
+ struct cqm_handle_s *cqm_handle = (struct cqm_handle_s *)
+ object->cqm_handle;
+ struct hifc_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_queue_s *common = container_of(object,
+ struct cqm_queue_s, object);
+ struct cqm_nonrdma_qinfo_s *qinfo = container_of(
+ common,
+ struct cqm_nonrdma_qinfo_s,
+ common);
+ struct cqm_bat_table_s *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table_s *cla_table = NULL;
+ struct cqm_object_table_s *object_table = NULL;
+ u32 index = qinfo->common.index;
+
+ atomic_inc(&cqm_handle->ex_handle->hw_stats.cqm_stats.cqm_nonrdma_queue_delete_cnt);
+
+ /* SCQ has independent SCQN association */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ CQM_PTR_CHECK_NO_RET(
+ cla_table,
+ CQM_FUNCTION_FAIL(cqm_cla_table_get),
+ return);
+
+ /* index and object disassociate */
+ object_table = &cla_table->obj_table;
+
+ cqm_object_table_remove(cqm_handle, object_table,
+ index, object);
+ }
+
+ /* Wait for the completion and ensure that all references to the QPC
+ * are completed
+ */
+ if (atomic_dec_and_test(&object->refcount))
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Nonrdma queue del: object is referred by others, has to wait for completion\n");
+ wait_for_completion(&object->free);
+
+ /* Free it if exists q header */
+ if (qinfo->common.q_header_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct cqm_queue_header_s),
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+ }
+ cqm_nonrdma_queue_free_scq_srq(object, cla_table);
+}
+
+#define obj_extern_if_section
+
+/**
+ * cqm_object_qpc_mpt_create - Create QPC and MPT
+ * @ex_handle: hw dev handle
+ * @object_type: must be mpt or ctx
+ * @object_size: the unit is byte
+ * @object_priv: the private structure for service, can be NULL
+ * @index: the reserved qpn to use; pass CQM_INDEX_INVALID to have an index
+ * allocated automatically
+ * Return: service ctx
+ */
+struct cqm_qpc_mpt_s *cqm_object_qpc_mpt_create(
+ void *ex_handle,
+ enum cqm_object_type_e object_type,
+ u32 object_size, void *object_priv,
+ u32 index)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ struct cqm_qpc_mpt_info_s *qpc_mpt_info = NULL;
+ s32 ret = CQM_FAIL;
+
+ CQM_PTR_CHECK_RET(ex_handle, return NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_create_cnt);
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, return NULL, CQM_PTR_NULL(cqm_handle));
+
+ /* If service does not register, returns NULL */
+ if (cqm_handle->service.has_register == false) {
+		cqm_err(handle->dev_hdl, "service is not registered");
+ return NULL;
+ }
+
+ if (object_type != CQM_OBJECT_SERVICE_CTX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ qpc_mpt_info = (struct cqm_qpc_mpt_info_s *)
+ kmalloc(sizeof(struct cqm_qpc_mpt_info_s),
+ GFP_ATOMIC | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(qpc_mpt_info, return NULL,
+ CQM_ALLOC_FAIL(qpc_mpt_info));
+
+ qpc_mpt_info->common.object.object_type = object_type;
+ qpc_mpt_info->common.object.object_size = object_size;
+ atomic_set(&qpc_mpt_info->common.object.refcount, 1);
+ init_completion(&qpc_mpt_info->common.object.free);
+ qpc_mpt_info->common.object.cqm_handle = cqm_handle;
+ qpc_mpt_info->common.xid = index;
+ qpc_mpt_info->common.priv = object_priv;
+
+ ret = cqm_qpc_mpt_create(&qpc_mpt_info->common.object);
+ if (ret == CQM_SUCCESS)
+ return &qpc_mpt_info->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_qpc_mpt_create));
+ kfree(qpc_mpt_info);
+ return NULL;
+}
+
+/**
+ * cqm_object_fc_srq_create - Create an RQ for FC. The number of valid wqes in
+ * the queue must meet the requested wqe number. Because a link wqe can only be
+ * placed at the end of a page, the actual number of effective wqes exceeds the
+ * request, and the service is informed of the actual number on creation.
+ * @ex_handle: hw dev handle
+ * @object_type: must be CQM_OBJECT_NONRDMA_SRQ
+ * @wqe_number: valid wqe number
+ * @wqe_size: wqe size
+ * @object_priv: the private structure for service
+ * Return: srq structure
+ */
+struct cqm_queue_s *cqm_object_fc_srq_create(
+ void *ex_handle,
+ enum cqm_object_type_e object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct cqm_nonrdma_qinfo_s *nonrdma_qinfo = NULL;
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ struct cqm_service_s *service = NULL;
+ u32 valid_wqe_per_buffer = 0;
+ u32 wqe_sum = 0; /* includes linkwqe, normal wqe */
+ u32 buf_size = 0;
+ u32 buf_num = 0;
+ s32 ret = CQM_FAIL;
+
+ CQM_PTR_CHECK_RET(ex_handle, return NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_fc_srq_create_cnt);
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, return NULL, CQM_PTR_NULL(cqm_handle));
+
+ /* service_type must be FC */
+ if (cqm_handle->service.has_register == false) {
+		cqm_err(handle->dev_hdl, "service is not registered\n");
+ return NULL;
+ }
+
+ /* wqe_size can not exceed PAGE_SIZE and should not be 0, and must be
+ * 2^n aligned.
+ */
+ if ((wqe_size >= PAGE_SIZE) || (cqm_check_align(wqe_size) == false)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return NULL;
+ }
+
+	/* FC's RQ is an SRQ (unlike TOE's SRQ: for FC all packets received on
+	 * a stream are put on the same rq, while TOE's srq works like a
+	 * resource pool of rqs)
+ */
+ if (object_type != CQM_OBJECT_NONRDMA_SRQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ service = &cqm_handle->service;
+ buf_size = PAGE_SIZE << (service->buf_order);
+ valid_wqe_per_buffer = buf_size / wqe_size - 1; /* Minus 1 link wqe */
+ buf_num = wqe_number / valid_wqe_per_buffer;
+ if (wqe_number % valid_wqe_per_buffer != 0)
+ buf_num++;
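+	/* e.g. assuming buf_order = 0 (4 KB buffers) and 64 B wqes: 63 valid
+	 * wqes fit per buffer, so a request for 100 wqes gives buf_num = 2 and
+	 * wqe_sum = 2 * 64 = 128, including the two link wqes.
+	 */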
+
+ /* Calculate the total number of all wqe */
+ wqe_sum = buf_num * (valid_wqe_per_buffer + 1);
+ nonrdma_qinfo = (struct cqm_nonrdma_qinfo_s *)
+ kmalloc(sizeof(struct cqm_nonrdma_qinfo_s),
+ GFP_KERNEL | __GFP_ZERO);
+
+ CQM_PTR_CHECK_RET(nonrdma_qinfo, return NULL,
+ CQM_ALLOC_FAIL(nonrdma_qinfo));
+
+ /* Initialize object members */
+ nonrdma_qinfo->common.object.object_type = object_type;
+ /* The total number of all wqe */
+ nonrdma_qinfo->common.object.object_size = wqe_sum;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue, default is the
+ * hardware doorbell
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ /* Initialize external public members */
+ nonrdma_qinfo->common.priv = object_priv;
+ nonrdma_qinfo->common.valid_wqe_num = wqe_sum - buf_num;
+
+ /* Initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+ /* The SRQ for FC, which needs to create ctx */
+ nonrdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_nonrdma_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+
+static int cqm_object_nonrdma_queue_create_check(
+ void *ex_handle,
+ enum cqm_object_type_e object_type,
+ u32 wqe_size)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+
+ CQM_PTR_CHECK_RET(ex_handle, return CQM_FAIL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_create_cnt);
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, return CQM_FAIL,
+ CQM_PTR_NULL(cqm_handle));
+
+ /* If service does not register, returns NULL */
+ if (cqm_handle->service.has_register == false) {
+		cqm_err(handle->dev_hdl, "service is not registered\n");
+ return CQM_FAIL;
+ }
+ /* Wqe size cannot exceed PAGE_SIZE, cannot be 0, and must be 2^n
+ * aligned. cqm_check_align check excludes 0, 1, non 2^n alignment
+ */
+ if ((wqe_size >= PAGE_SIZE) || (cqm_check_align(wqe_size) == false)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return CQM_FAIL;
+ }
+
+ /* Supported Nonrdma queue: RQ, SQ, SRQ, CQ, SCQ */
+ if ((object_type < CQM_OBJECT_NONRDMA_EMBEDDED_RQ) ||
+ (object_type > CQM_OBJECT_NONRDMA_SCQ)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_object_nonrdma_queue_create - Create queues for non-RDMA services
+ * @ex_handle: hw dev handle
+ * @object_type: can create embedded RQ/SQ/CQ and SRQ/SCQ
+ * @wqe_number: wqe number, including link wqe
+ * @wqe_size: wqe size, must be 2^n
+ * @object_priv: the private structure for service, can be NULL
+ * Return: queue structure
+ */
+struct cqm_queue_s *cqm_object_nonrdma_queue_create(
+ void *ex_handle,
+ enum cqm_object_type_e object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct hifc_hwdev *handle = (struct hifc_hwdev *)ex_handle;
+ struct cqm_handle_s *cqm_handle = NULL;
+ struct cqm_nonrdma_qinfo_s *nonrdma_qinfo = NULL;
+ struct cqm_service_s *service = NULL;
+ s32 ret = CQM_FAIL;
+
+ cqm_handle = (struct cqm_handle_s *)(handle->cqm_hdl);
+ if (cqm_object_nonrdma_queue_create_check(ex_handle,
+ object_type,
+ wqe_size) == CQM_FAIL) {
+ return NULL;
+ }
+
+ nonrdma_qinfo = (struct cqm_nonrdma_qinfo_s *)
+ kmalloc(sizeof(struct cqm_nonrdma_qinfo_s),
+ GFP_KERNEL | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(nonrdma_qinfo, return NULL,
+ CQM_ALLOC_FAIL(nonrdma_qinfo));
+
+ /* Initialize object members */
+ nonrdma_qinfo->common.object.object_type = object_type;
+ nonrdma_qinfo->common.object.object_size = wqe_number;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue, default is the
+ * hardware doorbell
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ /* Initialize external public members */
+ nonrdma_qinfo->common.priv = object_priv;
+
+ /* Initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+ service = &cqm_handle->service;
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_SCQ:
+ nonrdma_qinfo->q_ctx_size =
+ service->service_template.scq_ctx_size;
+ break;
+ case CQM_OBJECT_NONRDMA_SRQ:
+ /* The creation for SRQ uses a dedicated interface */
+ nonrdma_qinfo->q_ctx_size =
+ service->service_template.srq_ctx_size;
+ break;
+ default:
+ break;
+ }
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_nonrdma_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+
+s32 cqm_qpc_mpt_delete_ret(struct cqm_object_s *object)
+{
+ u32 object_type = 0;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ cqm_qpc_mpt_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+s32 cqm_nonrdma_queue_delete_ret(struct cqm_object_s *object)
+{
+ u32 object_type = 0;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_SCQ:
+ case CQM_OBJECT_NONRDMA_SRQ:
+ cqm_nonrdma_queue_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+/**
+ * cqm_object_delete - Delete a created object; the function sleeps until all
+ * operations on the object have completed before returning
+ * @object: cqm object
+ */
+void cqm_object_delete(struct cqm_object_s *object)
+{
+ struct cqm_handle_s *cqm_handle = NULL;
+ struct hifc_hwdev *handle = NULL;
+
+ CQM_PTR_CHECK_NO_RET(object, CQM_PTR_NULL(object), return);
+ if (!object->cqm_handle) {
+ pr_err("[CQM]Obj del: cqm_handle is null, refcount %d\n",
+ (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+ cqm_handle = (struct cqm_handle_s *)object->cqm_handle;
+
+ if (!cqm_handle->ex_handle) {
+ pr_err("[CQM]Obj del: ex_handle is null, refcount %d\n",
+ (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+ handle = cqm_handle->ex_handle;
+
+ if (cqm_qpc_mpt_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ if (cqm_nonrdma_queue_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ kfree(object);
+}
diff --git a/drivers/scsi/huawei/hifc/hifc_cqm_object.h b/drivers/scsi/huawei/hifc/hifc_cqm_object.h
new file mode 100644
index 000000000000..308166ddd534
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_cqm_object.h
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef __CQM_OBJECT_H__
+#define __CQM_OBJECT_H__
+
+#define CLA_TABLE_PAGE_ORDER (0)
+#define CQM_4K_PAGE_ORDER (0)
+
+#define CQM_CQ_DEPTH_MAX (32768)
+#define CQM_CQ_DEPTH_MIN (256)
+#define CQM_BAT_SIZE_FT_PF (192)
+
+#define CQM_WQE_WF_LINK 1
+#define CQM_WQE_WF_NORMAL 0
+#define CQM_QUEUE_LINK_MODE 0
+#define CQM_QUEUE_RING_MODE 1
+#define CQM_4K_PAGE_SIZE 4096
+
+#define CQM_SUCCESS 0
+#define CQM_FAIL -1
+#define CQM_QUEUE_TOE_SRQ_LINK_MODE 2
+#define CQM_CMD_TIMEOUT 10000 /*ms*/
+
+#define CQM_INDEX_INVALID ~(0U)
+#define CQM_INDEX_RESERVED (0xfffff) /* reserved by cqm alloc */
+
+enum cqm_bat_entry_type_e {
+ CQM_BAT_ENTRY_T_CFG = 0,
+ CQM_BAT_ENTRY_T_HASH,
+ CQM_BAT_ENTRY_T_QPC,
+ CQM_BAT_ENTRY_T_SCQC,
+ CQM_BAT_ENTRY_T_SRQC,
+ CQM_BAT_ENTRY_T_MPT,
+ CQM_BAT_ENTRY_T_GID,
+ CQM_BAT_ENTRY_T_LUN,
+ CQM_BAT_ENTRY_T_TASKMAP,
+ CQM_BAT_ENTRY_T_L3I,
+ CQM_BAT_ENTRY_T_CHILDC,
+ CQM_BAT_ENTRY_T_TIMER,
+ CQM_BAT_ENTRY_T_XID2CID,
+ CQM_BAT_ENTRY_T_REORDER,
+
+ CQM_BAT_ENTRY_T_INVALID = 0xff,
+};
+
+enum cqm_cmd_type_e {
+ CQM_CMD_T_INVALID = 0,
+ CQM_CMD_T_BAT_UPDATE,
+ CQM_CMD_T_CLA_UPDATE,
+ CQM_CMD_T_BLOOMFILTER_SET,
+ CQM_CMD_T_BLOOMFILTER_CLEAR,
+ CQM_CMD_T_COMPACT_SRQ_UPDATE,
+ CQM_CMD_T_CLA_CACHE_INVALID,
+ CQM_CMD_T_BLOOMFILTER_INIT,
+ QM_CMD_T_MAX
+};
+
+/*linkwqe*/
+#define CQM_LINK_WQE_CTRLSL_VALUE 2
+#define CQM_LINK_WQE_LP_VALID 1
+#define CQM_LINK_WQE_LP_INVALID 0
+#define CQM_LINK_WQE_OWNER_VALID 1
+#define CQM_LINK_WQE_OWNER_INVALID 0
+
+/*CLA update mode*/
+#define CQM_CLA_RECORD_NEW_GPA 0
+#define CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID 1
+#define CQM_CLA_DEL_GPA_WITH_CACHE_INVALID 2
+
+#define CQM_CLA_LVL_0 0
+#define CQM_CLA_LVL_1 1
+#define CQM_CLA_LVL_2 2
+
+#define CQM_MAX_INDEX_BIT 19
+#define CQM_CHIP_CACHELINE 256
+enum cqm_cmd_ack_type_e {
+ CQM_CMD_ACK_TYPE_CMDQ = 0, /* ack: write back to cmdq */
+	CQM_CMD_ACK_TYPE_SHARE_CQN = 1, /* ack: report scq by root ctx */
+	CQM_CMD_ACK_TYPE_APP_CQN = 2 /* ack: report scq by parent ctx */
+};
+
+struct cqm_bat_entry_cfg_s {
+ u32 cur_conn_num_h_4 :4;
+ u32 rsv1 :4;
+ u32 max_conn_num :20;
+ u32 rsv2 :4;
+
+ u32 max_conn_cache :10;
+ u32 rsv3 :6;
+ u32 cur_conn_num_l_16 :16;
+
+ u32 bloom_filter_addr :16;
+ u32 cur_conn_cache :10;
+ u32 rsv4 :6;
+
+ u32 bucket_num :16;
+ u32 bloom_filter_len :16;
+};
+
+#define CQM_BAT_NO_BYPASS_CACHE 0
+#define CQM_BAT_ENTRY_SIZE_256 0
+#define CQM_BAT_ENTRY_SIZE_512 1
+#define CQM_BAT_ENTRY_SIZE_1024 2
+
+struct cqm_bat_entry_standerd_s {
+ u32 entry_size :2;
+ u32 rsv1 :6;
+ u32 max_number :20;
+ u32 rsv2 :4;
+
+ u32 cla_gpa_h :32;
+
+ u32 cla_gpa_l :32;
+
+ u32 rsv3 :8;
+ u32 z :5;
+ u32 y :5;
+ u32 x :5;
+ u32 rsv24 :1;
+ u32 bypass :1;
+ u32 cla_level :2;
+ u32 rsv5 :5;
+};
+
+struct cqm_bat_entry_taskmap_s {
+ u32 gpa0_h;
+ u32 gpa0_l;
+
+ u32 gpa1_h;
+ u32 gpa1_l;
+
+ u32 gpa2_h;
+ u32 gpa2_l;
+
+ u32 gpa3_h;
+ u32 gpa3_l;
+};
+
+struct cqm_cla_cache_invalid_cmd_s {
+ u32 gpa_h;
+ u32 gpa_l;
+ u32 cache_size;/* CLA cache size=4096B */
+};
+
+struct cqm_cla_update_cmd_s {
+ /* need to update gpa addr */
+ u32 gpa_h;
+ u32 gpa_l;
+
+ /* update value */
+ u32 value_h;
+ u32 value_l;
+};
+
+struct cqm_bat_update_cmd_s {
+#define CQM_BAT_MAX_SIZE 256
+	u32 offset; /* byte offset, 16-byte aligned */
+ u32 byte_len; /* max size: 256byte */
+ u8 data[CQM_BAT_MAX_SIZE];
+};
+
+struct cqm_handle_s;
+
+struct cqm_linkwqe_s {
+ u32 rsv1 :14;
+ u32 wf :1;
+ u32 rsv2 :14;
+ u32 ctrlsl :2;
+ u32 o :1;
+
+ u32 rsv3 :31;
+ u32 lp :1;
+
+ u32 next_page_gpa_h;
+ u32 next_page_gpa_l;
+
+ u32 next_buffer_addr_h;
+ u32 next_buffer_addr_l;
+};
+
+struct cqm_srq_linkwqe_s {
+ struct cqm_linkwqe_s linkwqe;
+ /*add by wss for srq*/
+ u32 current_buffer_gpa_h;
+ u32 current_buffer_gpa_l;
+ u32 current_buffer_addr_h;
+ u32 current_buffer_addr_l;
+
+ u32 fast_link_page_addr_h;
+ u32 fast_link_page_addr_l;
+
+ u32 fixed_next_buffer_addr_h;
+ u32 fixed_next_buffer_addr_l;
+};
+
+union cqm_linkwqe_first_64b_s {
+ struct cqm_linkwqe_s basic_linkwqe;
+ u32 value[16];
+};
+
+struct cqm_linkwqe_second_64b_s {
+ u32 rsvd0[4];
+ u32 rsvd1[4];
+ union {
+ struct {
+ u32 rsvd0[3];
+ u32 rsvd1 :29;
+ u32 toe_o :1;
+ u32 resvd2 :2;
+ } bs;
+ u32 value[4];
+ } third_16B;
+
+ union {
+ struct {
+ u32 rsvd0[2];
+ u32 rsvd1 :31;
+ u32 ifoe_o :1;
+ u32 rsvd2;
+ } bs;
+ u32 value[4];
+ } forth_16B;
+
+};
+
+struct cqm_linkwqe_128b_s {
+ union cqm_linkwqe_first_64b_s first_64b;
+ struct cqm_linkwqe_second_64b_s second_64b;
+};
+
+s32 cqm_bat_init(struct cqm_handle_s *cqm_handle);
+void cqm_bat_uninit(struct cqm_handle_s *cqm_handle);
+s32 cqm_cla_init(struct cqm_handle_s *cqm_handle);
+void cqm_cla_uninit(struct cqm_handle_s *cqm_handle);
+s32 cqm_bitmap_init(struct cqm_handle_s *cqm_handle);
+void cqm_bitmap_uninit(struct cqm_handle_s *cqm_handle);
+s32 cqm_object_table_init(struct cqm_handle_s *cqm_handle);
+void cqm_object_table_uninit(struct cqm_handle_s *cqm_handle);
+
+#endif /* __CQM_OBJECT_H__ */
diff --git a/drivers/scsi/huawei/hifc/hifc_eqs.c b/drivers/scsi/huawei/hifc/hifc_eqs.c
new file mode 100644
index 000000000000..803866e1fbf9
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_eqs.c
@@ -0,0 +1,1347 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwif.h"
+#include "hifc_api_cmd.h"
+#include "hifc_mgmt.h"
+#include "hifc_hwdev.h"
+#include "hifc_eqs.h"
+
+#define HIFC_EQS_WQ_NAME "hifc_eqs"
+
+#define AEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define AEQ_CTRL_0_FUNC_BUSY_SHIFT 10
+#define AEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define AEQ_CTRL_0_PCI_INTF_IDX_SHIFT 20
+#define AEQ_CTRL_0_QPS_NUM_SHIFT 22
+#define AEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+#define AEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define AEQ_CTRL_0_FUNC_BUSY_MASK 0x1U
+#define AEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define AEQ_CTRL_0_PCI_INTF_IDX_MASK 0x3U
+#define AEQ_CTRL_0_QPS_NUM_MASK 0xFFU
+#define AEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+#define AEQ_CTRL_0_GET(val, member) \
+ (((val) >> AEQ_CTRL_0_##member##_SHIFT) & \
+ AEQ_CTRL_0_##member##_MASK)
+
+#define AEQ_CTRL_0_SET(val, member) \
+ (((val) & AEQ_CTRL_0_##member##_MASK) << \
+ AEQ_CTRL_0_##member##_SHIFT)
+
+#define AEQ_CTRL_0_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_0_##member##_MASK \
+ << AEQ_CTRL_0_##member##_SHIFT)))
+
+#define AEQ_CTRL_1_LEN_SHIFT 0
+#define AEQ_CTRL_1_FUNC_OWN_SHIFT 21
+#define AEQ_CTRL_1_ELEM_SIZE_SHIFT 24
+#define AEQ_CTRL_1_PAGE_SIZE_SHIFT 28
+
+#define AEQ_CTRL_1_LEN_MASK 0x1FFFFFU
+#define AEQ_CTRL_1_FUNC_OWN_MASK 0x1U
+#define AEQ_CTRL_1_ELEM_SIZE_MASK 0x3U
+#define AEQ_CTRL_1_PAGE_SIZE_MASK 0xFU
+
+#define AEQ_CTRL_1_GET(val, member) \
+ (((val) >> AEQ_CTRL_1_##member##_SHIFT) & \
+ AEQ_CTRL_1_##member##_MASK)
+
+#define AEQ_CTRL_1_SET(val, member) \
+ (((val) & AEQ_CTRL_1_##member##_MASK) << \
+ AEQ_CTRL_1_##member##_SHIFT)
+
+#define AEQ_CTRL_1_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_1_##member##_MASK \
+ << AEQ_CTRL_1_##member##_SHIFT)))
+
+#define HIFC_EQ_PROD_IDX_MASK 0xFFFFF
+#define HIFC_TASK_PROCESS_EQE_LIMIT 1024
+#define HIFC_EQ_UPDATE_CI_STEP 64
+
+static uint g_aeq_len = HIFC_DEFAULT_AEQ_LEN;
+module_param(g_aeq_len, uint, 0444);
+MODULE_PARM_DESC(g_aeq_len,
+ "aeq depth, valid range is " __stringify(HIFC_MIN_AEQ_LEN)
+ " - " __stringify(HIFC_MAX_AEQ_LEN));
+
+static uint g_ceq_len = HIFC_DEFAULT_CEQ_LEN;
+module_param(g_ceq_len, uint, 0444);
+MODULE_PARM_DESC(g_ceq_len,
+ "ceq depth, valid range is " __stringify(HIFC_MIN_CEQ_LEN)
+ " - " __stringify(HIFC_MAX_CEQ_LEN));
+
+static uint g_num_ceqe_in_tasklet = HIFC_TASK_PROCESS_EQE_LIMIT;
+module_param(g_num_ceqe_in_tasklet, uint, 0444);
+MODULE_PARM_DESC(g_num_ceqe_in_tasklet,
+ "The max number of ceqe can be processed in tasklet, default = 1024");
+
+#define CEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define CEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define CEQ_CTRL_0_LIMIT_KICK_SHIFT 20
+#define CEQ_CTRL_0_PCI_INTF_IDX_SHIFT 24
+#define CEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+#define CEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define CEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define CEQ_CTRL_0_LIMIT_KICK_MASK 0xFU
+#define CEQ_CTRL_0_PCI_INTF_IDX_MASK 0x3U
+#define CEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+#define CEQ_CTRL_0_SET(val, member) \
+ (((val) & CEQ_CTRL_0_##member##_MASK) << \
+ CEQ_CTRL_0_##member##_SHIFT)
+
+#define CEQ_CTRL_1_LEN_SHIFT 0
+#define CEQ_CTRL_1_PAGE_SIZE_SHIFT 28
+#define CEQ_CTRL_1_LEN_MASK 0x1FFFFFU
+#define CEQ_CTRL_1_PAGE_SIZE_MASK 0xFU
+
+#define CEQ_CTRL_1_SET(val, member) \
+ (((val) & CEQ_CTRL_1_##member##_MASK) << \
+ CEQ_CTRL_1_##member##_SHIFT)
+
+#define EQ_ELEM_DESC_TYPE_SHIFT 0
+#define EQ_ELEM_DESC_SRC_SHIFT 7
+#define EQ_ELEM_DESC_SIZE_SHIFT 8
+#define EQ_ELEM_DESC_WRAPPED_SHIFT 31
+#define EQ_ELEM_DESC_TYPE_MASK 0x7FU
+#define EQ_ELEM_DESC_SRC_MASK 0x1U
+#define EQ_ELEM_DESC_SIZE_MASK 0xFFU
+#define EQ_ELEM_DESC_WRAPPED_MASK 0x1U
+
+#define EQ_ELEM_DESC_GET(val, member) \
+ (((val) >> EQ_ELEM_DESC_##member##_SHIFT) & \
+ EQ_ELEM_DESC_##member##_MASK)
+
+#define EQ_CONS_IDX_CONS_IDX_SHIFT 0
+#define EQ_CONS_IDX_XOR_CHKSUM_SHIFT 24
+#define EQ_CONS_IDX_INT_ARMED_SHIFT 31
+#define EQ_CONS_IDX_CONS_IDX_MASK 0x1FFFFFU
+#define EQ_CONS_IDX_XOR_CHKSUM_MASK 0xFU
+#define EQ_CONS_IDX_INT_ARMED_MASK 0x1U
+
+#define EQ_CONS_IDX_SET(val, member) \
+ (((val) & EQ_CONS_IDX_##member##_MASK) << \
+ EQ_CONS_IDX_##member##_SHIFT)
+
+#define EQ_CONS_IDX_CLEAR(val, member) \
+ ((val) & (~(EQ_CONS_IDX_##member##_MASK \
+ << EQ_CONS_IDX_##member##_SHIFT)))
+
+#define EQ_WRAPPED(eq) ((u32)(eq)->wrapped << EQ_VALID_SHIFT)
+
+#define EQ_CONS_IDX(eq) ((eq)->cons_idx | \
+ ((u32)(eq)->wrapped << EQ_WRAPPED_SHIFT))
+
+#define EQ_CONS_IDX_REG_ADDR(eq) (((eq)->type == HIFC_AEQ) ? \
+ HIFC_CSR_AEQ_CONS_IDX_ADDR((eq)->q_id) : \
+ HIFC_CSR_CEQ_CONS_IDX_ADDR((eq)->q_id))
+
+#define EQ_PROD_IDX_REG_ADDR(eq) (((eq)->type == HIFC_AEQ) ? \
+ HIFC_CSR_AEQ_PROD_IDX_ADDR((eq)->q_id) : \
+ HIFC_CSR_CEQ_PROD_IDX_ADDR((eq)->q_id))
+
+#define GET_EQ_NUM_PAGES(eq, size) \
+ ((u16)(ALIGN((u32)((eq)->eq_len * (eq)->elem_size), \
+ (size)) / (size)))
+
+#define GET_EQ_NUM_ELEMS(eq, pg_size) ((pg_size) / (u32)(eq)->elem_size)
+
+#define GET_EQ_ELEMENT(eq, idx) \
+ (((u8 *)(eq)->virt_addr[(idx) / (eq)->num_elem_in_pg]) + \
+ (u32)(((idx) & ((eq)->num_elem_in_pg - 1)) * (eq)->elem_size))
+
+#define GET_AEQ_ELEM(eq, idx) ((struct hifc_aeq_elem *)\
+ GET_EQ_ELEMENT((eq), (idx)))
+
+#define GET_CEQ_ELEM(eq, idx) ((u32 *)GET_EQ_ELEMENT((eq), (idx)))
+
+#define GET_CURR_AEQ_ELEM(eq) GET_AEQ_ELEM((eq), (eq)->cons_idx)
+
+#define GET_CURR_CEQ_ELEM(eq) GET_CEQ_ELEM((eq), (eq)->cons_idx)
+
+#define PAGE_IN_4K(page_size) ((page_size) >> 12)
+#define EQ_SET_HW_PAGE_SIZE_VAL(eq) \
+ ((u32)ilog2(PAGE_IN_4K((eq)->page_size)))
+
+#define ELEMENT_SIZE_IN_32B(eq) (((eq)->elem_size) >> 5)
+#define EQ_SET_HW_ELEM_SIZE_VAL(eq) ((u32)ilog2(ELEMENT_SIZE_IN_32B(eq)))
+
+#define AEQ_DMA_ATTR_DEFAULT 0
+#define CEQ_DMA_ATTR_DEFAULT 0
+#define CEQ_LMT_KICK_DEFAULT 0
+#define EQ_MSIX_RESEND_TIMER_CLEAR 1
+#define EQ_WRAPPED_SHIFT 20
+#define EQ_VALID_SHIFT 31
+#define CEQE_TYPE_SHIFT 23
+#define CEQE_TYPE_MASK 0x7
+
+#define CEQE_TYPE(type) (((type) >> CEQE_TYPE_SHIFT) & \
+ CEQE_TYPE_MASK)
+#define CEQE_DATA_MASK 0x3FFFFFF
+#define CEQE_DATA(data) ((data) & CEQE_DATA_MASK)
+#define EQ_MIN_PAGE_SIZE 0x1000U
+#define aeq_to_aeqs(eq) \
+ container_of((eq) - (eq)->q_id, struct hifc_aeqs, aeq[0])
+
+#define ceq_to_ceqs(eq) \
+ container_of((eq) - (eq)->q_id, struct hifc_ceqs, ceq[0])
+
+/**
+ * aeq_interrupt - aeq interrupt handler
+ * @irq: irq number
+ * @data: the async event queue of the event
+ **/
+static irqreturn_t aeq_interrupt(int irq, void *data)
+{
+ struct hifc_eq *aeq = (struct hifc_eq *)data;
+ struct hifc_hwdev *hwdev = aeq->hwdev;
+
+ struct hifc_aeqs *aeqs = aeq_to_aeqs(aeq);
+ struct workqueue_struct *workq = aeqs->workq;
+ struct hifc_eq_work *aeq_work;
+
+ /* clear resend timer cnt register */
+ hifc_misx_intr_clear_resend_bit(hwdev, aeq->eq_irq.msix_entry_idx,
+ EQ_MSIX_RESEND_TIMER_CLEAR);
+
+ aeq_work = &aeq->aeq_work;
+ aeq_work->data = aeq;
+
+ queue_work(workq, &aeq_work->work);
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * ceq_interrupt - ceq interrupt handler
+ * @irq: irq number
+ * @data: the completion event queue of the event
+ **/
+static irqreturn_t ceq_interrupt(int irq, void *data)
+{
+ struct hifc_eq *ceq = (struct hifc_eq *)data;
+ struct hifc_ceq_tasklet_data *ceq_tasklet_data;
+
+ ceq->hard_intr_jif = jiffies;
+
+ /* clear resend timer counters */
+ hifc_misx_intr_clear_resend_bit(ceq->hwdev, ceq->eq_irq.msix_entry_idx,
+ EQ_MSIX_RESEND_TIMER_CLEAR);
+
+ ceq_tasklet_data = &ceq->ceq_tasklet_data;
+ ceq_tasklet_data->data = data;
+ tasklet_schedule(&ceq->ceq_tasklet);
+
+ return IRQ_HANDLED;
+}
+
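+/**
+ * eq_cons_idx_checksum_set - fold the eight 4-bit nibbles of a consumer
+ * index register value into a 4-bit XOR checksum
+ * @val: register value to checksum (with the checksum field still zero)
+ * Return: the 4-bit checksum to be written into the XOR_CHKSUM field
+ **/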
+static u8 eq_cons_idx_checksum_set(u32 val)
+{
+ u8 checksum = 0;
+ u8 idx;
+
+ for (idx = 0; idx < 32; idx += 4)
+ checksum ^= ((val >> idx) & 0xF);
+
+ return checksum & 0xF;
+}
+
+/**
+ * hifc_aeq_register_hw_cb - register aeq callback for specific event
+ * @hwdev: pointer to hw device
+ * @event: event for the handler
+ * @hwe_cb: callback function
+ * Return: 0 - success, negative - failure
+ **/
+int hifc_aeq_register_hw_cb(void *hwdev, enum hifc_aeq_type event,
+ hifc_aeq_hwe_cb hwe_cb)
+{
+ struct hifc_aeqs *aeqs;
+
+ if (!hwdev || !hwe_cb || event >= HIFC_MAX_AEQ_EVENTS)
+ return -EINVAL;
+
+ aeqs = ((struct hifc_hwdev *)hwdev)->aeqs;
+
+ aeqs->aeq_hwe_cb[event] = hwe_cb;
+
+ set_bit(HIFC_AEQ_HW_CB_REG, &aeqs->aeq_hw_cb_state[event]);
+
+ return 0;
+}
+
+/**
+ * hifc_aeq_unregister_hw_cb - unregister the aeq callback for specific event
+ * @hwdev: pointer to hw device
+ * @event: event for the handler
+ **/
+void hifc_aeq_unregister_hw_cb(void *hwdev, enum hifc_aeq_type event)
+{
+ struct hifc_aeqs *aeqs;
+
+ if (!hwdev || event >= HIFC_MAX_AEQ_EVENTS)
+ return;
+
+ aeqs = ((struct hifc_hwdev *)hwdev)->aeqs;
+
+ clear_bit(HIFC_AEQ_HW_CB_REG, &aeqs->aeq_hw_cb_state[event]);
+
+ while (test_bit(HIFC_AEQ_HW_CB_RUNNING, &aeqs->aeq_hw_cb_state[event]))
+ usleep_range(900, 1000);
+
+ aeqs->aeq_hwe_cb[event] = NULL;
+}
+
+/**
+ * hifc_aeq_register_swe_cb - register aeq callback for sw event
+ * @hwdev: pointer to hw device
+ * @event: soft event for the handler
+ * @aeq_swe_cb: callback function
+ * Return: 0 - success, negative - failure
+ **/
+int hifc_aeq_register_swe_cb(void *hwdev, enum hifc_aeq_sw_type event,
+ hifc_aeq_swe_cb aeq_swe_cb)
+{
+ struct hifc_aeqs *aeqs;
+
+ if (!hwdev || !aeq_swe_cb || event >= HIFC_MAX_AEQ_SW_EVENTS)
+ return -EINVAL;
+
+ aeqs = ((struct hifc_hwdev *)hwdev)->aeqs;
+
+ aeqs->aeq_swe_cb[event] = aeq_swe_cb;
+
+ set_bit(HIFC_AEQ_SW_CB_REG, &aeqs->aeq_sw_cb_state[event]);
+
+ return 0;
+}
+
+/**
+ * hifc_aeq_unregister_swe_cb - unregister the aeq callback for sw event
+ * @hwdev: pointer to hw device
+ * @event: soft event for the handler
+ **/
+void hifc_aeq_unregister_swe_cb(void *hwdev, enum hifc_aeq_sw_type event)
+{
+ struct hifc_aeqs *aeqs;
+
+ if (!hwdev || event >= HIFC_MAX_AEQ_SW_EVENTS)
+ return;
+
+ aeqs = ((struct hifc_hwdev *)hwdev)->aeqs;
+
+ clear_bit(HIFC_AEQ_SW_CB_REG, &aeqs->aeq_sw_cb_state[event]);
+
+ while (test_bit(HIFC_AEQ_SW_CB_RUNNING, &aeqs->aeq_sw_cb_state[event]))
+ usleep_range(900, 1000);
+
+ aeqs->aeq_swe_cb[event] = NULL;
+}
+
+/**
+ * hifc_ceq_register_cb - register ceq callback for specific event
+ * @hwdev: pointer to hw device
+ * @event: event for the handler
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ **/
+int hifc_ceq_register_cb(void *hwdev, enum hifc_ceq_event event,
+ hifc_ceq_event_cb callback)
+{
+ struct hifc_ceqs *ceqs;
+
+ if (!hwdev || event >= HIFC_MAX_CEQ_EVENTS)
+ return -EINVAL;
+
+ ceqs = ((struct hifc_hwdev *)hwdev)->ceqs;
+
+ ceqs->ceq_cb[event] = callback;
+
+ set_bit(HIFC_CEQ_CB_REG, &ceqs->ceq_cb_state[event]);
+
+ return 0;
+}
+
+/**
+ * hifc_ceq_unregister_cb - unregister ceq callback for specific event
+ * @hwdev: pointer to hw device
+ * @event: event for the handler
+ **/
+void hifc_ceq_unregister_cb(void *hwdev, enum hifc_ceq_event event)
+{
+ struct hifc_ceqs *ceqs;
+
+ if (!hwdev || event >= HIFC_MAX_CEQ_EVENTS)
+ return;
+
+ ceqs = ((struct hifc_hwdev *)hwdev)->ceqs;
+
+ clear_bit(HIFC_CEQ_CB_REG, &ceqs->ceq_cb_state[event]);
+
+ while (test_bit(HIFC_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]))
+ usleep_range(900, 1000);
+
+ ceqs->ceq_cb[event] = NULL;
+}
+
+/**
+ * set_eq_cons_idx - write the cons idx to the hw
+ * @eq: The event queue to update the cons idx for
+ * @arm_state: arm state value
+ **/
+static void set_eq_cons_idx(struct hifc_eq *eq, u32 arm_state)
+{
+ u32 eq_wrap_ci, val;
+ u32 addr = EQ_CONS_IDX_REG_ADDR(eq);
+
+ eq_wrap_ci = EQ_CONS_IDX(eq);
+
+	/* other fields are reserved, set to 0 */
+ val = EQ_CONS_IDX_SET(eq_wrap_ci, CONS_IDX) |
+ EQ_CONS_IDX_SET(arm_state, INT_ARMED);
+
+ val |= EQ_CONS_IDX_SET(eq_cons_idx_checksum_set(val), XOR_CHKSUM);
+
+ hifc_hwif_write_reg(eq->hwdev->hwif, addr, val);
+}
+
+/**
+ * ceq_event_handler - handle for the ceq events
+ * @eqs: eqs part of the chip
+ * @ceqe: ceq element of the event
+ **/
+static void ceq_event_handler(struct hifc_ceqs *ceqs, u32 ceqe)
+{
+ struct hifc_hwdev *hwdev = ceqs->hwdev;
+ enum hifc_ceq_event event = CEQE_TYPE(ceqe);
+ u32 ceqe_data = CEQE_DATA(ceqe);
+
+ if (event >= HIFC_MAX_CEQ_EVENTS) {
+		sdk_err(hwdev->dev_hdl, "Ceq unknown event: %d, ceqe data: 0x%x\n",
+ event, ceqe_data);
+ return;
+ }
+
+ set_bit(HIFC_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]);
+
+ if (ceqs->ceq_cb[event] &&
+ test_bit(HIFC_CEQ_CB_REG, &ceqs->ceq_cb_state[event]))
+ ceqs->ceq_cb[event](hwdev, ceqe_data);
+
+ clear_bit(HIFC_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]);
+}
+
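+/**
+ * aeq_swe_handler - dispatch a software (ucode reported) aeq event
+ * @aeqs: eqs part of the chip
+ * @aeqe_pos: aeq element of the event
+ * @event: ucode event type from the element descriptor
+ **/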
+static void aeq_swe_handler(struct hifc_aeqs *aeqs,
+ struct hifc_aeq_elem *aeqe_pos,
+ enum hifc_aeq_type event)
+{
+ enum hifc_ucode_event_type ucode_event;
+ enum hifc_aeq_sw_type sw_event;
+ u64 aeqe_data;
+ u8 lev;
+
+ ucode_event = event;
+ /* SW event uses only the first 8B */
+ sw_event = ucode_event >= HIFC_NIC_FATAL_ERROR_MAX ?
+ HIFC_STATEFULL_EVENT :
+ HIFC_STATELESS_EVENT;
+ aeqe_data = be64_to_cpu((*(u64 *)aeqe_pos->aeqe_data));
+ set_bit(HIFC_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[sw_event]);
+ if (aeqs->aeq_swe_cb[sw_event] &&
+ test_bit(HIFC_AEQ_SW_CB_REG,
+ &aeqs->aeq_sw_cb_state[sw_event])) {
+ lev = aeqs->aeq_swe_cb[sw_event](aeqs->hwdev,
+ ucode_event,
+ aeqe_data);
+ hifc_swe_fault_handler(aeqs->hwdev, lev,
+ ucode_event, aeqe_data);
+ }
+ clear_bit(HIFC_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[sw_event]);
+}
+
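+/**
+ * aeq_hwe_handler - dispatch a hardware aeq event to its registered callback
+ * @aeqs: eqs part of the chip
+ * @aeqe_pos: aeq element of the event
+ * @event: aeq event type from the element descriptor
+ * @aeqe_desc: element descriptor converted to cpu endianness
+ **/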
+static void aeq_hwe_handler(struct hifc_aeqs *aeqs,
+ struct hifc_aeq_elem *aeqe_pos,
+ enum hifc_aeq_type event, u32 aeqe_desc)
+{
+ u8 size;
+
+ if (event < HIFC_MAX_AEQ_EVENTS) {
+ size = EQ_ELEM_DESC_GET(aeqe_desc, SIZE);
+ set_bit(HIFC_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]);
+ if (aeqs->aeq_hwe_cb[event] &&
+ test_bit(HIFC_AEQ_HW_CB_REG,
+ &aeqs->aeq_hw_cb_state[event]))
+ aeqs->aeq_hwe_cb[event](aeqs->hwdev,
+ aeqe_pos->aeqe_data, size);
+ clear_bit(HIFC_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]);
+
+ return;
+ }
+
+ sdk_warn(aeqs->hwdev->dev_hdl, "Unknown aeq hw event %d\n", event);
+}
+
+/**
+ * aeq_irq_handler - handler for the aeq event
+ * @eq: the async event queue of the event
+ * Return: true - the eqe processing limit was hit and events may remain,
+ * false - the queue has been drained
+ **/
+static bool aeq_irq_handler(struct hifc_eq *eq)
+{
+ struct hifc_aeqs *aeqs = aeq_to_aeqs(eq);
+ struct hifc_aeq_elem *aeqe_pos;
+ enum hifc_aeq_type event;
+ u32 aeqe_desc;
+ u32 i, eqe_cnt = 0;
+
+ for (i = 0; i < HIFC_TASK_PROCESS_EQE_LIMIT; i++) {
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+
+ /* Data in HW is in Big endian Format */
+ aeqe_desc = be32_to_cpu(aeqe_pos->desc);
+
+ /* HW updates wrapped bit, when it adds eq element event */
+ if (EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) == eq->wrapped)
+ return false;
+
+		/* This memory barrier is needed to keep us from reading
+		 * any other fields out of the aeq element until we have
+		 * verified that its descriptor has been written back
+		 * by hardware.
+		 */
+ dma_rmb();
+
+ event = EQ_ELEM_DESC_GET(aeqe_desc, TYPE);
+ if (EQ_ELEM_DESC_GET(aeqe_desc, SRC))
+ aeq_swe_handler(aeqs, aeqe_pos, event);
+ else
+ aeq_hwe_handler(aeqs, aeqe_pos, event, aeqe_desc);
+
+ eq->cons_idx++;
+
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
+ if (++eqe_cnt >= HIFC_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, HIFC_EQ_NOT_ARMED);
+ }
+ }
+
+ return true;
+}
+
+/**
+ * ceq_irq_handler - handler for the ceq event
+ * @eq: the completion event queue of the event
+ * Return: true - the ceqe processing limit was hit and events may remain,
+ * false - the queue has been drained
+ **/
+static bool ceq_irq_handler(struct hifc_eq *eq)
+{
+ struct hifc_ceqs *ceqs = ceq_to_ceqs(eq);
+ u32 ceqe, eqe_cnt = 0;
+ u32 i;
+
+ for (i = 0; i < g_num_ceqe_in_tasklet; i++) {
+ ceqe = *(GET_CURR_CEQ_ELEM(eq));
+ ceqe = be32_to_cpu(ceqe);
+
+ /* HW updates wrapped bit, when it adds eq element event */
+ if (EQ_ELEM_DESC_GET(ceqe, WRAPPED) == eq->wrapped)
+ return false;
+
+ ceq_event_handler(ceqs, ceqe);
+
+ eq->cons_idx++;
+
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
+ if (++eqe_cnt >= HIFC_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, HIFC_EQ_NOT_ARMED);
+ }
+ }
+
+ return true;
+}
+
+/**
+ * eq_irq_handler - handler for the eq event
+ * @data: the event queue of the event
+ * Return: true - events may remain and the handler should be rescheduled,
+ * false - the queue has been drained
+ **/
+static bool eq_irq_handler(void *data)
+{
+ struct hifc_eq *eq = (struct hifc_eq *)data;
+ bool uncompleted;
+
+ if (eq->type == HIFC_AEQ)
+ uncompleted = aeq_irq_handler(eq);
+ else
+ uncompleted = ceq_irq_handler(eq);
+
+ set_eq_cons_idx(eq, uncompleted ? HIFC_EQ_NOT_ARMED : HIFC_EQ_ARMED);
+
+ return uncompleted;
+}
+
+static void reschedule_eq_handler(struct hifc_eq *eq)
+{
+ if (eq->type == HIFC_AEQ) {
+ struct hifc_aeqs *aeqs = aeq_to_aeqs(eq);
+ struct workqueue_struct *workq = aeqs->workq;
+ struct hifc_eq_work *aeq_work = &eq->aeq_work;
+
+ queue_work(workq, &aeq_work->work);
+ } else {
+ tasklet_schedule(&eq->ceq_tasklet);
+ }
+}
+
+/**
+ * ceq_tasklet - ceq tasklet for the event
+ * @ceq_data: data that will be used by the tasklet(ceq)
+ **/
+static void ceq_tasklet(ulong ceq_data)
+{
+ struct hifc_ceq_tasklet_data *ceq_tasklet_data =
+ (struct hifc_ceq_tasklet_data *)ceq_data;
+ struct hifc_eq *eq = (struct hifc_eq *)ceq_tasklet_data->data;
+
+ eq->soft_intr_jif = jiffies;
+
+ if (eq_irq_handler(ceq_tasklet_data->data))
+ reschedule_eq_handler(ceq_tasklet_data->data);
+}
+
+/**
+ * eq_irq_work - eq work for the event
+ * @work: the work that is associated with the eq
+ **/
+static void eq_irq_work(struct work_struct *work)
+{
+ struct hifc_eq_work *aeq_work =
+ container_of(work, struct hifc_eq_work, work);
+
+ if (eq_irq_handler(aeq_work->data))
+ reschedule_eq_handler(aeq_work->data);
+}
+
+struct hifc_ceq_ctrl_reg {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 q_id;
+ u32 ctrl0;
+ u32 ctrl1;
+};
+
+static int set_ceq_ctrl_reg(struct hifc_hwdev *hwdev, u16 q_id,
+ u32 ctrl0, u32 ctrl1)
+{
+ struct hifc_ceq_ctrl_reg ceq_ctrl = {0};
+ u16 in_size = sizeof(ceq_ctrl);
+ u16 out_size = sizeof(ceq_ctrl);
+ int err;
+
+ err = hifc_global_func_id_get(hwdev, &ceq_ctrl.func_id);
+ if (err)
+ return err;
+
+ ceq_ctrl.q_id = q_id;
+ ceq_ctrl.ctrl0 = ctrl0;
+ ceq_ctrl.ctrl1 = ctrl1;
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_CEQ_CTRL_REG_WR_BY_UP,
+ &ceq_ctrl, in_size,
+ &ceq_ctrl, &out_size, 0);
+ if (err || !out_size || ceq_ctrl.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ceq %d ctrl reg, err: %d status: 0x%x, out_size: 0x%x\n",
+ q_id, err, ceq_ctrl.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+/**
+ * set_eq_ctrls - set the eq's control registers
+ * @eq: the event queue for setting
+ * Return: 0 - success, negative - failure
+ **/
+static int set_eq_ctrls(struct hifc_eq *eq)
+{
+ enum hifc_eq_type type = eq->type;
+ struct hifc_hwif *hwif = eq->hwdev->hwif;
+ struct irq_info *eq_irq = &eq->eq_irq;
+ u32 addr, val, ctrl0, ctrl1, page_size_val, elem_size;
+ u32 pci_intf_idx = HIFC_PCI_INTF_IDX(hwif);
+ int err;
+
+ if (type == HIFC_AEQ) {
+ /* set ctrl0 */
+ addr = HIFC_CSR_AEQ_CTRL_0_ADDR(eq->q_id);
+
+ val = hifc_hwif_read_reg(hwif, addr);
+
+ val = AEQ_CTRL_0_CLEAR(val, INTR_IDX) &
+ AEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
+ AEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
+ AEQ_CTRL_0_CLEAR(val, INTR_MODE);
+
+		ctrl0 = AEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+			AEQ_CTRL_0_SET(AEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+			AEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+			AEQ_CTRL_0_SET(HIFC_INTR_MODE_ARMED, INTR_MODE);
+
+ val |= ctrl0;
+
+ hifc_hwif_write_reg(hwif, addr, val);
+
+ /* set ctrl1 */
+ addr = HIFC_CSR_AEQ_CTRL_1_ADDR(eq->q_id);
+
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+ elem_size = EQ_SET_HW_ELEM_SIZE_VAL(eq);
+
+ ctrl1 = AEQ_CTRL_1_SET(eq->eq_len, LEN) |
+ AEQ_CTRL_1_SET(elem_size, ELEM_SIZE) |
+ AEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
+
+ hifc_hwif_write_reg(hwif, addr, ctrl1);
+
+ } else {
+ ctrl0 = CEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+ CEQ_CTRL_0_SET(CEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+ CEQ_CTRL_0_SET(CEQ_LMT_KICK_DEFAULT, LIMIT_KICK) |
+ CEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+ CEQ_CTRL_0_SET(HIFC_INTR_MODE_ARMED, INTR_MODE);
+
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+
+ ctrl1 = CEQ_CTRL_1_SET(eq->eq_len, LEN) |
+ CEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
+
+ /* set ceq ctrl reg through mgmt cpu */
+ err = set_ceq_ctrl_reg(eq->hwdev, eq->q_id, ctrl0, ctrl1);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * ceq_elements_init - Initialize all the elements in the ceq
+ * @eq: the event queue
+ * @init_val: value to initialize the elements with
+ **/
+static void ceq_elements_init(struct hifc_eq *eq, u32 init_val)
+{
+ u32 i;
+ u32 *ceqe;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ ceqe = GET_CEQ_ELEM(eq, i);
+ *(ceqe) = cpu_to_be32(init_val);
+ }
+
+ wmb(); /* Write the init values */
+}
+
+/**
+ * aeq_elements_init - initialize all the elements in the aeq
+ * @eq: the event queue
+ * @init_val: value to initialize the elements with
+ **/
+static void aeq_elements_init(struct hifc_eq *eq, u32 init_val)
+{
+ struct hifc_aeq_elem *aeqe;
+ u32 i;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ aeqe = GET_AEQ_ELEM(eq, i);
+ aeqe->desc = cpu_to_be32(init_val);
+ }
+
+ wmb(); /* Write the init values */
+}
+
+static void free_eq_pages_desc(struct hifc_eq *eq)
+{
+ kfree(eq->virt_addr_for_free);
+ kfree(eq->dma_addr_for_free);
+ kfree(eq->virt_addr);
+ kfree(eq->dma_addr);
+}
+
+static int alloc_eq_pages_desc(struct hifc_eq *eq)
+{
+ u64 dma_addr_size, virt_addr_size;
+ int err;
+
+ dma_addr_size = eq->num_pages * sizeof(*eq->dma_addr);
+ virt_addr_size = eq->num_pages * sizeof(*eq->virt_addr);
+
+ eq->dma_addr = kzalloc(dma_addr_size, GFP_KERNEL);
+ if (!eq->dma_addr)
+ return -ENOMEM;
+
+ eq->virt_addr = kzalloc(virt_addr_size, GFP_KERNEL);
+ if (!eq->virt_addr) {
+ err = -ENOMEM;
+ goto virt_addr_alloc_err;
+ }
+
+ eq->dma_addr_for_free = kzalloc(dma_addr_size, GFP_KERNEL);
+ if (!eq->dma_addr_for_free) {
+ err = -ENOMEM;
+ goto dma_addr_free_alloc_err;
+ }
+
+ eq->virt_addr_for_free = kzalloc(virt_addr_size, GFP_KERNEL);
+ if (!eq->virt_addr_for_free) {
+ err = -ENOMEM;
+ goto virt_addr_free_alloc_err;
+ }
+
+ return 0;
+
+virt_addr_free_alloc_err:
+ kfree(eq->dma_addr_for_free);
+dma_addr_free_alloc_err:
+ kfree(eq->virt_addr);
+virt_addr_alloc_err:
+ kfree(eq->dma_addr);
+ return err;
+}
+
+#define IS_ALIGN(x, a) (((x) & ((a) - 1)) == 0)
+
+static int init_eq_elements(struct hifc_eq *eq)
+{
+ u32 init_val;
+
+ eq->num_elem_in_pg = GET_EQ_NUM_ELEMS(eq, eq->page_size);
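+	/* IS_ALIGN(x, x) is only true when x is a power of two (or zero) */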
+ if (!IS_ALIGN(eq->num_elem_in_pg, eq->num_elem_in_pg)) {
+		sdk_err(eq->hwdev->dev_hdl, "Number of elements in eq page is not a power of 2\n");
+ return -EINVAL;
+ }
+
+ init_val = EQ_WRAPPED(eq);
+
+ if (eq->type == HIFC_AEQ)
+ aeq_elements_init(eq, init_val);
+ else
+ ceq_elements_init(eq, init_val);
+
+ return 0;
+}
+
+/**
+ * alloc_eq_pages - allocate the pages for the queue
+ * @eq: the event queue
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_eq_pages(struct hifc_eq *eq)
+{
+ struct hifc_hwif *hwif = eq->hwdev->hwif;
+ u16 pg_num, i;
+ u32 reg;
+ int err;
+ u8 flag = 0;
+
+ err = alloc_eq_pages_desc(eq);
+ if (err) {
+		sdk_err(eq->hwdev->dev_hdl, "Failed to alloc eq page descriptors\n");
+ return err;
+ }
+
+ for (pg_num = 0; pg_num < eq->num_pages; pg_num++) {
+ eq->virt_addr_for_free[pg_num] = dma_zalloc_coherent
+ (eq->hwdev->dev_hdl, eq->page_size,
+ &eq->dma_addr_for_free[pg_num], GFP_KERNEL);
+ if (!eq->virt_addr_for_free[pg_num]) {
+ err = -ENOMEM;
+ goto dma_alloc_err;
+ }
+
+ eq->dma_addr[pg_num] = eq->dma_addr_for_free[pg_num];
+ eq->virt_addr[pg_num] = eq->virt_addr_for_free[pg_num];
+ if (!IS_ALIGN(eq->dma_addr_for_free[pg_num],
+ eq->page_size)) {
+			sdk_info(eq->hwdev->dev_hdl,
+				 "Address is not aligned to %u bytes as hardware requires\n",
+				 eq->page_size);
+			sdk_info(eq->hwdev->dev_hdl, "Changing eq's page size to %u\n",
+				 ((eq->page_size) >> 1));
+ eq->dma_addr[pg_num] = ALIGN
+ (eq->dma_addr_for_free[pg_num],
+ (u64)((eq->page_size) >> 1));
+ eq->virt_addr[pg_num] = eq->virt_addr_for_free[pg_num] +
+ ((u64)eq->dma_addr[pg_num]
+ - (u64)eq->dma_addr_for_free[pg_num]);
+ flag = 1;
+ }
+ reg = HIFC_EQ_HI_PHYS_ADDR_REG(eq->type, eq->q_id, pg_num);
+ hifc_hwif_write_reg(hwif, reg,
+ upper_32_bits(eq->dma_addr[pg_num]));
+
+ reg = HIFC_EQ_LO_PHYS_ADDR_REG(eq->type, eq->q_id, pg_num);
+ hifc_hwif_write_reg(hwif, reg,
+ lower_32_bits(eq->dma_addr[pg_num]));
+ }
+
+ if (flag) {
+ eq->page_size = eq->page_size >> 1;
+ eq->eq_len = eq->eq_len >> 1;
+ }
+
+ err = init_eq_elements(eq);
+ if (err) {
+ sdk_err(eq->hwdev->dev_hdl, "Failed to init eq elements\n");
+ goto dma_alloc_err;
+ }
+
+ return 0;
+
+dma_alloc_err:
+ for (i = 0; i < pg_num; i++)
+ dma_free_coherent(eq->hwdev->dev_hdl, eq->page_size,
+ eq->virt_addr_for_free[i],
+ eq->dma_addr_for_free[i]);
+ free_eq_pages_desc(eq);
+ return err;
+}
+
+/**
+ * free_eq_pages - free the pages of the queue
+ * @eq: the event queue
+ **/
+static void free_eq_pages(struct hifc_eq *eq)
+{
+ struct hifc_hwdev *hwdev = eq->hwdev;
+ u16 pg_num;
+
+ for (pg_num = 0; pg_num < eq->num_pages; pg_num++)
+ dma_free_coherent(hwdev->dev_hdl, eq->orig_page_size,
+ eq->virt_addr_for_free[pg_num],
+ eq->dma_addr_for_free[pg_num]);
+
+ free_eq_pages_desc(eq);
+}
+
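+/**
+ * get_page_size - calculate the eq page size
+ * @eq: the event queue
+ * Return: the smallest power-of-two page size (at least EQ_MIN_PAGE_SIZE)
+ * that lets the whole queue fit into HIFC_EQ_MAX_PAGES pages
+ **/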
+static inline u32 get_page_size(struct hifc_eq *eq)
+{
+ u32 total_size;
+ u16 count, n = 0;
+
+ total_size = ALIGN((eq->eq_len * eq->elem_size), EQ_MIN_PAGE_SIZE);
+
+ if (total_size <= (HIFC_EQ_MAX_PAGES * EQ_MIN_PAGE_SIZE))
+ return EQ_MIN_PAGE_SIZE;
+
+ count = (u16)(ALIGN((total_size / HIFC_EQ_MAX_PAGES),
+ EQ_MIN_PAGE_SIZE) / EQ_MIN_PAGE_SIZE);
+
+ if (!(count & (count - 1)))
+ return EQ_MIN_PAGE_SIZE * count;
+
+ while (count) {
+ count >>= 1;
+ n++;
+ }
+
+ return EQ_MIN_PAGE_SIZE << n;
+}
+
+static int request_eq_irq(struct hifc_eq *eq, enum hifc_eq_type type,
+ struct irq_info *entry)
+{
+ int err = 0;
+
+ if (type == HIFC_AEQ) {
+ struct hifc_eq_work *aeq_work = &eq->aeq_work;
+
+ INIT_WORK(&aeq_work->work, eq_irq_work);
+ } else {
+ tasklet_init(&eq->ceq_tasklet, ceq_tasklet,
+ (ulong)(&eq->ceq_tasklet_data));
+ }
+
+ if (type == HIFC_AEQ) {
+ snprintf(eq->irq_name, sizeof(eq->irq_name),
+ "hifc_aeq%d@pci:%s", eq->q_id,
+ pci_name(eq->hwdev->pcidev_hdl));
+
+ err = request_irq(entry->irq_id, aeq_interrupt, 0UL,
+ eq->irq_name, eq);
+ } else {
+ snprintf(eq->irq_name, sizeof(eq->irq_name),
+ "hifc_ceq%d@pci:%s", eq->q_id,
+ pci_name(eq->hwdev->pcidev_hdl));
+
+ err = request_irq(entry->irq_id, ceq_interrupt, 0UL,
+ eq->irq_name, eq);
+ }
+
+ return err;
+}
+
+/**
+ * init_eq - initialize eq
+ * @eq: the event queue
+ * @hwdev: the pointer to hw device
+ * @q_id: Queue id number
+ * @q_len: the number of EQ elements
+ * @type: the type of the event queue, ceq or aeq
+ * @entry: msix entry associated with the event queue
+ * Return: 0 - Success, Negative - failure
+ **/
+static int init_eq(struct hifc_eq *eq, struct hifc_hwdev *hwdev, u16 q_id,
+ u32 q_len, enum hifc_eq_type type, struct irq_info *entry)
+{
+ int err = 0;
+
+ eq->hwdev = hwdev;
+ eq->q_id = q_id;
+ eq->type = type;
+ eq->eq_len = q_len;
+
+ /* clear eq_len to force eqe drop in hardware */
+ if (eq->type == HIFC_AEQ)
+ hifc_hwif_write_reg(eq->hwdev->hwif,
+ HIFC_CSR_AEQ_CTRL_1_ADDR(eq->q_id), 0);
+ else
+ set_ceq_ctrl_reg(eq->hwdev, eq->q_id, 0, 0);
+
+ eq->cons_idx = 0;
+ eq->wrapped = 0;
+
+ eq->elem_size = (type == HIFC_AEQ) ?
+ HIFC_AEQE_SIZE : HIFC_CEQE_SIZE;
+
+ eq->page_size = get_page_size(eq);
+ eq->orig_page_size = eq->page_size;
+ eq->num_pages = GET_EQ_NUM_PAGES(eq, eq->page_size);
+ if (eq->num_pages > HIFC_EQ_MAX_PAGES) {
+		sdk_err(hwdev->dev_hdl, "Number of pages %d is too many for eq\n",
+			eq->num_pages);
+ return -EINVAL;
+ }
+
+ err = alloc_eq_pages(eq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate pages for eq\n");
+ return err;
+ }
+
+ eq->eq_irq.msix_entry_idx = entry->msix_entry_idx;
+ eq->eq_irq.irq_id = entry->irq_id;
+
+ err = set_eq_ctrls(eq);
+ if (err) {
+		sdk_err(hwdev->dev_hdl, "Failed to set eq ctrls\n");
+ goto init_eq_ctrls_err;
+ }
+
+ hifc_hwif_write_reg(eq->hwdev->hwif, EQ_PROD_IDX_REG_ADDR(eq), 0);
+ set_eq_cons_idx(eq, HIFC_EQ_ARMED);
+
+ err = request_eq_irq(eq, type, entry);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to request irq for the eq, err: %d\n",
+ err);
+ goto req_irq_err;
+ }
+
+ hifc_set_msix_state(hwdev, entry->msix_entry_idx, HIFC_MSIX_ENABLE);
+
+ return 0;
+
+init_eq_ctrls_err:
+req_irq_err:
+ free_eq_pages(eq);
+ return err;
+}
+
+/**
+ * remove_eq - remove eq
+ * @eq: the event queue
+ **/
+static void remove_eq(struct hifc_eq *eq)
+{
+ struct irq_info *entry = &eq->eq_irq;
+
+ hifc_set_msix_state(eq->hwdev, entry->msix_entry_idx,
+ HIFC_MSIX_DISABLE);
+ synchronize_irq(entry->irq_id);
+
+ free_irq(entry->irq_id, eq);
+
+ if (eq->type == HIFC_AEQ) {
+ struct hifc_eq_work *aeq_work = &eq->aeq_work;
+
+ cancel_work_sync(&aeq_work->work);
+
+ /* clear eq_len to avoid hw access host memory */
+ hifc_hwif_write_reg(eq->hwdev->hwif,
+ HIFC_CSR_AEQ_CTRL_1_ADDR(eq->q_id), 0);
+ } else {
+ tasklet_kill(&eq->ceq_tasklet);
+
+ set_ceq_ctrl_reg(eq->hwdev, eq->q_id, 0, 0);
+ }
+
+ /* update cons_idx to avoid invalid interrupt */
+ eq->cons_idx = hifc_hwif_read_reg(eq->hwdev->hwif,
+ EQ_PROD_IDX_REG_ADDR(eq));
+ set_eq_cons_idx(eq, HIFC_EQ_NOT_ARMED);
+
+ free_eq_pages(eq);
+}
+
+/**
+ * hifc_aeqs_init - init all the aeqs
+ * @hwdev: the pointer to hw device
+ * @num_aeqs: number of AEQs
+ * @msix_entries: msix entries associated with the event queues
+ * Return: 0 - Success, Negative - failure
+ **/
+int hifc_aeqs_init(struct hifc_hwdev *hwdev, u16 num_aeqs,
+ struct irq_info *msix_entries)
+{
+ struct hifc_aeqs *aeqs;
+ int err;
+ u16 i, q_id;
+
+ aeqs = kzalloc(sizeof(*aeqs), GFP_KERNEL);
+ if (!aeqs)
+ return -ENOMEM;
+
+ hwdev->aeqs = aeqs;
+ aeqs->hwdev = hwdev;
+ aeqs->num_aeqs = num_aeqs;
+
+ aeqs->workq = create_singlethread_workqueue(HIFC_EQS_WQ_NAME);
+ if (!aeqs->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize aeq workqueue\n");
+ err = -ENOMEM;
+ goto create_work_err;
+ }
+
+ if (g_aeq_len < HIFC_MIN_AEQ_LEN || g_aeq_len > HIFC_MAX_AEQ_LEN) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_aeq_len value %d out of range, resetting to %d\n",
+ g_aeq_len, HIFC_DEFAULT_AEQ_LEN);
+ g_aeq_len = HIFC_DEFAULT_AEQ_LEN;
+ }
+
+ for (q_id = 0; q_id < num_aeqs; q_id++) {
+ err = init_eq(&aeqs->aeq[q_id], hwdev, q_id, g_aeq_len,
+ HIFC_AEQ, &msix_entries[q_id]);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeq %d\n",
+ q_id);
+ goto init_aeq_err;
+ }
+ }
+
+ return 0;
+
+init_aeq_err:
+ for (i = 0; i < q_id; i++)
+ remove_eq(&aeqs->aeq[i]);
+
+ destroy_workqueue(aeqs->workq);
+
+create_work_err:
+ kfree(aeqs);
+
+ return err;
+}
+
+/**
+ * hifc_aeqs_free - free all the aeqs
+ * @hwdev: the pointer to hw device
+ **/
+void hifc_aeqs_free(struct hifc_hwdev *hwdev)
+{
+ struct hifc_aeqs *aeqs = hwdev->aeqs;
+ enum hifc_aeq_type aeq_event = HIFC_HW_INTER_INT;
+ enum hifc_aeq_sw_type sw_aeq_event = HIFC_STATELESS_EVENT;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++)
+ remove_eq(&aeqs->aeq[q_id]);
+
+ for (; sw_aeq_event < HIFC_MAX_AEQ_SW_EVENTS; sw_aeq_event++)
+ hifc_aeq_unregister_swe_cb(hwdev, sw_aeq_event);
+
+ for (; aeq_event < HIFC_MAX_AEQ_EVENTS; aeq_event++)
+ hifc_aeq_unregister_hw_cb(hwdev, aeq_event);
+
+ destroy_workqueue(aeqs->workq);
+
+ kfree(aeqs);
+}
+
+/**
+ * hifc_ceqs_init - init all the ceqs
+ * @hwdev: the pointer to hw device
+ * @num_ceqs: number of CEQs
+ * @msix_entries: msix entries associated with the event queues
+ * Return: 0 - Success, Negative - failure
+ **/
+int hifc_ceqs_init(struct hifc_hwdev *hwdev, u16 num_ceqs,
+ struct irq_info *msix_entries)
+{
+ struct hifc_ceqs *ceqs;
+ int err;
+ u16 i, q_id;
+
+ ceqs = kzalloc(sizeof(*ceqs), GFP_KERNEL);
+ if (!ceqs)
+ return -ENOMEM;
+
+ hwdev->ceqs = ceqs;
+
+ ceqs->hwdev = hwdev;
+ ceqs->num_ceqs = num_ceqs;
+
+ if (g_ceq_len < HIFC_MIN_CEQ_LEN || g_ceq_len > HIFC_MAX_CEQ_LEN) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_ceq_len value %d out of range, resetting to %d\n",
+ g_ceq_len, HIFC_DEFAULT_CEQ_LEN);
+ g_ceq_len = HIFC_DEFAULT_CEQ_LEN;
+ }
+
+ if (!g_num_ceqe_in_tasklet) {
+		sdk_warn(hwdev->dev_hdl, "Module Parameter g_num_ceqe_in_tasklet cannot be zero, resetting to %d\n",
+ HIFC_TASK_PROCESS_EQE_LIMIT);
+ g_num_ceqe_in_tasklet = HIFC_TASK_PROCESS_EQE_LIMIT;
+ }
+
+ for (q_id = 0; q_id < num_ceqs; q_id++) {
+ err = init_eq(&ceqs->ceq[q_id], hwdev, q_id, g_ceq_len,
+ HIFC_CEQ, &msix_entries[q_id]);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init ceq %d\n",
+ q_id);
+ goto init_ceq_err;
+ }
+ }
+
+ return 0;
+
+init_ceq_err:
+ for (i = 0; i < q_id; i++)
+ remove_eq(&ceqs->ceq[i]);
+
+ kfree(ceqs);
+
+ return err;
+}
+
+/**
+ * hifc_ceqs_free - free all the ceqs
+ * @hwdev: the pointer to hw device
+ **/
+void hifc_ceqs_free(struct hifc_hwdev *hwdev)
+{
+ struct hifc_ceqs *ceqs = hwdev->ceqs;
+ enum hifc_ceq_event ceq_event = HIFC_CMDQ;
+ u16 q_id;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++)
+ remove_eq(&ceqs->ceq[q_id]);
+
+ for (; ceq_event < HIFC_MAX_CEQ_EVENTS; ceq_event++)
+ hifc_ceq_unregister_cb(hwdev, ceq_event);
+
+ kfree(ceqs);
+}
+
+void hifc_get_ceq_irqs(struct hifc_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs)
+{
+ struct hifc_ceqs *ceqs = hwdev->ceqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
+ irqs[q_id].irq_id = ceqs->ceq[q_id].eq_irq.irq_id;
+ irqs[q_id].msix_entry_idx =
+ ceqs->ceq[q_id].eq_irq.msix_entry_idx;
+ }
+
+ *num_irqs = ceqs->num_ceqs;
+}
+
+void hifc_get_aeq_irqs(struct hifc_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs)
+{
+ struct hifc_aeqs *aeqs = hwdev->aeqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
+ irqs[q_id].irq_id = aeqs->aeq[q_id].eq_irq.irq_id;
+ irqs[q_id].msix_entry_idx =
+ aeqs->aeq[q_id].eq_irq.msix_entry_idx;
+ }
+
+ *num_irqs = aeqs->num_aeqs;
+}
+
+void hifc_dump_aeq_info(struct hifc_hwdev *hwdev)
+{
+ struct hifc_aeq_elem *aeqe_pos;
+ struct hifc_eq *eq;
+ u32 addr, ci, pi;
+ int q_id;
+
+ for (q_id = 0; q_id < hwdev->aeqs->num_aeqs; q_id++) {
+ eq = &hwdev->aeqs->aeq[q_id];
+ addr = EQ_CONS_IDX_REG_ADDR(eq);
+ ci = hifc_hwif_read_reg(hwdev->hwif, addr);
+ addr = EQ_PROD_IDX_REG_ADDR(eq);
+ pi = hifc_hwif_read_reg(hwdev->hwif, addr);
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+ sdk_err(hwdev->dev_hdl, "Aeq id: %d, ci: 0x%08x, pi: 0x%x, work_state: 0x%x, wrap: %d, desc: 0x%x\n",
+ q_id, ci, pi, work_busy(&eq->aeq_work.work),
+ eq->wrapped, be32_to_cpu(aeqe_pos->desc));
+ }
+}
+
diff --git a/drivers/scsi/huawei/hifc/hifc_eqs.h b/drivers/scsi/huawei/hifc/hifc_eqs.h
new file mode 100644
index 000000000000..2dcfc432c8f2
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_eqs.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ */
+
+#ifndef HIFC_EQS_H
+#define HIFC_EQS_H
+
+#define HIFC_MAX_AEQS 3
+#define HIFC_MAX_CEQS 32
+
+#define HIFC_EQ_MAX_PAGES 8
+
+#define HIFC_AEQE_SIZE 64
+#define HIFC_CEQE_SIZE 4
+
+#define HIFC_AEQE_DESC_SIZE 4
+#define HIFC_AEQE_DATA_SIZE \
+ (HIFC_AEQE_SIZE - HIFC_AEQE_DESC_SIZE)
+
+#define HIFC_DEFAULT_AEQ_LEN 4096
+#define HIFC_DEFAULT_CEQ_LEN 8192
+
+#define HIFC_MIN_AEQ_LEN 64
+#define HIFC_MAX_AEQ_LEN (512 * 1024)
+#define HIFC_MIN_CEQ_LEN 64
+#define HIFC_MAX_CEQ_LEN (1024 * 1024)
+
+#define HIFC_CEQ_ID_CMDQ 0
+#define EQ_IRQ_NAME_LEN 64
+
+/* EQ registers */
+#define HIFC_AEQ_MTT_OFF_BASE_ADDR 0x200
+#define HIFC_CEQ_MTT_OFF_BASE_ADDR 0x400
+
+#define HIFC_EQ_MTT_OFF_STRIDE 0x40
+
+#define HIFC_CSR_AEQ_MTT_OFF(id) \
+ (HIFC_AEQ_MTT_OFF_BASE_ADDR + (id) * HIFC_EQ_MTT_OFF_STRIDE)
+
+#define HIFC_CSR_CEQ_MTT_OFF(id) \
+ (HIFC_CEQ_MTT_OFF_BASE_ADDR + (id) * HIFC_EQ_MTT_OFF_STRIDE)
+
+#define HIFC_CSR_EQ_PAGE_OFF_STRIDE 8
+
+#define HIFC_AEQ_HI_PHYS_ADDR_REG(q_id, pg_num) \
+ (HIFC_CSR_AEQ_MTT_OFF(q_id) + \
+ (pg_num) * HIFC_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define HIFC_AEQ_LO_PHYS_ADDR_REG(q_id, pg_num) \
+ (HIFC_CSR_AEQ_MTT_OFF(q_id) + \
+ (pg_num) * HIFC_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define HIFC_CEQ_HI_PHYS_ADDR_REG(q_id, pg_num) \
+ (HIFC_CSR_CEQ_MTT_OFF(q_id) + \
+ (pg_num) * HIFC_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define HIFC_CEQ_LO_PHYS_ADDR_REG(q_id, pg_num) \
+ (HIFC_CSR_CEQ_MTT_OFF(q_id) + \
+ (pg_num) * HIFC_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define HIFC_EQ_HI_PHYS_ADDR_REG(type, q_id, pg_num) \
+ ((u32)((type == HIFC_AEQ) ? \
+ HIFC_AEQ_HI_PHYS_ADDR_REG(q_id, pg_num) : \
+ HIFC_CEQ_HI_PHYS_ADDR_REG(q_id, pg_num)))
+
+#define HIFC_EQ_LO_PHYS_ADDR_REG(type, q_id, pg_num) \
+ ((u32)((type == HIFC_AEQ) ? \
+ HIFC_AEQ_LO_PHYS_ADDR_REG(q_id, pg_num) : \
+ HIFC_CEQ_LO_PHYS_ADDR_REG(q_id, pg_num)))
+
+#define HIFC_AEQ_CTRL_0_ADDR_BASE 0xE00
+#define HIFC_AEQ_CTRL_1_ADDR_BASE 0xE04
+#define HIFC_AEQ_CONS_IDX_0_ADDR_BASE 0xE08
+#define HIFC_AEQ_CONS_IDX_1_ADDR_BASE 0xE0C
+
+#define HIFC_EQ_OFF_STRIDE 0x80
+
+#define HIFC_CSR_AEQ_CTRL_0_ADDR(idx) \
+ (HIFC_AEQ_CTRL_0_ADDR_BASE + (idx) * HIFC_EQ_OFF_STRIDE)
+
+#define HIFC_CSR_AEQ_CTRL_1_ADDR(idx) \
+ (HIFC_AEQ_CTRL_1_ADDR_BASE + (idx) * HIFC_EQ_OFF_STRIDE)
+
+#define HIFC_CSR_AEQ_CONS_IDX_ADDR(idx) \
+ (HIFC_AEQ_CONS_IDX_0_ADDR_BASE + (idx) * HIFC_EQ_OFF_STRIDE)
+
+#define HIFC_CSR_AEQ_PROD_IDX_ADDR(idx) \
+ (HIFC_AEQ_CONS_IDX_1_ADDR_BASE + (idx) * HIFC_EQ_OFF_STRIDE)
+
+#define HIFC_CEQ_CTRL_0_ADDR_BASE 0x1000
+#define HIFC_CEQ_CTRL_1_ADDR_BASE 0x1004
+#define HIFC_CEQ_CONS_IDX_0_ADDR_BASE 0x1008
+#define HIFC_CEQ_CONS_IDX_1_ADDR_BASE 0x100C
+
+#define HIFC_CSR_CEQ_CTRL_0_ADDR(idx) \
+ (HIFC_CEQ_CTRL_0_ADDR_BASE + (idx) * HIFC_EQ_OFF_STRIDE)
+
+#define HIFC_CSR_CEQ_CTRL_1_ADDR(idx) \
+ (HIFC_CEQ_CTRL_1_ADDR_BASE + (idx) * HIFC_EQ_OFF_STRIDE)
+
+#define HIFC_CSR_CEQ_CONS_IDX_ADDR(idx) \
+ (HIFC_CEQ_CONS_IDX_0_ADDR_BASE + (idx) * HIFC_EQ_OFF_STRIDE)
+
+#define HIFC_CSR_CEQ_PROD_IDX_ADDR(idx) \
+ (HIFC_CEQ_CONS_IDX_1_ADDR_BASE + (idx) * HIFC_EQ_OFF_STRIDE)
+
+enum hifc_eq_type {
+ HIFC_AEQ,
+ HIFC_CEQ
+};
+
+enum hifc_eq_intr_mode {
+ HIFC_INTR_MODE_ARMED,
+ HIFC_INTR_MODE_ALWAYS,
+};
+
+enum hifc_eq_ci_arm_state {
+ HIFC_EQ_NOT_ARMED,
+ HIFC_EQ_ARMED,
+};
+
+struct hifc_eq_work {
+ struct work_struct work;
+ void *data;
+};
+
+struct hifc_ceq_tasklet_data {
+ void *data;
+};
+
+struct hifc_eq {
+ struct hifc_hwdev *hwdev;
+ u16 q_id;
+ enum hifc_eq_type type;
+ u32 page_size;
+ u32 orig_page_size;
+ u32 eq_len;
+
+ u32 cons_idx;
+ u16 wrapped;
+
+ u16 elem_size;
+ u16 num_pages;
+ u32 num_elem_in_pg;
+
+ struct irq_info eq_irq;
+ char irq_name[EQ_IRQ_NAME_LEN];
+
+ dma_addr_t *dma_addr;
+ u8 **virt_addr;
+ dma_addr_t *dma_addr_for_free;
+ u8 **virt_addr_for_free;
+
+ struct hifc_eq_work aeq_work;
+ struct tasklet_struct ceq_tasklet;
+ struct hifc_ceq_tasklet_data ceq_tasklet_data;
+
+ u64 hard_intr_jif;
+ u64 soft_intr_jif;
+};
+
+struct hifc_aeq_elem {
+ u8 aeqe_data[HIFC_AEQE_DATA_SIZE];
+ u32 desc;
+};
+
+enum hifc_aeq_cb_state {
+ HIFC_AEQ_HW_CB_REG = 0,
+ HIFC_AEQ_HW_CB_RUNNING,
+ HIFC_AEQ_SW_CB_REG,
+ HIFC_AEQ_SW_CB_RUNNING,
+};
+
+struct hifc_aeqs {
+ struct hifc_hwdev *hwdev;
+
+ hifc_aeq_hwe_cb aeq_hwe_cb[HIFC_MAX_AEQ_EVENTS];
+ hifc_aeq_swe_cb aeq_swe_cb[HIFC_MAX_AEQ_SW_EVENTS];
+ unsigned long aeq_hw_cb_state[HIFC_MAX_AEQ_EVENTS];
+ unsigned long aeq_sw_cb_state[HIFC_MAX_AEQ_SW_EVENTS];
+
+ struct hifc_eq aeq[HIFC_MAX_AEQS];
+ u16 num_aeqs;
+
+ struct workqueue_struct *workq;
+};
+
+enum hifc_ceq_cb_state {
+ HIFC_CEQ_CB_REG = 0,
+ HIFC_CEQ_CB_RUNNING,
+};
+
+struct hifc_ceqs {
+ struct hifc_hwdev *hwdev;
+
+ hifc_ceq_event_cb ceq_cb[HIFC_MAX_CEQ_EVENTS];
+ void *ceq_data[HIFC_MAX_CEQ_EVENTS];
+ unsigned long ceq_cb_state[HIFC_MAX_CEQ_EVENTS];
+
+ struct hifc_eq ceq[HIFC_MAX_CEQS];
+ u16 num_ceqs;
+};
+
+int hifc_aeqs_init(struct hifc_hwdev *hwdev, u16 num_aeqs,
+ struct irq_info *msix_entries);
+
+void hifc_aeqs_free(struct hifc_hwdev *hwdev);
+
+int hifc_ceqs_init(struct hifc_hwdev *hwdev, u16 num_ceqs,
+ struct irq_info *msix_entries);
+
+void hifc_ceqs_free(struct hifc_hwdev *hwdev);
+
+void hifc_get_ceq_irqs(struct hifc_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs);
+
+void hifc_get_aeq_irqs(struct hifc_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs);
+
+void hifc_dump_aeq_info(struct hifc_hwdev *hwdev);
+
+#endif
diff --git a/drivers/scsi/huawei/hifc/hifc_hw.h b/drivers/scsi/huawei/hifc/hifc_hw.h
new file mode 100644
index 000000000000..49b2edd2bac6
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_hw.h
@@ -0,0 +1,611 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef HIFC_HW_H_
+#define HIFC_HW_H_
+
+#ifndef __BIG_ENDIAN__
+#define __BIG_ENDIAN__ 0x4321
+#endif
+
+#ifndef __LITTLE_ENDIAN__
+#define __LITTLE_ENDIAN__ 0x1234
+#endif
+
+enum hifc_mod_type {
+ HIFC_MOD_COMM = 0, /* HW communication module */
+	HIFC_MOD_L2NIC = 1, /* L2NIC module */
+ HIFC_MOD_FCOE = 6,
+ HIFC_MOD_CFGM = 7, /* Configuration module */
+ HIFC_MOD_FC = 10,
+ HIFC_MOD_HILINK = 14,
+ HIFC_MOD_HW_MAX = 16, /* hardware max module id */
+
+ /* Software module id, for PF/VF and multi-host */
+ HIFC_MOD_MAX,
+};
+
+struct hifc_cmd_buf {
+ void *buf;
+ dma_addr_t dma_addr;
+ u16 size;
+};
+
+enum hifc_ack_type {
+ HIFC_ACK_TYPE_CMDQ,
+ HIFC_ACK_TYPE_SHARE_CQN,
+ HIFC_ACK_TYPE_APP_CQN,
+ HIFC_MOD_ACK_MAX = 15,
+};
+
+int hifc_msg_to_mgmt_sync(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout);
+
+/* PF/VF send msg to uP by api cmd, and return immediately */
+int hifc_msg_to_mgmt_async(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size);
+
+int hifc_api_cmd_write_nack(void *hwdev, u8 dest,
+ void *cmd, u16 size);
+
+int hifc_api_cmd_read_ack(void *hwdev, u8 dest,
+ void *cmd, u16 size, void *ack, u16 ack_size);
+/* PF/VF send cmd to ucode by cmdq, and return if success.
+ * timeout=0, use default timeout.
+ */
+int hifc_cmdq_direct_resp(void *hwdev, enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd,
+ struct hifc_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout);
+/* Notes:
+ * 1. it is still open whether the timeout parameter is needed here
+ * 2. out_param indicates the status of the microcode processing the command
+ */
+
+/* PF/VF send cmd to ucode by cmdq, and return detailed result.
+ * timeout=0, use default timeout.
+ */
+int hifc_cmdq_detail_resp(void *hwdev, enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd,
+ struct hifc_cmd_buf *buf_in,
+ struct hifc_cmd_buf *buf_out, u32 timeout);
+
+/* PF/VF send cmd to ucode by cmdq, and return immediately
+ */
+int hifc_cmdq_async(void *hwdev, enum hifc_ack_type ack_type,
+ enum hifc_mod_type mod, u8 cmd,
+ struct hifc_cmd_buf *buf_in);
+
+int hifc_ppf_tmr_start(void *hwdev);
+int hifc_ppf_tmr_stop(void *hwdev);
+
+enum hifc_ceq_event {
+ HIFC_CMDQ = 3,
+ HIFC_MAX_CEQ_EVENTS = 6,
+};
+
+typedef void (*hifc_ceq_event_cb)(void *handle, u32 ceqe_data);
+int hifc_ceq_register_cb(void *hwdev, enum hifc_ceq_event event,
+ hifc_ceq_event_cb callback);
+void hifc_ceq_unregister_cb(void *hwdev, enum hifc_ceq_event event);
+
+enum hifc_aeq_type {
+ HIFC_HW_INTER_INT = 0,
+ HIFC_MBX_FROM_FUNC = 1,
+ HIFC_MSG_FROM_MGMT_CPU = 2,
+ HIFC_API_RSP = 3,
+ HIFC_API_CHAIN_STS = 4,
+ HIFC_MBX_SEND_RSLT = 5,
+ HIFC_MAX_AEQ_EVENTS
+};
+
+enum hifc_aeq_sw_type {
+ HIFC_STATELESS_EVENT = 0,
+ HIFC_STATEFULL_EVENT = 1,
+ HIFC_MAX_AEQ_SW_EVENTS
+};
+
+typedef void (*hifc_aeq_hwe_cb)(void *handle, u8 *data, u8 size);
+int hifc_aeq_register_hw_cb(void *hwdev, enum hifc_aeq_type event,
+ hifc_aeq_hwe_cb hwe_cb);
+void hifc_aeq_unregister_hw_cb(void *hwdev, enum hifc_aeq_type event);
+
+typedef u8 (*hifc_aeq_swe_cb)(void *handle, u8 event, u64 data);
+int hifc_aeq_register_swe_cb(void *hwdev, enum hifc_aeq_sw_type event,
+ hifc_aeq_swe_cb aeq_swe_cb);
+void hifc_aeq_unregister_swe_cb(void *hwdev, enum hifc_aeq_sw_type event);
+
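+/*
+ * A minimal sketch of hooking an AEQ hardware event with the interface above.
+ * The handler name and its body are illustrative only and not part of this
+ * driver:
+ *
+ *	static void demo_mgmt_msg_handler(void *handle, u8 *data, u8 size)
+ *	{
+ *		pr_info("aeq event payload of %u bytes\n", size);
+ *	}
+ *
+ *	err = hifc_aeq_register_hw_cb(hwdev, HIFC_MSG_FROM_MGMT_CPU,
+ *				      demo_mgmt_msg_handler);
+ *	...
+ *	hifc_aeq_unregister_hw_cb(hwdev, HIFC_MSG_FROM_MGMT_CPU);
+ */
+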
+typedef void (*hifc_mgmt_msg_cb)(void *hwdev, void *pri_handle,
+ u8 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+int hifc_register_mgmt_msg_cb(void *hwdev,
+ enum hifc_mod_type mod, void *pri_handle,
+ hifc_mgmt_msg_cb callback);
+void hifc_unregister_mgmt_msg_cb(void *hwdev, enum hifc_mod_type mod);
+
+struct hifc_cmd_buf *hifc_alloc_cmd_buf(void *hwdev);
+void hifc_free_cmd_buf(void *hwdev, struct hifc_cmd_buf *buf);
+
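+/*
+ * A minimal sketch of sending a command through the cmdq interface using the
+ * command buffer helpers above. The opcode, payload and error handling are
+ * illustrative only:
+ *
+ *	struct hifc_cmd_buf *buf_in = hifc_alloc_cmd_buf(hwdev);
+ *	u64 out_param = 0;
+ *	int err;
+ *
+ *	if (!buf_in)
+ *		return -ENOMEM;
+ *	memcpy(buf_in->buf, payload, payload_len);
+ *	buf_in->size = payload_len;
+ *	err = hifc_cmdq_direct_resp(hwdev, HIFC_ACK_TYPE_CMDQ, HIFC_MOD_FC,
+ *				    cmd, buf_in, &out_param, 0);
+ *	hifc_free_cmd_buf(hwdev, buf_in);
+ */
+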
+int hifc_alloc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base);
+void hifc_free_db_addr(void *hwdev, void __iomem *db_base,
+ void __iomem *dwqe_base);
+
+struct nic_interrupt_info {
+ u32 lli_set;
+ u32 interrupt_coalesc_set;
+ u16 msix_index;
+ u8 lli_credit_limit;
+ u8 lli_timer_cfg;
+ u8 pending_limt;
+ u8 coalesc_timer_cfg;
+ u8 resend_timer_cfg;
+};
+
+int hifc_get_interrupt_cfg(void *hwdev,
+ struct nic_interrupt_info *interrupt_info);
+
+int hifc_set_interrupt_cfg(void *hwdev,
+ struct nic_interrupt_info interrupt_info);
+
+/* The driver code implementation interface */
+void hifc_misx_intr_clear_resend_bit(void *hwdev,
+ u16 msix_idx, u8 clear_resend_en);
+
+struct hifc_sq_attr {
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u64 ci_dma_base;
+};
+
+int hifc_set_ci_table(void *hwdev, u16 q_id, struct hifc_sq_attr *attr);
+
+int hifc_set_root_ctxt(void *hwdev, u16 rq_depth, u16 sq_depth, int rx_buf_sz);
+int hifc_clean_root_ctxt(void *hwdev);
+void hifc_record_pcie_error(void *hwdev);
+
+int hifc_func_rx_tx_flush(void *hwdev);
+
+int hifc_func_tmr_bitmap_set(void *hwdev, bool enable);
+
+struct hifc_init_para {
+	/* Record hifc_pcidev or NDIS_Adapter pointer address */
+ void *adapter_hdl;
+ /* Record pcidev or Handler pointer address
+ * for example: ioremap interface input parameter
+ */
+ void *pcidev_hdl;
+ /* Record pcidev->dev or Handler pointer address which used to
+ * dma address application or dev_err print the parameter
+ */
+ void *dev_hdl;
+
+	void *cfg_reg_base; /* Configuration registers virtual address, bar0/1 */
+	/* interrupt configuration register address, bar2/3 */
+	void *intr_reg_base;
+	u64 db_base_phy;
+	void *db_base; /* the doorbell address, bar4/5 higher 4M space */
+	void *dwqe_mapping; /* direct wqe 4M, follows the doorbell address space */
+ void **hwdev;
+ void *chip_node;
+	/* In the bmgw x86 host, the driver can't send messages to the mgmt
+	 * cpu directly; it needs to transmit them through the ppf mbox to
+	 * the bmgw arm host.
+	 */
+ void *ppf_hwdev;
+};
+
+#ifndef IFNAMSIZ
+#define IFNAMSIZ 16
+#endif
+#define MAX_FUNCTION_NUM 512
+#define HIFC_MAX_PF_NUM 16
+#define HIFC_MAX_COS 8
+#define INIT_FAILED 0
+#define INIT_SUCCESS 1
+#define MAX_DRV_BUF_SIZE 4096
+
+struct hifc_cmd_get_light_module_abs {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsv[2];
+};
+
+#define SFP_INFO_MAX_SIZE 512
+struct hifc_cmd_get_sfp_qsfp_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 port_id;
+ u8 wire_type;
+ u16 out_len;
+ u8 sfp_qsfp_info[SFP_INFO_MAX_SIZE];
+};
+
+#define HIFC_MAX_PORT_ID 4
+
+struct hifc_port_routine_cmd {
+ bool up_send_sfp_info;
+ bool up_send_sfp_abs;
+
+ struct hifc_cmd_get_sfp_qsfp_info sfp_info;
+ struct hifc_cmd_get_light_module_abs abs;
+};
+
+struct card_node {
+ struct list_head node;
+ struct list_head func_list;
+ char chip_name[IFNAMSIZ];
+ void *log_info;
+ void *dbgtool_info;
+ void *func_handle_array[MAX_FUNCTION_NUM];
+ unsigned char dp_bus_num;
+ u8 func_num;
+ struct attribute dbgtool_attr_file;
+
+ bool cos_up_setted;
+ u8 cos_up[HIFC_MAX_COS];
+ bool ppf_state;
+ u8 pf_bus_num[HIFC_MAX_PF_NUM];
+
+ struct hifc_port_routine_cmd rt_cmd[HIFC_MAX_PORT_ID];
+
+ /* mutex used for copy sfp info */
+ struct mutex sfp_mutex;
+};
+
+enum hifc_hwdev_init_state {
+ HIFC_HWDEV_NONE_INITED = 0,
+ HIFC_HWDEV_CLP_INITED,
+ HIFC_HWDEV_AEQ_INITED,
+ HIFC_HWDEV_MGMT_INITED,
+ HIFC_HWDEV_MBOX_INITED,
+ HIFC_HWDEV_CMDQ_INITED,
+ HIFC_HWDEV_COMM_CH_INITED,
+ HIFC_HWDEV_ALL_INITED,
+ HIFC_HWDEV_MAX_INVAL_INITED
+};
+
+enum hifc_func_cap {
+ /* send message to mgmt cpu directly */
+ HIFC_FUNC_MGMT = 1 << 0,
+ /* setting port attribute, pause/speed etc. */
+ HIFC_FUNC_PORT = 1 << 1,
+ /* Enable SR-IOV in default */
+ HIFC_FUNC_SRIOV_EN_DFLT = 1 << 2,
+ /* Can't change VF num */
+ HIFC_FUNC_SRIOV_NUM_FIX = 1 << 3,
+	/* Force pf/vf link up */
+ HIFC_FUNC_FORCE_LINK_UP = 1 << 4,
+ /* Support rate limit */
+ HIFC_FUNC_SUPP_RATE_LIMIT = 1 << 5,
+ HIFC_FUNC_SUPP_DFX_REG = 1 << 6,
+ /* Support promisc/multicast/all-multi */
+ HIFC_FUNC_SUPP_RX_MODE = 1 << 7,
+ /* Set vf mac and vlan by ip link */
+ HIFC_FUNC_SUPP_SET_VF_MAC_VLAN = 1 << 8,
+ /* Support set mac by ifconfig */
+ HIFC_FUNC_SUPP_CHANGE_MAC = 1 << 9,
+	/* OVS doesn't support SCTP_CRC/HW_VLAN/LRO */
+ HIFC_FUNC_OFFLOAD_OVS_UNSUPP = 1 << 10,
+};
+
+#define FUNC_SUPPORT_MGMT(hwdev) \
+ (!!(hifc_get_func_feature_cap(hwdev) & HIFC_FUNC_MGMT))
+#define FUNC_SUPPORT_PORT_SETTING(hwdev) \
+ (!!(hifc_get_func_feature_cap(hwdev) & HIFC_FUNC_PORT))
+#define FUNC_SUPPORT_DCB(hwdev) \
+ (FUNC_SUPPORT_PORT_SETTING(hwdev))
+#define FUNC_ENABLE_SRIOV_IN_DEFAULT(hwdev) \
+ (!!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_SRIOV_EN_DFLT))
+#define FUNC_SRIOV_FIX_NUM_VF(hwdev) \
+ (!!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_SRIOV_NUM_FIX))
+#define FUNC_SUPPORT_RX_MODE(hwdev) \
+ (!!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_SUPP_RX_MODE))
+#define FUNC_SUPPORT_RATE_LIMIT(hwdev) \
+ (!!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_SUPP_RATE_LIMIT))
+#define FUNC_SUPPORT_SET_VF_MAC_VLAN(hwdev) \
+ (!!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_SUPP_SET_VF_MAC_VLAN))
+#define FUNC_SUPPORT_CHANGE_MAC(hwdev) \
+ (!!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_SUPP_CHANGE_MAC))
+#define FUNC_FORCE_LINK_UP(hwdev) \
+ (!!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_FORCE_LINK_UP))
+#define FUNC_SUPPORT_SCTP_CRC(hwdev) \
+ (!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_OFFLOAD_OVS_UNSUPP))
+#define FUNC_SUPPORT_HW_VLAN(hwdev) \
+ (!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_OFFLOAD_OVS_UNSUPP))
+#define FUNC_SUPPORT_LRO(hwdev) \
+ (!(hifc_get_func_feature_cap(hwdev) & \
+ HIFC_FUNC_OFFLOAD_OVS_UNSUPP))
+
+int hifc_init_hwdev(struct hifc_init_para *para);
+void hifc_free_hwdev(void *hwdev);
+int hifc_stateful_init(void *hwdev);
+void hifc_stateful_deinit(void *hwdev);
+bool hifc_is_hwdev_mod_inited(void *hwdev, enum hifc_hwdev_init_state state);
+u64 hifc_get_func_feature_cap(void *hwdev);
+int hifc_slq_init(void *dev, int num_wqs);
+void hifc_slq_uninit(void *dev);
+int hifc_slq_alloc(void *dev, u16 wqebb_size, u16 q_depth,
+ u16 page_size, u64 *cla_addr, void **handle);
+void hifc_slq_free(void *dev, void *handle);
+u64 hifc_slq_get_addr(void *handle, u16 index);
+u64 hifc_slq_get_first_pageaddr(void *handle);
+
+typedef void (*comm_up_self_msg_proc)(void *handle, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size);
+void hifc_comm_recv_mgmt_self_cmd_reg(void *hwdev, u8 cmd,
+ comm_up_self_msg_proc proc);
+void hifc_comm_recv_up_self_cmd_unreg(void *hwdev, u8 cmd);
+
+/* defined by chip */
+enum hifc_fault_type {
+ FAULT_TYPE_CHIP,
+ FAULT_TYPE_UCODE,
+ FAULT_TYPE_MEM_RD_TIMEOUT,
+ FAULT_TYPE_MEM_WR_TIMEOUT,
+ FAULT_TYPE_REG_RD_TIMEOUT,
+ FAULT_TYPE_REG_WR_TIMEOUT,
+ FAULT_TYPE_PHY_FAULT,
+ FAULT_TYPE_MAX,
+};
+
+/* defined by chip */
+enum hifc_fault_err_level {
+ /* default err_level=FAULT_LEVEL_FATAL if
+ * type==FAULT_TYPE_MEM_RD_TIMEOUT || FAULT_TYPE_MEM_WR_TIMEOUT ||
+ * FAULT_TYPE_REG_RD_TIMEOUT || FAULT_TYPE_REG_WR_TIMEOUT ||
+ * FAULT_TYPE_UCODE
+ * other: err_level in event.chip.err_level if type==FAULT_TYPE_CHIP
+ */
+ FAULT_LEVEL_FATAL,
+ FAULT_LEVEL_SERIOUS_RESET,
+ FAULT_LEVEL_SERIOUS_FLR,
+ FAULT_LEVEL_GENERAL,
+ FAULT_LEVEL_SUGGESTION,
+ FAULT_LEVEL_MAX
+};
+
+enum hifc_fault_source_type {
+ /* same as FAULT_TYPE_CHIP */
+ HIFC_FAULT_SRC_HW_MGMT_CHIP = 0,
+ /* same as FAULT_TYPE_UCODE */
+ HIFC_FAULT_SRC_HW_MGMT_UCODE,
+ /* same as FAULT_TYPE_MEM_RD_TIMEOUT */
+ HIFC_FAULT_SRC_HW_MGMT_MEM_RD_TIMEOUT,
+ /* same as FAULT_TYPE_MEM_WR_TIMEOUT */
+ HIFC_FAULT_SRC_HW_MGMT_MEM_WR_TIMEOUT,
+ /* same as FAULT_TYPE_REG_RD_TIMEOUT */
+ HIFC_FAULT_SRC_HW_MGMT_REG_RD_TIMEOUT,
+ /* same as FAULT_TYPE_REG_WR_TIMEOUT */
+ HIFC_FAULT_SRC_HW_MGMT_REG_WR_TIMEOUT,
+ HIFC_FAULT_SRC_SW_MGMT_UCODE,
+ HIFC_FAULT_SRC_MGMT_WATCHDOG,
+ HIFC_FAULT_SRC_MGMT_RESET = 8,
+ HIFC_FAULT_SRC_HW_PHY_FAULT,
+ HIFC_FAULT_SRC_HOST_HEARTBEAT_LOST = 20,
+ HIFC_FAULT_SRC_TYPE_MAX,
+};
+
+struct hifc_fault_sw_mgmt {
+ u8 event_id;
+ u64 event_data;
+};
+
+union hifc_fault_hw_mgmt {
+ u32 val[4];
+ /* valid only type==FAULT_TYPE_CHIP */
+ struct {
+ u8 node_id;
+ /* enum hifc_fault_err_level */
+ u8 err_level;
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+ /* func_id valid only err_level==FAULT_LEVEL_SERIOUS_FLR
+ */
+ u16 func_id;
+ u16 rsvd2;
+ } chip;
+
+ /* valid only type==FAULT_TYPE_UCODE */
+ struct {
+ u8 cause_id;
+ u8 core_id;
+ u8 c_id;
+ u8 rsvd3;
+ u32 epc;
+ u32 rsvd4;
+ u32 rsvd5;
+ } ucode;
+
+ /* valid only type==FAULT_TYPE_MEM_RD_TIMEOUT ||
+ * FAULT_TYPE_MEM_WR_TIMEOUT
+ */
+ struct {
+ u32 err_csr_ctrl;
+ u32 err_csr_data;
+ u32 ctrl_tab;
+ u32 mem_index;
+ } mem_timeout;
+
+ /* valid only type==FAULT_TYPE_REG_RD_TIMEOUT ||
+ * FAULT_TYPE_REG_WR_TIMEOUT
+ */
+ struct {
+ u32 err_csr;
+ u32 rsvd6;
+ u32 rsvd7;
+ u32 rsvd8;
+ } reg_timeout;
+
+ struct {
+ /* 0: read; 1: write */
+ u8 op_type;
+ u8 port_id;
+ u8 dev_ad;
+ u8 rsvd9;
+ u32 csr_addr;
+ u32 op_data;
+ u32 rsvd10;
+ } phy_fault;
+};
+
+/* defined by chip */
+struct hifc_fault_event {
+ /* enum hifc_fault_type */
+ u8 type;
+ u8 rsvd0[3];
+ union hifc_fault_hw_mgmt event;
+};
+
+struct hifc_fault_recover_info {
+ u8 fault_src; /* enum hifc_fault_source_type */
+ u8 fault_lev; /* enum hifc_fault_err_level */
+ u8 rsvd0[2];
+ union {
+ union hifc_fault_hw_mgmt hw_mgmt;
+ struct hifc_fault_sw_mgmt sw_mgmt;
+ u32 mgmt_rsvd[4];
+ u32 host_rsvd[4];
+ } fault_data;
+};
+
+struct hifc_dcb_state {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 up_cos[8];
+};
+
+enum link_err_type {
+ LINK_ERR_MODULE_UNRECOGENIZED,
+ LINK_ERR_NUM,
+};
+
+enum port_module_event_type {
+ HIFC_PORT_MODULE_CABLE_PLUGGED,
+ HIFC_PORT_MODULE_CABLE_UNPLUGGED,
+ HIFC_PORT_MODULE_LINK_ERR,
+ HIFC_PORT_MODULE_MAX_EVENT,
+};
+
+struct hifc_port_module_event {
+ enum port_module_event_type type;
+ enum link_err_type err_type;
+};
+
+struct hifc_event_link_info {
+ u8 valid;
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+};
+
+struct hifc_mctp_host_info {
+ u8 major_cmd;
+ u8 sub_cmd;
+ u8 rsvd[2];
+
+ u32 data_len;
+ void *data;
+};
+
+enum hifc_event_type {
+ HIFC_EVENT_LINK_DOWN = 0,
+ HIFC_EVENT_LINK_UP = 1,
+ HIFC_EVENT_HEART_LOST = 2,
+ HIFC_EVENT_FAULT = 3,
+ HIFC_EVENT_NOTIFY_VF_DCB_STATE = 4,
+ HIFC_EVENT_DCB_STATE_CHANGE = 5,
+ HIFC_EVENT_FMW_ACT_NTC = 6,
+ HIFC_EVENT_PORT_MODULE_EVENT = 7,
+ HIFC_EVENT_MCTP_GET_HOST_INFO,
+ HIFC_EVENT_MULTI_HOST_MGMT,
+ HIFC_EVENT_INIT_MIGRATE_PF,
+};
+
+struct hifc_event_info {
+ enum hifc_event_type type;
+ union {
+ struct hifc_event_link_info link_info;
+ struct hifc_fault_event info;
+ struct hifc_dcb_state dcb_state;
+ struct hifc_port_module_event module_event;
+ u8 vf_default_cos;
+ struct hifc_mctp_host_info mctp_info;
+ };
+};
+
+enum hifc_ucode_event_type {
+ HIFC_INTERNAL_TSO_FATAL_ERROR = 0x0,
+ HIFC_INTERNAL_LRO_FATAL_ERROR = 0x1,
+ HIFC_INTERNAL_TX_FATAL_ERROR = 0x2,
+ HIFC_INTERNAL_RX_FATAL_ERROR = 0x3,
+ HIFC_INTERNAL_OTHER_FATAL_ERROR = 0x4,
+ HIFC_NIC_FATAL_ERROR_MAX = 0x8,
+};
+
+typedef void (*hifc_event_handler)(void *handle,
+ struct hifc_event_info *event);
+/* only register once */
+void hifc_event_register(void *dev, void *pri_handle,
+ hifc_event_handler callback);
+void hifc_event_unregister(void *dev);
+
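+/*
+ * A minimal sketch of registering the single global event handler declared
+ * above. The handler and the private handle are illustrative only:
+ *
+ *	static void demo_event_handler(void *handle, struct hifc_event_info *event)
+ *	{
+ *		if (event->type == HIFC_EVENT_LINK_UP)
+ *			pr_info("link up\n");
+ *	}
+ *
+ *	hifc_event_register(dev, priv, demo_event_handler);
+ *	...
+ *	hifc_event_unregister(dev);
+ */
+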
+void hifc_detect_hw_present(void *hwdev);
+
+void hifc_set_chip_absent(void *hwdev);
+
+int hifc_get_chip_present_flag(void *hwdev);
+
+void hifc_set_pcie_order_cfg(void *handle);
+
+int hifc_get_mgmt_channel_status(void *handle);
+
+struct hifc_board_info {
+ u32 board_type;
+ u32 port_num;
+ u32 port_speed;
+ u32 pcie_width;
+ u32 host_num;
+ u32 pf_num;
+ u32 vf_total_num;
+ u32 tile_num;
+ u32 qcm_num;
+ u32 core_num;
+ u32 work_mode;
+ u32 service_mode;
+ u32 pcie_mode;
+ u32 cfg_addr;
+ u32 boot_sel;
+ u32 board_id;
+};
+
+int hifc_get_board_info(void *hwdev, struct hifc_board_info *info);
+
+int hifc_get_card_present_state(void *hwdev, bool *card_present_state);
+
+#endif
diff --git a/drivers/scsi/huawei/hifc/hifc_hwdev.c b/drivers/scsi/huawei/hifc/hifc_hwdev.c
new file mode 100644
index 000000000000..760e02394b05
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_hwdev.c
@@ -0,0 +1,3675 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/pci_regs.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/completion.h>
+#include <linux/semaphore.h>
+#include <linux/interrupt.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwdev.h"
+#include "hifc_hwif.h"
+#include "hifc_api_cmd.h"
+#include "hifc_mgmt.h"
+#include "hifc_eqs.h"
+#include "hifc_wq.h"
+#include "hifc_cmdq.h"
+#include "hifc_hwif.h"
+
+#define HIFC_DEAULT_EQ_MSIX_PENDING_LIMIT 0
+#define HIFC_DEAULT_EQ_MSIX_COALESC_TIMER_CFG 0xFF
+#define HIFC_DEAULT_EQ_MSIX_RESEND_TIMER_CFG 7
+#define HIFC_FLR_TIMEOUT 1000
+#define HIFC_HT_GPA_PAGE_SIZE 4096UL
+#define HIFC_PPF_HT_GPA_SET_RETRY_TIMES 10
+#define HIFC_GET_SFP_INFO_REAL_TIME 0x1
+#define HIFC_GLB_SO_RO_CFG_SHIFT 0x0
+#define HIFC_GLB_SO_RO_CFG_MASK 0x1
+#define HIFC_DISABLE_ORDER 0
+#define HIFC_GLB_DMA_SO_RO_GET(val, member) \
+ (((val) >> HIFC_GLB_##member##_SHIFT) & HIFC_GLB_##member##_MASK)
+
+#define HIFC_GLB_DMA_SO_R0_CLEAR(val, member) \
+ ((val) & (~(HIFC_GLB_##member##_MASK << HIFC_GLB_##member##_SHIFT)))
+
+#define HIFC_GLB_DMA_SO_R0_SET(val, member) \
+ (((val) & HIFC_GLB_##member##_MASK) << HIFC_GLB_##member##_SHIFT)
+
+#define HIFC_MGMT_CHANNEL_STATUS_SHIFT 0x0
+#define HIFC_MGMT_CHANNEL_STATUS_MASK 0x1
+#define HIFC_ACTIVE_STATUS_MASK 0x80000000
+#define HIFC_ACTIVE_STATUS_CLEAR 0x7FFFFFFF
+
+#define HIFC_GET_MGMT_CHANNEL_STATUS(val, member) \
+ (((val) >> HIFC_##member##_SHIFT) & HIFC_##member##_MASK)
+
+#define HIFC_CLEAR_MGMT_CHANNEL_STATUS(val, member) \
+ ((val) & (~(HIFC_##member##_MASK << HIFC_##member##_SHIFT)))
+
+#define HIFC_SET_MGMT_CHANNEL_STATUS(val, member) \
+ (((val) & HIFC_##member##_MASK) << HIFC_##member##_SHIFT)
+
+#define HIFC_BOARD_IS_PHY(hwdev) \
+ ((hwdev)->board_info.board_type == 4 && \
+ (hwdev)->board_info.board_id == 24)
+
+struct comm_info_ht_gpa_set {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+ u32 rsvd1;
+ u32 rsvd2;
+ u64 page_pa0;
+ u64 page_pa1;
+};
+
+struct hifc_cons_idx_attr {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u32 sq_id;
+ u64 ci_addr;
+};
+
+struct hifc_clear_doorbell {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 ppf_idx;
+ u8 rsvd1;
+};
+
+struct hifc_clear_resource {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 ppf_idx;
+ u8 rsvd1;
+};
+
+struct hifc_msix_config {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 msix_index;
+ u8 pending_cnt;
+ u8 coalesct_timer_cnt;
+ u8 lli_tmier_cnt;
+ u8 lli_credit_cnt;
+ u8 resend_timer_cnt;
+ u8 rsvd1[3];
+};
+
+enum func_tmr_bitmap_status {
+ FUNC_TMR_BITMAP_DISABLE,
+ FUNC_TMR_BITMAP_ENABLE,
+};
+
+struct hifc_func_tmr_bitmap_op {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 op_id; /* 0:start; 1:stop */
+ u8 ppf_idx;
+ u32 rsvd1;
+};
+
+struct hifc_ppf_tmr_op {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 ppf_idx;
+ u8 op_id; /* 0: stop timer; 1:start timer */
+ u8 rsvd1[2];
+ u32 rsvd2;
+};
+
+struct hifc_cmd_set_res_state {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 state;
+ u8 rsvd1;
+ u32 rsvd2;
+};
+
+int hifc_hw_rx_buf_size[] = {
+ HIFC_RX_BUF_SIZE_32B,
+ HIFC_RX_BUF_SIZE_64B,
+ HIFC_RX_BUF_SIZE_96B,
+ HIFC_RX_BUF_SIZE_128B,
+ HIFC_RX_BUF_SIZE_192B,
+ HIFC_RX_BUF_SIZE_256B,
+ HIFC_RX_BUF_SIZE_384B,
+ HIFC_RX_BUF_SIZE_512B,
+ HIFC_RX_BUF_SIZE_768B,
+ HIFC_RX_BUF_SIZE_1K,
+ HIFC_RX_BUF_SIZE_1_5K,
+ HIFC_RX_BUF_SIZE_2K,
+ HIFC_RX_BUF_SIZE_3K,
+ HIFC_RX_BUF_SIZE_4K,
+ HIFC_RX_BUF_SIZE_8K,
+ HIFC_RX_BUF_SIZE_16K,
+};
+
+struct hifc_comm_board_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ struct hifc_board_info info;
+
+ u32 rsvd1[4];
+};
+
+#define PHY_DOING_INIT_TIMEOUT (15 * 1000)
+
+struct hifc_phy_init_status {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 init_status;
+ u8 rsvd1[3];
+};
+
+enum phy_init_status_type {
+ PHY_INIT_DOING = 0,
+ PHY_INIT_SUCCESS = 1,
+ PHY_INIT_FAIL = 2,
+ PHY_NONSUPPORT = 3,
+};
+
+struct hifc_update_active {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u32 update_flag;
+ u32 update_status;
+};
+
+struct hifc_mgmt_watchdog_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u32 curr_time_h;
+ u32 curr_time_l;
+ u32 task_id;
+ u32 rsv;
+
+ u32 reg[13];
+ u32 pc;
+ u32 lr;
+ u32 cpsr;
+
+ u32 stack_top;
+ u32 stack_bottom;
+ u32 sp;
+ u32 curr_used;
+ u32 peak_used;
+ u32 is_overflow;
+
+ u32 stack_actlen;
+ u8 data[1024];
+};
+
+struct hifc_fmw_act_ntc {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u32 rsvd1[5];
+};
+
+#define HIFC_PAGE_SIZE_HW(pg_size) ((u8)ilog2((u32)((pg_size) >> 12)))
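+/* HIFC_PAGE_SIZE_HW() encodes a byte page size as log2(size / 4KB), e.g.
+ * 4KB -> 0, 8KB -> 1, 64KB -> 4, 256KB -> 6; this encoded value is what
+ * gets written into hifc_wq_page_size.page_size below.
+ */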
+
+struct hifc_wq_page_size {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 ppf_idx;
+ /* real_size = 4KB * 2^page_size; the range (0~20) must be validated by the driver */
+ u8 page_size;
+
+ u32 rsvd1;
+};
+
+#define MAX_PCIE_DFX_BUF_SIZE (1024)
+
+struct hifc_pcie_dfx_ntc {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ int len;
+ u32 rsvd;
+};
+
+struct hifc_pcie_dfx_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 host_id;
+ u8 last;
+ u8 rsvd[2];
+ u32 offset;
+
+ u8 data[MAX_PCIE_DFX_BUF_SIZE];
+};
+
+struct hifc_reg_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u32 reg_addr;
+ u32 val_length;
+
+ u32 data[2];
+};
+
+#define HIFC_DMA_ATTR_ENTRY_ST_SHIFT 0
+#define HIFC_DMA_ATTR_ENTRY_AT_SHIFT 8
+#define HIFC_DMA_ATTR_ENTRY_PH_SHIFT 10
+#define HIFC_DMA_ATTR_ENTRY_NO_SNOOPING_SHIFT 12
+#define HIFC_DMA_ATTR_ENTRY_TPH_EN_SHIFT 13
+
+#define HIFC_DMA_ATTR_ENTRY_ST_MASK 0xFF
+#define HIFC_DMA_ATTR_ENTRY_AT_MASK 0x3
+#define HIFC_DMA_ATTR_ENTRY_PH_MASK 0x3
+#define HIFC_DMA_ATTR_ENTRY_NO_SNOOPING_MASK 0x1
+#define HIFC_DMA_ATTR_ENTRY_TPH_EN_MASK 0x1
+
+#define HIFC_DMA_ATTR_ENTRY_SET(val, member) \
+ (((u32)(val) & HIFC_DMA_ATTR_ENTRY_##member##_MASK) << \
+ HIFC_DMA_ATTR_ENTRY_##member##_SHIFT)
+
+#define HIFC_DMA_ATTR_ENTRY_CLEAR(val, member) \
+ ((val) & (~(HIFC_DMA_ATTR_ENTRY_##member##_MASK \
+ << HIFC_DMA_ATTR_ENTRY_##member##_SHIFT)))
+
+#define HIFC_PCIE_ST_DISABLE 0
+#define HIFC_PCIE_AT_DISABLE 0
+#define HIFC_PCIE_PH_DISABLE 0
+
+#define PCIE_MSIX_ATTR_ENTRY 0
+
+#define HIFC_CHIP_PRESENT 1
+#define HIFC_CHIP_ABSENT 0
+
+struct hifc_cmd_fault_event {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ struct hifc_fault_event event;
+};
+
+#define HEARTBEAT_DRV_MAGIC_ACK 0x5A5A5A5A
+
+struct hifc_heartbeat_event {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 mgmt_init_state;
+ u8 rsvd1[3];
+ u32 heart; /* increased every event */
+ u32 drv_heart;
+};
+
+static void hifc_set_mgmt_channel_status(void *handle, bool state)
+{
+ struct hifc_hwdev *hwdev = handle;
+ u32 val;
+
+ if (!hwdev || hifc_func_type(hwdev) == TYPE_VF ||
+ !(hwdev->feature_cap & HIFC_FUNC_SUPP_DFX_REG))
+ return;
+
+ val = hifc_hwif_read_reg(hwdev->hwif, HIFC_ICPL_RESERVD_ADDR);
+ val = HIFC_CLEAR_MGMT_CHANNEL_STATUS(val, MGMT_CHANNEL_STATUS);
+ val |= HIFC_SET_MGMT_CHANNEL_STATUS((u32)state, MGMT_CHANNEL_STATUS);
+
+ hifc_hwif_write_reg(hwdev->hwif, HIFC_ICPL_RESERVD_ADDR, val);
+}
+
+static void hifc_enable_mgmt_channel(void *hwdev, void *buf_out)
+{
+ struct hifc_hwdev *dev = hwdev;
+ struct hifc_update_active *active_info = buf_out;
+
+ if (!active_info || hifc_func_type(hwdev) == TYPE_VF ||
+ !(dev->feature_cap & HIFC_FUNC_SUPP_DFX_REG))
+ return;
+
+ if ((!active_info->status) &&
+ (active_info->update_status & HIFC_ACTIVE_STATUS_MASK)) {
+ active_info->update_status &= HIFC_ACTIVE_STATUS_CLEAR;
+ return;
+ }
+
+ hifc_set_mgmt_channel_status(hwdev, false);
+}
+
+int hifc_set_wq_page_size(struct hifc_hwdev *hwdev, u16 func_idx,
+ u32 page_size);
+
+#define HIFC_QUEUE_MIN_DEPTH 6
+#define HIFC_QUEUE_MAX_DEPTH 12
+#define HIFC_MAX_RX_BUFFER_SIZE 15
+
+#define ROOT_CTX_QPS_VALID(root_ctxt) \
+ ((root_ctxt)->rq_depth >= HIFC_QUEUE_MIN_DEPTH && \
+ (root_ctxt)->rq_depth <= HIFC_QUEUE_MAX_DEPTH && \
+ (root_ctxt)->sq_depth >= HIFC_QUEUE_MIN_DEPTH && \
+ (root_ctxt)->sq_depth <= HIFC_QUEUE_MAX_DEPTH && \
+ (root_ctxt)->rx_buf_sz <= HIFC_MAX_RX_BUFFER_SIZE)
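+/* Depths in the root context are stored as log2 values (hifc_set_root_ctxt()
+ * applies ilog2()), so 6..12 corresponds to 64..4096 entries; rx_buf_sz is
+ * the hardware index into hifc_hw_rx_buf_size[] above, hence the 0..15 range.
+ */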
+
+struct hifc_mgmt_status_log {
+ u8 status;
+ const char *log;
+};
+
+struct hifc_mgmt_status_log mgmt_status_log[] = {
+ {HIFC_MGMT_STATUS_ERR_PARAM, "Invalid parameter"},
+ {HIFC_MGMT_STATUS_ERR_FAILED, "Operation failed"},
+ {HIFC_MGMT_STATUS_ERR_PORT, "Invalid port"},
+ {HIFC_MGMT_STATUS_ERR_TIMEOUT, "Operation timed out"},
+ {HIFC_MGMT_STATUS_ERR_NOMATCH, "Version mismatch"},
+ {HIFC_MGMT_STATUS_ERR_EXIST, "Entry exists"},
+ {HIFC_MGMT_STATUS_ERR_NOMEM, "Out of memory"},
+ {HIFC_MGMT_STATUS_ERR_INIT, "Feature not initialized"},
+ {HIFC_MGMT_STATUS_ERR_FAULT, "Invalid address"},
+ {HIFC_MGMT_STATUS_ERR_PERM, "Operation not permitted"},
+ {HIFC_MGMT_STATUS_ERR_EMPTY, "Table empty"},
+ {HIFC_MGMT_STATUS_ERR_FULL, "Table full"},
+ {HIFC_MGMT_STATUS_ERR_NOT_FOUND, "Not found"},
+ {HIFC_MGMT_STATUS_ERR_BUSY, "Device or resource busy"},
+ {HIFC_MGMT_STATUS_ERR_RESOURCE, "No resources for operation"},
+ {HIFC_MGMT_STATUS_ERR_CONFIG, "Invalid configuration"},
+ {HIFC_MGMT_STATUS_ERR_UNAVAIL, "Feature unavailable"},
+ {HIFC_MGMT_STATUS_ERR_CRC, "CRC check failed"},
+ {HIFC_MGMT_STATUS_ERR_NXIO, "No such device or address"},
+ {HIFC_MGMT_STATUS_ERR_ROLLBACK, "Chip rollback fail"},
+ {HIFC_MGMT_STATUS_ERR_LEN, "Length too short or too long"},
+ {HIFC_MGMT_STATUS_ERR_UNSUPPORT, "Feature not supported"},
+};
+
+static void __print_status_info(struct hifc_hwdev *dev,
+ enum hifc_mod_type mod, u8 cmd, int index)
+{
+ if (mod == HIFC_MOD_COMM) {
+ sdk_err(dev->dev_hdl, "Mgmt process mod(0x%x) cmd(0x%x) fail: %s",
+ mod, cmd, mgmt_status_log[index].log);
+ } else if (mod == HIFC_MOD_L2NIC ||
+ mod == HIFC_MOD_HILINK) {
+ sdk_err(dev->dev_hdl, "Mgmt process mod(0x%x) cmd(0x%x) fail: %s",
+ mod, cmd, mgmt_status_log[index].log);
+ }
+}
+
+static bool hifc_status_need_special_handle(enum hifc_mod_type mod,
+ u8 cmd, u8 status)
+{
+ if (mod == HIFC_MOD_L2NIC) {
+ /* optical module isn't plugged in */
+ if (((cmd == HIFC_PORT_CMD_GET_STD_SFP_INFO) ||
+ (cmd == HIFC_PORT_CMD_GET_SFP_INFO)) &&
+ (status == HIFC_MGMT_STATUS_ERR_NXIO))
+ return true;
+
+ if ((cmd == HIFC_PORT_CMD_SET_MAC ||
+ cmd == HIFC_PORT_CMD_UPDATE_MAC) &&
+ status == HIFC_MGMT_STATUS_ERR_EXIST)
+ return true;
+ }
+
+ return false;
+}
+
+static bool print_status_info_valid(enum hifc_mod_type mod,
+ const void *buf_out)
+{
+ if (!buf_out)
+ return false;
+
+ if (mod != HIFC_MOD_COMM && mod != HIFC_MOD_L2NIC &&
+ mod != HIFC_MOD_HILINK)
+ return false;
+
+ return true;
+}
+
+static void hifc_print_status_info(void *hwdev, enum hifc_mod_type mod,
+ u8 cmd, const void *buf_out)
+{
+ struct hifc_hwdev *dev = hwdev;
+ int i, size;
+ u8 status;
+
+ if (!print_status_info_valid(mod, buf_out))
+ return;
+
+ status = *(u8 *)buf_out;
+
+ if (!status)
+ return;
+
+ if (hifc_status_need_special_handle(mod, cmd, status))
+ return;
+
+ size = sizeof(mgmt_status_log) / sizeof(mgmt_status_log[0]);
+ for (i = 0; i < size; i++) {
+ if (status == mgmt_status_log[i].status) {
+ __print_status_info(dev, mod, cmd, i);
+ return;
+ }
+ }
+
+ if (mod == HIFC_MOD_COMM) {
+ sdk_err(dev->dev_hdl, "Mgmt process mod(0x%x) cmd(0x%x) return driver unknown status(0x%x)\n",
+ mod, cmd, status);
+ } else if (mod == HIFC_MOD_L2NIC || mod == HIFC_MOD_HILINK) {
+ sdk_err(dev->dev_hdl, "Mgmt process mod(0x%x) cmd(0x%x) return driver unknown status(0x%x)\n",
+ mod, cmd, status);
+ }
+}
+
+void hifc_set_chip_present(void *hwdev)
+{
+ ((struct hifc_hwdev *)hwdev)->chip_present_flag = HIFC_CHIP_PRESENT;
+}
+
+void hifc_set_chip_absent(void *hwdev)
+{
+ struct hifc_hwdev *dev = hwdev;
+
+ sdk_err(dev->dev_hdl, "Card not present\n");
+ dev->chip_present_flag = HIFC_CHIP_ABSENT;
+}
+
+int hifc_get_chip_present_flag(void *hwdev)
+{
+ int flag;
+
+ if (!hwdev)
+ return -EINVAL;
+ flag = ((struct hifc_hwdev *)hwdev)->chip_present_flag;
+ return flag;
+}
+
+void hifc_force_complete_all(void *hwdev)
+{
+ struct hifc_hwdev *dev = (struct hifc_hwdev *)hwdev;
+ struct hifc_recv_msg *recv_resp_msg;
+
+ set_bit(HIFC_HWDEV_STATE_BUSY, &dev->func_state);
+
+ if (hifc_func_type(dev) != TYPE_VF &&
+ hifc_is_hwdev_mod_inited(dev, HIFC_HWDEV_MGMT_INITED)) {
+ recv_resp_msg = &dev->pf_to_mgmt->recv_resp_msg_from_mgmt;
+ if (dev->pf_to_mgmt->event_flag == SEND_EVENT_START) {
+ complete(&recv_resp_msg->recv_done);
+ dev->pf_to_mgmt->event_flag = SEND_EVENT_TIMEOUT;
+ }
+ }
+
+ /* only flush sync cmdq to avoid blocking remove */
+ if (hifc_is_hwdev_mod_inited(dev, HIFC_HWDEV_CMDQ_INITED))
+ hifc_cmdq_flush_cmd(hwdev,
+ &dev->cmdqs->cmdq[HIFC_CMDQ_SYNC]);
+
+ clear_bit(HIFC_HWDEV_STATE_BUSY, &dev->func_state);
+}
+
+void hifc_detect_hw_present(void *hwdev)
+{
+ u32 addr, attr1;
+
+ addr = HIFC_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hifc_hwif_read_reg(((struct hifc_hwdev *)hwdev)->hwif, addr);
+ if (attr1 == HIFC_PCIE_LINK_DOWN) {
+ hifc_set_chip_absent(hwdev);
+ hifc_force_complete_all(hwdev);
+ }
+}
+
+void hifc_record_pcie_error(void *hwdev)
+{
+ struct hifc_hwdev *dev = (struct hifc_hwdev *)hwdev;
+
+ if (!hwdev)
+ return;
+
+ atomic_inc(&dev->hw_stats.fault_event_stats.pcie_fault_stats);
+}
+
+static inline void __set_heartbeat_ehd_detect_delay(struct hifc_hwdev *hwdev,
+ u32 delay_ms)
+{
+ hwdev->heartbeat_ehd.start_detect_jiffies =
+ jiffies + msecs_to_jiffies(delay_ms);
+}
+
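+/* Gating around firmware activation: the pre-handler below marks the mgmt
+ * channel busy before an HIFC_MGMT_CMD_ACTIVATE_FW command goes out (other
+ * senders then get HIFC_DEV_BUSY_ACTIVE_FW or -EBUSY); the post-handler
+ * re-opens the channel if the send itself failed, otherwise it calls
+ * hifc_enable_mgmt_channel(), which keeps the channel closed while the
+ * response apparently still reports the activation as in progress.
+ */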
+static int __pf_to_mgmt_pre_handle(struct hifc_hwdev *hwdev,
+ enum hifc_mod_type mod, u8 cmd)
+{
+ if (hifc_get_mgmt_channel_status(hwdev)) {
+ if (mod == HIFC_MOD_COMM || mod == HIFC_MOD_L2NIC)
+ return HIFC_DEV_BUSY_ACTIVE_FW;
+ else
+ return -EBUSY;
+ }
+
+ /* Mark the channel invalid; no other commands are allowed to be sent */
+ if (mod == HIFC_MOD_COMM && cmd == HIFC_MGMT_CMD_ACTIVATE_FW) {
+ hifc_set_mgmt_channel_status(hwdev, true);
+ /* stop enhanced heartbeat detection temporarily; it will be
+ * restarted by the firmware-active event once the mgmt CPU
+ * has been reset
+ */
+ __set_heartbeat_ehd_detect_delay(hwdev,
+ HIFC_DEV_ACTIVE_FW_TIMEOUT);
+ }
+
+ return 0;
+}
+
+static void __pf_to_mgmt_after_handle(struct hifc_hwdev *hwdev,
+ enum hifc_mod_type mod, u8 cmd,
+ int sw_status, void *mgmt_status)
+{
+ /* if activating the firmware failed, mark the channel valid again */
+ if (mod == HIFC_MOD_COMM &&
+ cmd == HIFC_MGMT_CMD_ACTIVATE_FW) {
+ if (sw_status)
+ hifc_set_mgmt_channel_status(hwdev, false);
+ else
+ hifc_enable_mgmt_channel(hwdev, mgmt_status);
+ }
+}
+
+int hifc_pf_msg_to_mgmt_sync(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout)
+{
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!((struct hifc_hwdev *)hwdev)->chip_present_flag)
+ return -EPERM;
+
+ if (!hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_MGMT_INITED))
+ return -EPERM;
+
+ if (in_size > HIFC_MSG_TO_MGMT_MAX_LEN)
+ return -EINVAL;
+
+ err = __pf_to_mgmt_pre_handle(hwdev, mod, cmd);
+ if (err)
+ return err;
+
+ err = hifc_pf_to_mgmt_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+ __pf_to_mgmt_after_handle(hwdev, mod, cmd, err, buf_out);
+
+ return err;
+}
+
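+/* SFP query fast path: the two helpers below serve GET_SFP_INFO /
+ * GET_SFP_ABS requests from the per-port cache in chip_node->rt_cmd
+ * (protected by sfp_mutex) instead of querying the mgmt CPU, unless the
+ * caller asks for real-time data (version == HIFC_GET_SFP_INFO_REAL_TIME).
+ */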
+static bool is_sfp_info_cmd_cached(struct hifc_hwdev *hwdev,
+ enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_cmd_get_sfp_qsfp_info *sfp_info;
+ struct hifc_port_routine_cmd *rt_cmd;
+ struct card_node *chip_node = hwdev->chip_node;
+
+ sfp_info = buf_in;
+ if (!chip_node->rt_cmd || sfp_info->port_id >= HIFC_MAX_PORT_ID ||
+ *out_size < sizeof(*sfp_info))
+ return false;
+
+ if (sfp_info->version == HIFC_GET_SFP_INFO_REAL_TIME)
+ return false;
+
+ rt_cmd = &chip_node->rt_cmd[sfp_info->port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ memcpy(buf_out, &rt_cmd->sfp_info, sizeof(*sfp_info));
+ mutex_unlock(&chip_node->sfp_mutex);
+
+ return true;
+}
+
+static bool is_sfp_abs_cmd_cached(struct hifc_hwdev *hwdev,
+ enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_cmd_get_light_module_abs *abs;
+ struct hifc_port_routine_cmd *rt_cmd;
+ struct card_node *chip_node = hwdev->chip_node;
+
+ abs = buf_in;
+ if (!chip_node->rt_cmd || abs->port_id >= HIFC_MAX_PORT_ID ||
+ *out_size < sizeof(*abs))
+ return false;
+
+ if (abs->version == HIFC_GET_SFP_INFO_REAL_TIME)
+ return false;
+
+ rt_cmd = &chip_node->rt_cmd[abs->port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ memcpy(buf_out, &rt_cmd->abs, sizeof(*abs));
+ mutex_unlock(&chip_node->sfp_mutex);
+
+ return true;
+}
+
+static bool driver_processed_cmd(struct hifc_hwdev *hwdev,
+ enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct card_node *chip_node = hwdev->chip_node;
+
+ if (mod == HIFC_MOD_L2NIC) {
+ if (cmd == HIFC_PORT_CMD_GET_SFP_INFO &&
+ chip_node->rt_cmd->up_send_sfp_info) {
+ return is_sfp_info_cmd_cached(hwdev, mod, cmd, buf_in,
+ in_size, buf_out,
+ out_size);
+ } else if (cmd == HIFC_PORT_CMD_GET_SFP_ABS &&
+ chip_node->rt_cmd->up_send_sfp_abs) {
+ return is_sfp_abs_cmd_cached(hwdev, mod, cmd, buf_in,
+ in_size, buf_out,
+ out_size);
+ }
+ }
+
+ return false;
+}
+
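+/* Synchronous send path: wait (up to HIFC_DEV_ACTIVE_FW_TIMEOUT, polling
+ * once per second) for the mgmt channel to become available again, answer
+ * the request from the driver cache when possible, and only then forward it
+ * to the mgmt CPU.
+ */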
+static int send_sync_mgmt_msg(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout)
+{
+ unsigned long end;
+
+ end = jiffies + msecs_to_jiffies(HIFC_DEV_ACTIVE_FW_TIMEOUT);
+ do {
+ if (!hifc_get_mgmt_channel_status(hwdev) ||
+ !hifc_get_chip_present_flag(hwdev))
+ break;
+
+ msleep(1000);
+ } while (time_before(jiffies, end));
+
+ if (driver_processed_cmd(hwdev, mod, cmd, buf_in, in_size, buf_out,
+ out_size))
+ return 0;
+
+ return hifc_pf_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+}
+
+int hifc_msg_to_mgmt_sync(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout)
+{
+ struct hifc_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(dev->chip_present_flag))
+ return -EPERM;
+
+ err = send_sync_mgmt_msg(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+
+ hifc_print_status_info(hwdev, mod, cmd, buf_out);
+
+ return err;
+}
+
+/* PF/VF sends a message to the uP via API cmd and returns immediately */
+int hifc_msg_to_mgmt_async(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size)
+{
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag) ||
+ !hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_MGMT_INITED) ||
+ hifc_get_mgmt_channel_status(hwdev))
+ return -EPERM;
+
+ if (hifc_func_type(hwdev) == TYPE_VF) {
+ err = -EFAULT;
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Mailbox don't support async cmd\n");
+ } else {
+ err = hifc_pf_to_mgmt_async(hwdev, mod, cmd, buf_in, in_size);
+ }
+
+ return err;
+}
+
+int hifc_msg_to_mgmt_no_ack(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size)
+{
+ struct hifc_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(dev->chip_present_flag))
+ return -EPERM;
+
+ err = hifc_pf_to_mgmt_no_ack(hwdev, mod, cmd, buf_in, in_size);
+
+ return err;
+}
+
+/**
+ * hifc_cpu_to_be32 - convert data to big endian 32 bit format
+ * @data: the data to convert
+ * @len: length of the data to convert, must be a multiple of 4 bytes
+ **/
+void hifc_cpu_to_be32(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ u32 *mem = data;
+
+ if (!data)
+ return;
+
+ len = len / chunk_sz;
+
+ for (i = 0; i < len; i++) {
+ *mem = cpu_to_be32(*mem);
+ mem++;
+ }
+}
+
+/**
+ * hifc_be32_to_cpu - convert data from big endian 32 bit format to cpu order
+ * @data: the data to convert
+ * @len: length of the data to convert, must be a multiple of 4 bytes
+ **/
+void hifc_be32_to_cpu(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ u32 *mem = data;
+
+ if (!data)
+ return;
+
+ len = len / chunk_sz;
+
+ for (i = 0; i < len; i++) {
+ *mem = be32_to_cpu(*mem);
+ mem++;
+ }
+}
+
+/**
+ * hifc_set_sge - set dma area in scatter gather entry
+ * @sge: scatter gather entry
+ * @addr: dma address
+ * @len: length of relevant data in the dma address
+ **/
+void hifc_set_sge(struct hifc_sge *sge, dma_addr_t addr, u32 len)
+{
+ sge->hi_addr = upper_32_bits(addr);
+ sge->lo_addr = lower_32_bits(addr);
+ sge->len = len;
+}
+
+int hifc_set_ci_table(void *hwdev, u16 q_id, struct hifc_sq_attr *attr)
+{
+ struct hifc_cons_idx_attr cons_idx_attr = {0};
+ u16 out_size = sizeof(cons_idx_attr);
+ int err;
+
+ if (!hwdev || !attr)
+ return -EINVAL;
+
+ err = hifc_global_func_id_get(hwdev, &cons_idx_attr.func_idx);
+ if (err)
+ return err;
+
+ cons_idx_attr.dma_attr_off = attr->dma_attr_off;
+ cons_idx_attr.pending_limit = attr->pending_limit;
+ cons_idx_attr.coalescing_time = attr->coalescing_time;
+
+ if (attr->intr_en) {
+ cons_idx_attr.intr_en = attr->intr_en;
+ cons_idx_attr.intr_idx = attr->intr_idx;
+ }
+
+ cons_idx_attr.l2nic_sqn = attr->l2nic_sqn;
+ cons_idx_attr.sq_id = q_id;
+
+ cons_idx_attr.ci_addr = attr->ci_dma_base;
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_L2NIC_SQ_CI_ATTR_SET,
+ &cons_idx_attr, sizeof(cons_idx_attr),
+ &cons_idx_attr, &out_size, 0);
+ if (err || !out_size || cons_idx_attr.status) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Failed to set ci attribute table, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cons_idx_attr.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hifc_set_cmdq_depth(struct hifc_hwdev *hwdev, u16 cmdq_depth)
+{
+ struct hifc_root_ctxt root_ctxt = {0};
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ err = hifc_global_func_id_get(hwdev, &root_ctxt.func_idx);
+ if (err)
+ return err;
+
+ root_ctxt.ppf_idx = hifc_ppf_idx(hwdev);
+
+ root_ctxt.set_cmdq_depth = 1;
+ root_ctxt.cmdq_depth = (u8)ilog2(cmdq_depth);
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_VAT_SET,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, 0);
+ if (err || !out_size || root_ctxt.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set cmdq depth, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, root_ctxt.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static u16 get_hw_rx_buf_size(int rx_buf_sz)
+{
+#define DEFAULT_RX_BUF_SIZE ((u16)0xB)
+ u16 num_hw_types =
+ sizeof(hifc_hw_rx_buf_size) /
+ sizeof(hifc_hw_rx_buf_size[0]);
+ u16 i;
+
+ for (i = 0; i < num_hw_types; i++) {
+ if (hifc_hw_rx_buf_size[i] == rx_buf_sz)
+ return i;
+ }
+
+ pr_err("Chip can't support rx buf size of %d\n", rx_buf_sz);
+
+ return DEFAULT_RX_BUF_SIZE;
+}
+
+int hifc_set_root_ctxt(void *hwdev, u16 rq_depth, u16 sq_depth, int rx_buf_sz)
+{
+ struct hifc_root_ctxt root_ctxt = {0};
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ err = hifc_global_func_id_get(hwdev, &root_ctxt.func_idx);
+ if (err)
+ return err;
+
+ root_ctxt.ppf_idx = hifc_ppf_idx(hwdev);
+
+ root_ctxt.set_cmdq_depth = 0;
+ root_ctxt.cmdq_depth = 0;
+
+ root_ctxt.lro_en = 1;
+
+ root_ctxt.rq_depth = (u16)ilog2(rq_depth);
+ root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz);
+ root_ctxt.sq_depth = (u16)ilog2(sq_depth);
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_VAT_SET,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, 0);
+ if (err || !out_size || root_ctxt.status) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Failed to set root context, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, root_ctxt.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hifc_clean_root_ctxt(void *hwdev)
+{
+ struct hifc_root_ctxt root_ctxt = {0};
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ err = hifc_global_func_id_get(hwdev, &root_ctxt.func_idx);
+ if (err)
+ return err;
+
+ root_ctxt.ppf_idx = hifc_ppf_idx(hwdev);
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_VAT_SET,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, 0);
+ if (err || !out_size || root_ctxt.status) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Failed to set root context, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, root_ctxt.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
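+/* Poll the PF status roughly every 10 ms for HIFC_FLR_TIMEOUT (1000)
+ * iterations, i.e. about 10 seconds, until the firmware reports
+ * HIFC_PF_STATUS_FLR_FINISH_FLAG.
+ */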
+static int wait_for_flr_finish(struct hifc_hwif *hwif)
+{
+ u32 cnt = 0;
+ enum hifc_pf_status status;
+
+ while (cnt < HIFC_FLR_TIMEOUT) {
+ status = hifc_get_pf_status(hwif);
+ if (status == HIFC_PF_STATUS_FLR_FINISH_FLAG) {
+ hifc_set_pf_status(hwif, HIFC_PF_STATUS_ACTIVE_FLAG);
+ return 0;
+ }
+
+ usleep_range(9900, 10000);
+ cnt++;
+ }
+
+ return -EFAULT;
+}
+
+#define HIFC_WAIT_CMDQ_IDLE_TIMEOUT 5000
+
+static int wait_cmdq_stop(struct hifc_hwdev *hwdev)
+{
+ enum hifc_cmdq_type cmdq_type;
+ struct hifc_cmdqs *cmdqs = hwdev->cmdqs;
+ u32 cnt = 0;
+ int err = 0;
+
+ if (!(cmdqs->status & HIFC_CMDQ_ENABLE))
+ return 0;
+
+ cmdqs->status &= ~HIFC_CMDQ_ENABLE;
+
+ while (cnt < HIFC_WAIT_CMDQ_IDLE_TIMEOUT && hwdev->chip_present_flag) {
+ err = 0;
+ cmdq_type = HIFC_CMDQ_SYNC;
+ for (; cmdq_type < HIFC_MAX_CMDQ_TYPES; cmdq_type++) {
+ if (!hifc_cmdq_idle(&cmdqs->cmdq[cmdq_type])) {
+ err = -EBUSY;
+ break;
+ }
+ }
+
+ if (!err)
+ return 0;
+
+ usleep_range(500, 1000);
+ cnt++;
+ }
+
+ cmdq_type = HIFC_CMDQ_SYNC;
+ for (; cmdq_type < HIFC_MAX_CMDQ_TYPES; cmdq_type++) {
+ if (!hifc_cmdq_idle(&cmdqs->cmdq[cmdq_type]))
+ sdk_err(hwdev->dev_hdl, "Cmdq %d busy\n", cmdq_type);
+ }
+
+ cmdqs->status |= HIFC_CMDQ_ENABLE;
+
+ return err;
+}
+
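+/* PF rx/tx flush sequence: give the ucode time to stop I/O, wait for the
+ * cmdqs to drain, disable the doorbell, ask the mgmt CPU to flush the
+ * doorbell, kick off FLR (FLR_START flag plus HIFC_MGMT_CMD_START_FLR),
+ * wait for it to finish, then re-enable the doorbell and re-initialize the
+ * cmdq contexts. Each step's error is recorded but the sequence continues.
+ */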
+static int hifc_pf_rx_tx_flush(struct hifc_hwdev *hwdev)
+{
+ struct hifc_hwif *hwif = hwdev->hwif;
+ struct hifc_clear_doorbell clear_db = {0};
+ struct hifc_clear_resource clr_res = {0};
+ u16 out_size, func_id;
+ int err;
+ int ret = 0;
+
+ /* wait for the ucode to stop I/O */
+ msleep(100);
+
+ err = wait_cmdq_stop(hwdev);
+ if (err) {
+ sdk_warn(hwdev->dev_hdl, "CMDQ is still working, please check CMDQ timeout value is reasonable\n");
+ ret = err;
+ }
+
+ hifc_disable_doorbell(hwif);
+
+ out_size = sizeof(clear_db);
+ func_id = hifc_global_func_id_hw(hwdev);
+ clear_db.func_idx = func_id;
+ clear_db.ppf_idx = HIFC_HWIF_PPF_IDX(hwif);
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_FLUSH_DOORBELL, &clear_db,
+ sizeof(clear_db), &clear_db, &out_size, 0);
+ if (err || !out_size || clear_db.status) {
+ sdk_warn(hwdev->dev_hdl, "Failed to flush doorbell, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, clear_db.status, out_size);
+ if (err)
+ ret = err;
+ else
+ ret = -EFAULT;
+ }
+
+ hifc_set_pf_status(hwif, HIFC_PF_STATUS_FLR_START_FLAG);
+
+ clr_res.func_idx = func_id;
+ clr_res.ppf_idx = HIFC_HWIF_PPF_IDX(hwif);
+
+ err = hifc_msg_to_mgmt_no_ack(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_START_FLR, &clr_res,
+ sizeof(clr_res));
+ if (err) {
+ sdk_warn(hwdev->dev_hdl, "Failed to notice flush message\n");
+ ret = err;
+ }
+
+ err = wait_for_flr_finish(hwif);
+ if (err) {
+ sdk_warn(hwdev->dev_hdl, "Wait firmware FLR timeout\n");
+ ret = err;
+ }
+
+ hifc_enable_doorbell(hwif);
+
+ err = hifc_reinit_cmdq_ctxts(hwdev);
+ if (err) {
+ sdk_warn(hwdev->dev_hdl, "Failed to reinit cmdq\n");
+ ret = err;
+ }
+
+ return ret;
+}
+
+int hifc_func_rx_tx_flush(void *hwdev)
+{
+ struct hifc_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!dev->chip_present_flag)
+ return 0;
+
+ return hifc_pf_rx_tx_flush(dev);
+}
+
+int hifc_get_interrupt_cfg(void *hwdev,
+ struct nic_interrupt_info *interrupt_info)
+{
+ struct hifc_hwdev *nic_hwdev = hwdev;
+ struct hifc_msix_config msix_cfg = {0};
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev || !interrupt_info)
+ return -EINVAL;
+
+ err = hifc_global_func_id_get(hwdev, &msix_cfg.func_id);
+ if (err)
+ return err;
+
+ msix_cfg.msix_index = interrupt_info->msix_index;
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_MSI_CTRL_REG_RD_BY_UP,
+ &msix_cfg, sizeof(msix_cfg),
+ &msix_cfg, &out_size, 0);
+ if (err || !out_size || msix_cfg.status) {
+ sdk_err(nic_hwdev->dev_hdl, "Failed to get interrupt config, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, msix_cfg.status, out_size);
+ return -EINVAL;
+ }
+
+ interrupt_info->lli_credit_limit = msix_cfg.lli_credit_cnt;
+ interrupt_info->lli_timer_cfg = msix_cfg.lli_tmier_cnt;
+ interrupt_info->pending_limt = msix_cfg.pending_cnt;
+ interrupt_info->coalesc_timer_cfg = msix_cfg.coalesct_timer_cnt;
+ interrupt_info->resend_timer_cfg = msix_cfg.resend_timer_cnt;
+
+ return 0;
+}
+
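+/* hifc_set_interrupt_cfg() is a read-modify-write of the per-vector MSI-X
+ * attributes: it first fetches the current configuration with
+ * hifc_get_interrupt_cfg() and only overrides the LLI fields and/or the
+ * coalescing fields whose lli_set / interrupt_coalesc_set flag is set.
+ */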
+int hifc_set_interrupt_cfg(void *hwdev,
+ struct nic_interrupt_info interrupt_info)
+{
+ struct hifc_hwdev *nic_hwdev = hwdev;
+ struct hifc_msix_config msix_cfg = {0};
+ struct nic_interrupt_info temp_info;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ temp_info.msix_index = interrupt_info.msix_index;
+
+ err = hifc_get_interrupt_cfg(hwdev, &temp_info);
+ if (err)
+ return -EINVAL;
+
+ err = hifc_global_func_id_get(hwdev, &msix_cfg.func_id);
+ if (err)
+ return err;
+
+ msix_cfg.msix_index = (u16)interrupt_info.msix_index;
+ msix_cfg.lli_credit_cnt = temp_info.lli_credit_limit;
+ msix_cfg.lli_tmier_cnt = temp_info.lli_timer_cfg;
+ msix_cfg.pending_cnt = temp_info.pending_limt;
+ msix_cfg.coalesct_timer_cnt = temp_info.coalesc_timer_cfg;
+ msix_cfg.resend_timer_cnt = temp_info.resend_timer_cfg;
+
+ if (interrupt_info.lli_set) {
+ msix_cfg.lli_credit_cnt = interrupt_info.lli_credit_limit;
+ msix_cfg.lli_tmier_cnt = interrupt_info.lli_timer_cfg;
+ }
+
+ if (interrupt_info.interrupt_coalesc_set) {
+ msix_cfg.pending_cnt = interrupt_info.pending_limt;
+ msix_cfg.coalesct_timer_cnt = interrupt_info.coalesc_timer_cfg;
+ msix_cfg.resend_timer_cnt = interrupt_info.resend_timer_cfg;
+ }
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_MSI_CTRL_REG_WR_BY_UP,
+ &msix_cfg, sizeof(msix_cfg),
+ &msix_cfg, &out_size, 0);
+ if (err || !out_size || msix_cfg.status) {
+ sdk_err(nic_hwdev->dev_hdl, "Failed to set interrupt config, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, msix_cfg.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define HIFC_MSIX_CNT_RESEND_TIMER_SHIFT 29
+#define HIFC_MSIX_CNT_RESEND_TIMER_MASK 0x7U
+
+#define HIFC_MSIX_CNT_SET(val, member) \
+ (((val) & HIFC_MSIX_CNT_##member##_MASK) << \
+ HIFC_MSIX_CNT_##member##_SHIFT)
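+/* Per the mask/shift above, the resend-timer clear request occupies bits
+ * [31:29] of the HIFC_CSR_MSIX_CNT_ADDR(msix_idx) register written in
+ * hifc_misx_intr_clear_resend_bit() below.
+ */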
+
+void hifc_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en)
+{
+ struct hifc_hwif *hwif;
+ u32 msix_ctrl = 0, addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hifc_hwdev *)hwdev)->hwif;
+
+ msix_ctrl = HIFC_MSIX_CNT_SET(clear_resend_en, RESEND_TIMER);
+
+ addr = HIFC_CSR_MSIX_CNT_ADDR(msix_idx);
+
+ hifc_hwif_write_reg(hwif, addr, msix_ctrl);
+}
+
+static int init_aeqs_msix_attr(struct hifc_hwdev *hwdev)
+{
+ struct hifc_aeqs *aeqs = hwdev->aeqs;
+ struct nic_interrupt_info info = {0};
+ struct hifc_eq *eq;
+ u16 q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HIFC_DEAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HIFC_DEAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = HIFC_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
+ eq = &aeqs->aeq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = hifc_set_interrupt_cfg(hwdev, info);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Set msix attr for aeq %d failed\n",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int init_ceqs_msix_attr(struct hifc_hwdev *hwdev)
+{
+ struct hifc_ceqs *ceqs = hwdev->ceqs;
+ struct nic_interrupt_info info = {0};
+ struct hifc_eq *eq;
+ u16 q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HIFC_DEAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HIFC_DEAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = HIFC_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
+ eq = &ceqs->ceq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = hifc_set_interrupt_cfg(hwdev, info);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Set msix attr for ceq %d failed\n",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * set_pf_dma_attr_entry - set the dma attributes for entry
+ * @hwdev: the pointer to hw device
+ * @entry_idx: the entry index in the dma table
+ * @st: PCIE TLP steering tag
+ * @at: PCIE TLP AT field
+ * @ph: PCIE TLP Processing Hint field
+ * @no_snooping: PCIE TLP No snooping
+ * @tph_en: PCIE TLP Processing Hint Enable
+ **/
+static void set_pf_dma_attr_entry(struct hifc_hwdev *hwdev, u32 entry_idx,
+ u8 st, u8 at, u8 ph,
+ enum hifc_pcie_nosnoop no_snooping,
+ enum hifc_pcie_tph tph_en)
+{
+ u32 addr, val, dma_attr_entry;
+
+ /* Read Modify Write */
+ addr = HIFC_CSR_DMA_ATTR_TBL_ADDR(entry_idx);
+
+ val = hifc_hwif_read_reg(hwdev->hwif, addr);
+ val = HIFC_DMA_ATTR_ENTRY_CLEAR(val, ST) &
+ HIFC_DMA_ATTR_ENTRY_CLEAR(val, AT) &
+ HIFC_DMA_ATTR_ENTRY_CLEAR(val, PH) &
+ HIFC_DMA_ATTR_ENTRY_CLEAR(val, NO_SNOOPING) &
+ HIFC_DMA_ATTR_ENTRY_CLEAR(val, TPH_EN);
+
+ dma_attr_entry = HIFC_DMA_ATTR_ENTRY_SET(st, ST) |
+ HIFC_DMA_ATTR_ENTRY_SET(at, AT) |
+ HIFC_DMA_ATTR_ENTRY_SET(ph, PH) |
+ HIFC_DMA_ATTR_ENTRY_SET(no_snooping, NO_SNOOPING) |
+ HIFC_DMA_ATTR_ENTRY_SET(tph_en, TPH_EN);
+
+ val |= dma_attr_entry;
+ hifc_hwif_write_reg(hwdev->hwif, addr, val);
+}
+
+/**
+ * dma_attr_table_init - initialize the default dma attributes
+ * @hwdev: the pointer to hw device
+ * Return: 0 - success, negative - failure
+ **/
+static int dma_attr_table_init(struct hifc_hwdev *hwdev)
+{
+ int err = 0;
+
+ set_pf_dma_attr_entry(hwdev, PCIE_MSIX_ATTR_ENTRY,
+ HIFC_PCIE_ST_DISABLE,
+ HIFC_PCIE_AT_DISABLE,
+ HIFC_PCIE_PH_DISABLE,
+ HIFC_PCIE_SNOOP,
+ HIFC_PCIE_TPH_DISABLE);
+
+ return err;
+}
+
+static int resources_state_set(struct hifc_hwdev *hwdev,
+ enum hifc_res_state state)
+{
+ struct hifc_cmd_set_res_state res_state = {0};
+ u16 out_size = sizeof(res_state);
+ int err;
+
+ err = hifc_global_func_id_get(hwdev, &res_state.func_idx);
+ if (err)
+ return err;
+
+ res_state.state = state;
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_RES_STATE_SET,
+ &res_state, sizeof(res_state),
+ &res_state, &out_size, 0);
+ if (err || !out_size || res_state.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set resources state, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, res_state.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static void comm_mgmt_msg_handler(void *hwdev, void *pri_handle, u8 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt = pri_handle;
+ u8 cmd_idx;
+ u32 *mem;
+ u16 i;
+
+ for (cmd_idx = 0; cmd_idx < pf_to_mgmt->proc.cmd_num; cmd_idx++) {
+ if (cmd == pf_to_mgmt->proc.info[cmd_idx].cmd) {
+ if (!pf_to_mgmt->proc.info[cmd_idx].proc) {
+ sdk_warn(pf_to_mgmt->hwdev->dev_hdl,
+ "PF recv up comm msg handle null, cmd(0x%x)\n",
+ cmd);
+ } else {
+ pf_to_mgmt->proc.info[cmd_idx].proc(hwdev,
+ buf_in, in_size, buf_out, out_size);
+ }
+
+ return;
+ }
+ }
+
+ sdk_warn(pf_to_mgmt->hwdev->dev_hdl, "Received mgmt cpu event: 0x%x\n",
+ cmd);
+
+ mem = buf_in;
+ for (i = 0; i < (in_size / sizeof(u32)); i++) {
+ pr_info("0x%x\n", *mem);
+ mem++;
+ }
+
+ *out_size = 0;
+}
+
+static int hifc_comm_aeqs_init(struct hifc_hwdev *hwdev)
+{
+ struct irq_info aeq_irqs[HIFC_MAX_AEQS] = {{0} };
+ u16 num_aeqs, resp_num_irq = 0, i;
+ int err;
+
+ num_aeqs = HIFC_HWIF_NUM_AEQS(hwdev->hwif);
+ if (num_aeqs > HIFC_MAX_AEQS) {
+ sdk_warn(hwdev->dev_hdl, "Adjust aeq num to %d\n",
+ HIFC_MAX_AEQS);
+ num_aeqs = HIFC_MAX_AEQS;
+ }
+ err = hifc_alloc_irqs(hwdev, SERVICE_T_INTF, num_aeqs, aeq_irqs,
+ &resp_num_irq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc aeq irqs, num_aeqs: %d\n",
+ num_aeqs);
+ return err;
+ }
+
+ if (resp_num_irq < num_aeqs) {
+ sdk_warn(hwdev->dev_hdl, "Adjust aeq num to %d\n",
+ resp_num_irq);
+ num_aeqs = resp_num_irq;
+ }
+
+ err = hifc_aeqs_init(hwdev, num_aeqs, aeq_irqs);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeqs\n");
+ goto aeqs_init_err;
+ }
+
+ set_bit(HIFC_HWDEV_AEQ_INITED, &hwdev->func_state);
+
+ return 0;
+
+aeqs_init_err:
+ for (i = 0; i < num_aeqs; i++)
+ hifc_free_irq(hwdev, SERVICE_T_INTF, aeq_irqs[i].irq_id);
+
+ return err;
+}
+
+static void hifc_comm_aeqs_free(struct hifc_hwdev *hwdev)
+{
+ struct irq_info aeq_irqs[HIFC_MAX_AEQS] = {{0} };
+ u16 num_irqs, i;
+
+ clear_bit(HIFC_HWDEV_AEQ_INITED, &hwdev->func_state);
+
+ hifc_get_aeq_irqs(hwdev, aeq_irqs, &num_irqs);
+ hifc_aeqs_free(hwdev);
+ for (i = 0; i < num_irqs; i++)
+ hifc_free_irq(hwdev, SERVICE_T_INTF, aeq_irqs[i].irq_id);
+}
+
+static int hifc_comm_ceqs_init(struct hifc_hwdev *hwdev)
+{
+ struct irq_info ceq_irqs[HIFC_MAX_CEQS] = {{0} };
+ u16 num_ceqs, resp_num_irq = 0, i;
+ int err;
+
+ num_ceqs = HIFC_HWIF_NUM_CEQS(hwdev->hwif);
+ if (num_ceqs > HIFC_MAX_CEQS) {
+ sdk_warn(hwdev->dev_hdl, "Adjust ceq num to %d\n",
+ HIFC_MAX_CEQS);
+ num_ceqs = HIFC_MAX_CEQS;
+ }
+
+ err = hifc_alloc_irqs(hwdev, SERVICE_T_INTF, num_ceqs, ceq_irqs,
+ &resp_num_irq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc ceq irqs, num_ceqs: %d\n",
+ num_ceqs);
+ return err;
+ }
+
+ if (resp_num_irq < num_ceqs) {
+ sdk_warn(hwdev->dev_hdl, "Adjust ceq num to %d\n",
+ resp_num_irq);
+ num_ceqs = resp_num_irq;
+ }
+
+ err = hifc_ceqs_init(hwdev, num_ceqs, ceq_irqs);
+ if (err) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to init ceqs, err:%d\n", err);
+ goto ceqs_init_err;
+ }
+
+ return 0;
+
+ceqs_init_err:
+ for (i = 0; i < num_ceqs; i++)
+ hifc_free_irq(hwdev, SERVICE_T_INTF, ceq_irqs[i].irq_id);
+
+ return err;
+}
+
+static void hifc_comm_ceqs_free(struct hifc_hwdev *hwdev)
+{
+ struct irq_info ceq_irqs[HIFC_MAX_CEQS] = {{0} };
+ u16 num_irqs;
+ int i;
+
+ hifc_get_ceq_irqs(hwdev, ceq_irqs, &num_irqs);
+ hifc_ceqs_free(hwdev);
+ for (i = 0; i < num_irqs; i++)
+ hifc_free_irq(hwdev, SERVICE_T_INTF, ceq_irqs[i].irq_id);
+}
+
+static int hifc_comm_pf_to_mgmt_init(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ if (hifc_func_type(hwdev) == TYPE_VF ||
+ !FUNC_SUPPORT_MGMT(hwdev))
+ return 0; /* VFs do not support sending messages to mgmt directly */
+
+ err = hifc_pf_to_mgmt_init(hwdev);
+ if (err)
+ return err;
+
+ hifc_aeq_register_hw_cb(hwdev, HIFC_MSG_FROM_MGMT_CPU,
+ hifc_mgmt_msg_aeqe_handler);
+
+ hifc_register_mgmt_msg_cb(hwdev, HIFC_MOD_COMM,
+ hwdev->pf_to_mgmt, comm_mgmt_msg_handler);
+
+ set_bit(HIFC_HWDEV_MGMT_INITED, &hwdev->func_state);
+
+ return 0;
+}
+
+static void hifc_comm_pf_to_mgmt_free(struct hifc_hwdev *hwdev)
+{
+ if (hifc_func_type(hwdev) == TYPE_VF ||
+ !FUNC_SUPPORT_MGMT(hwdev))
+ return; /* VFs do not support sending messages to mgmt directly */
+
+ hifc_unregister_mgmt_msg_cb(hwdev, HIFC_MOD_COMM);
+
+ hifc_aeq_unregister_hw_cb(hwdev, HIFC_MSG_FROM_MGMT_CPU);
+
+ hifc_pf_to_mgmt_free(hwdev);
+}
+
+static int hifc_comm_clp_to_mgmt_init(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ if (hifc_func_type(hwdev) == TYPE_VF ||
+ !FUNC_SUPPORT_MGMT(hwdev))
+ return 0;
+
+ err = hifc_clp_pf_to_mgmt_init(hwdev);
+ if (err)
+ return err;
+
+ set_bit(HIFC_HWDEV_CLP_INITED, &hwdev->func_state);
+
+ return 0;
+}
+
+static void hifc_comm_clp_to_mgmt_free(struct hifc_hwdev *hwdev)
+{
+ if (hifc_func_type(hwdev) == TYPE_VF ||
+ !FUNC_SUPPORT_MGMT(hwdev))
+ return;
+
+ clear_bit(HIFC_HWDEV_CLP_INITED, &hwdev->func_state);
+ hifc_clp_pf_to_mgmt_free(hwdev);
+}
+
+static int hifc_comm_cmdqs_init(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ err = hifc_cmdqs_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmd queues\n");
+ return err;
+ }
+
+ hifc_ceq_register_cb(hwdev, HIFC_CMDQ, hifc_cmdq_ceq_handler);
+
+ err = hifc_set_cmdq_depth(hwdev, HIFC_CMDQ_DEPTH);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to set cmdq depth\n");
+ goto set_cmdq_depth_err;
+ }
+
+ return 0;
+
+set_cmdq_depth_err:
+ hifc_cmdqs_free(hwdev);
+
+ return err;
+}
+
+static void hifc_comm_cmdqs_free(struct hifc_hwdev *hwdev)
+{
+ hifc_ceq_unregister_cb(hwdev, HIFC_CMDQ);
+ hifc_cmdqs_free(hwdev);
+}
+
+static int hifc_sync_mgmt_func_state(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ hifc_set_pf_status(hwdev->hwif, HIFC_PF_STATUS_ACTIVE_FLAG);
+
+ err = resources_state_set(hwdev, HIFC_RES_ACTIVE);
+ if (err) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to set function resources state\n");
+ goto resources_state_set_err;
+ }
+
+ hwdev->heartbeat_ehd.en = false;
+ if (HIFC_FUNC_TYPE(hwdev) == TYPE_PPF) {
+ /* heartbeat synchronization must happen after the PF active status is set */
+ hifc_comm_recv_mgmt_self_cmd_reg(
+ hwdev, HIFC_MGMT_CMD_HEARTBEAT_EVENT,
+ mgmt_heartbeat_event_handler);
+ }
+
+ return 0;
+
+resources_state_set_err:
+ hifc_set_pf_status(hwdev->hwif, HIFC_PF_STATUS_INIT);
+
+ return err;
+}
+
+static void hifc_unsync_mgmt_func_state(struct hifc_hwdev *hwdev)
+{
+ hifc_set_pf_status(hwdev->hwif, HIFC_PF_STATUS_INIT);
+
+ hwdev->heartbeat_ehd.en = false;
+ if (HIFC_FUNC_TYPE(hwdev) == TYPE_PPF) {
+ hifc_comm_recv_up_self_cmd_unreg(
+ hwdev, HIFC_MGMT_CMD_HEARTBEAT_EVENT);
+ }
+
+ resources_state_set(hwdev, HIFC_RES_CLEAN);
+}
+
+int hifc_set_vport_enable(void *hwdev, bool enable)
+{
+ struct hifc_hwdev *nic_hwdev = (struct hifc_hwdev *)hwdev;
+ struct hifc_vport_state en_state = {0};
+ u16 out_size = sizeof(en_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ err = hifc_global_func_id_get(hwdev, &en_state.func_id);
+ if (err)
+ return err;
+
+ en_state.state = enable ? 1 : 0;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HIFC_PORT_CMD_SET_VPORT_ENABLE,
+ &en_state, sizeof(en_state),
+ &en_state, &out_size);
+ if (err || !out_size || en_state.status) {
+ sdk_err(nic_hwdev->dev_hdl, "Failed to set vport state, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, en_state.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hifc_l2nic_reset_base(struct hifc_hwdev *hwdev, u16 reset_flag)
+{
+ struct hifc_l2nic_reset l2nic_reset = {0};
+ u16 out_size = sizeof(l2nic_reset);
+ int err = 0;
+
+ err = hifc_set_vport_enable(hwdev, false);
+ if (err)
+ return err;
+
+ msleep(100);
+
+ sdk_info(hwdev->dev_hdl, "L2nic reset flag 0x%x\n", reset_flag);
+
+ err = hifc_global_func_id_get(hwdev, &l2nic_reset.func_id);
+ if (err)
+ return err;
+
+ l2nic_reset.reset_flag = reset_flag;
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_L2NIC_RESET, &l2nic_reset,
+ sizeof(l2nic_reset), &l2nic_reset,
+ &out_size, 0);
+ if (err || !out_size || l2nic_reset.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to reset L2NIC resources, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, l2nic_reset.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int hifc_l2nic_reset(struct hifc_hwdev *hwdev)
+{
+ return hifc_l2nic_reset_base(hwdev, 0);
+}
+
+static int __get_func_misc_info(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ err = hifc_get_board_info(hwdev, &hwdev->board_info);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Get board info failed\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static int init_func_mode(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ err = __get_func_misc_info(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to get function msic information\n");
+ return err;
+ }
+
+ err = hifc_l2nic_reset(hwdev);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int __init_eqs_msix_attr(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ err = init_aeqs_msix_attr(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeqs msix attr\n");
+ return err;
+ }
+
+ err = init_ceqs_msix_attr(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init ceqs msix attr\n");
+ return err;
+ }
+
+ return 0;
+}
+
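+/* Bring-up order for the cmdq channel: DMA attribute table, completion event
+ * queues, EQ MSI-X attributes, default WQ page size, then the cmdqs
+ * themselves; on failure the steps already completed are unwound in reverse.
+ */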
+static int init_cmdqs_channel(struct hifc_hwdev *hwdev)
+{
+ u16 func_id;
+ int err;
+
+ dma_attr_table_init(hwdev);
+
+ err = hifc_comm_ceqs_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init completion event queues\n");
+ return err;
+ }
+
+ err = __init_eqs_msix_attr(hwdev);
+ if (err)
+ goto init_eqs_msix_err;
+
+ /* set default wq page_size */
+ hwdev->wq_page_size = HIFC_DEFAULT_WQ_PAGE_SIZE;
+
+ err = hifc_global_func_id_get(hwdev, &func_id);
+ if (err)
+ goto get_func_id_err;
+
+ err = hifc_set_wq_page_size(hwdev, func_id, hwdev->wq_page_size);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to set wq page size\n");
+ goto init_wq_pg_size_err;
+ }
+
+ err = hifc_comm_cmdqs_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmd queues\n");
+ goto cmdq_init_err;
+ }
+
+ set_bit(HIFC_HWDEV_CMDQ_INITED, &hwdev->func_state);
+
+ return 0;
+
+cmdq_init_err:
+ if (HIFC_FUNC_TYPE(hwdev) != TYPE_VF)
+ hifc_set_wq_page_size(hwdev, func_id, HIFC_HW_WQ_PAGE_SIZE);
+init_wq_pg_size_err:
+get_func_id_err:
+init_eqs_msix_err:
+ hifc_comm_ceqs_free(hwdev);
+
+ return err;
+}
+
+static int init_mgmt_channel(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ err = hifc_comm_clp_to_mgmt_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init clp\n");
+ return err;
+ }
+
+ err = hifc_comm_aeqs_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init async event queues\n");
+ goto aeqs_init_err;
+ }
+
+ err = hifc_comm_pf_to_mgmt_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init msg\n");
+ goto msg_init_err;
+ }
+
+ return err;
+
+msg_init_err:
+ hifc_comm_aeqs_free(hwdev);
+
+aeqs_init_err:
+ hifc_comm_clp_to_mgmt_free(hwdev);
+
+ return err;
+}
+
+/* initialize communication channel */
+int hifc_init_comm_ch(struct hifc_hwdev *hwdev)
+{
+ int err;
+
+ err = init_mgmt_channel(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init mgmt channel\n");
+ return err;
+ }
+
+ err = init_func_mode(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init function mode\n");
+ goto func_mode_err;
+ }
+
+ err = init_cmdqs_channel(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmdq channel\n");
+ goto init_cmdqs_channel_err;
+ }
+
+ err = hifc_sync_mgmt_func_state(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to synchronize mgmt function state\n");
+ goto sync_mgmt_func_err;
+ }
+
+ err = hifc_aeq_register_swe_cb(hwdev, HIFC_STATELESS_EVENT,
+ hifc_nic_sw_aeqe_handler);
+ if (err) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to register ucode aeqe handler\n");
+ goto register_ucode_aeqe_err;
+ }
+
+ set_bit(HIFC_HWDEV_COMM_CH_INITED, &hwdev->func_state);
+
+ return 0;
+
+register_ucode_aeqe_err:
+ hifc_unsync_mgmt_func_state(hwdev);
+sync_mgmt_func_err:
+ return err;
+
+init_cmdqs_channel_err:
+
+func_mode_err:
+ return err;
+}
+
+static void __uninit_comm_module(struct hifc_hwdev *hwdev,
+ enum hifc_hwdev_init_state init_state)
+{
+ u16 func_id;
+
+ switch (init_state) {
+ case HIFC_HWDEV_COMM_CH_INITED:
+ hifc_aeq_unregister_swe_cb(hwdev,
+ HIFC_STATELESS_EVENT);
+ hifc_unsync_mgmt_func_state(hwdev);
+ break;
+ case HIFC_HWDEV_CMDQ_INITED:
+ hifc_comm_cmdqs_free(hwdev);
+ /* A VF may only set a page size of 256K; any other value
+ * is rejected by the PF. The PF resets every VF's page
+ * size to 4K when SR-IOV is disabled.
+ */
+ if (HIFC_FUNC_TYPE(hwdev) != TYPE_VF) {
+ func_id = hifc_global_func_id_hw(hwdev);
+ hifc_set_wq_page_size(hwdev, func_id,
+ HIFC_HW_WQ_PAGE_SIZE);
+ }
+
+ hifc_comm_ceqs_free(hwdev);
+
+ break;
+ case HIFC_HWDEV_MBOX_INITED:
+ break;
+ case HIFC_HWDEV_MGMT_INITED:
+ hifc_comm_pf_to_mgmt_free(hwdev);
+ break;
+ case HIFC_HWDEV_AEQ_INITED:
+ hifc_comm_aeqs_free(hwdev);
+ break;
+ case HIFC_HWDEV_CLP_INITED:
+ hifc_comm_clp_to_mgmt_free(hwdev);
+ break;
+ default:
+ break;
+ }
+}
+
+#define HIFC_FUNC_STATE_BUSY_TIMEOUT 300
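+/* Tear down the communication channel by walking the init states from
+ * highest to lowest; before freeing each module, wait (up to roughly
+ * HIFC_FUNC_STATE_BUSY_TIMEOUT ms, polling about every 1 ms) for the BUSY
+ * flag set by hifc_force_complete_all() to clear.
+ */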
+void hifc_uninit_comm_ch(struct hifc_hwdev *hwdev)
+{
+ enum hifc_hwdev_init_state init_state = HIFC_HWDEV_COMM_CH_INITED;
+ int cnt;
+
+ while (init_state > HIFC_HWDEV_NONE_INITED) {
+ if (!test_bit(init_state, &hwdev->func_state)) {
+ init_state--;
+ continue;
+ }
+ clear_bit(init_state, &hwdev->func_state);
+
+ cnt = 0;
+ while (test_bit(HIFC_HWDEV_STATE_BUSY, &hwdev->func_state) &&
+ cnt++ <= HIFC_FUNC_STATE_BUSY_TIMEOUT)
+ usleep_range(900, 1000);
+
+ __uninit_comm_module(hwdev, init_state);
+
+ init_state--;
+ }
+}
+
+int hifc_slq_init(void *dev, int num_wqs)
+{
+ struct hifc_hwdev *hwdev = dev;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ hwdev->wqs = kzalloc(sizeof(*hwdev->wqs), GFP_KERNEL);
+ if (!hwdev->wqs)
+ return -ENOMEM;
+
+ err = hifc_wqs_alloc(hwdev->wqs, num_wqs, hwdev->dev_hdl);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc wqs\n");
+ kfree(hwdev->wqs);
+ hwdev->wqs = NULL;
+ }
+
+ return err;
+}
+
+void hifc_slq_uninit(void *dev)
+{
+ struct hifc_hwdev *hwdev = dev;
+
+ if (!hwdev)
+ return;
+
+ hifc_wqs_free(hwdev->wqs);
+
+ kfree(hwdev->wqs);
+}
+
+int hifc_slq_alloc(void *dev, u16 wqebb_size, u16 q_depth, u16 page_size,
+ u64 *cla_addr, void **handle)
+{
+ struct hifc_hwdev *hwdev = dev;
+ struct hifc_wq *wq;
+ int err;
+
+ if (!dev || !cla_addr || !handle)
+ return -EINVAL;
+
+ wq = kzalloc(sizeof(*wq), GFP_KERNEL);
+ if (!wq)
+ return -ENOMEM;
+
+ err = hifc_wq_allocate(hwdev->wqs, wq, wqebb_size, hwdev->wq_page_size,
+ q_depth, 0);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc wq\n");
+ kfree(wq);
+ return -EFAULT;
+ }
+
+ *cla_addr = wq->block_paddr;
+ *handle = wq;
+
+ return 0;
+}
+
+void hifc_slq_free(void *dev, void *handle)
+{
+ struct hifc_hwdev *hwdev = dev;
+
+ if (!hwdev || !handle)
+ return;
+
+ hifc_wq_free(hwdev->wqs, handle);
+ kfree(handle);
+}
+
+u64 hifc_slq_get_addr(void *handle, u16 index)
+{
+ if (!handle)
+ return 0; /* no wqe address */
+
+ return (u64)hifc_get_wqebb_addr(handle, index);
+}
+
+u64 hifc_slq_get_first_pageaddr(void *handle)
+{
+ struct hifc_wq *wq = handle;
+
+ if (!handle)
+ return 0; /* no wqe address */
+
+ return hifc_get_first_wqe_page_addr(wq);
+}
+
+int hifc_func_tmr_bitmap_set(void *hwdev, bool en)
+{
+ struct hifc_func_tmr_bitmap_op bitmap_op = {0};
+ u16 out_size = sizeof(bitmap_op);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ err = hifc_global_func_id_get(hwdev, &bitmap_op.func_idx);
+ if (err)
+ return err;
+
+ bitmap_op.ppf_idx = hifc_ppf_idx(hwdev);
+ if (en)
+ bitmap_op.op_id = FUNC_TMR_BITMAP_ENABLE;
+ else
+ bitmap_op.op_id = FUNC_TMR_BITMAP_DISABLE;
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_FUNC_TMR_BITMAT_SET,
+ &bitmap_op, sizeof(bitmap_op),
+ &bitmap_op, &out_size, 0);
+ if (err || !out_size || bitmap_op.status) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Failed to set timer bitmap, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, bitmap_op.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int ppf_ht_gpa_set(struct hifc_hwdev *hwdev, struct hifc_page_addr *pg0,
+ struct hifc_page_addr *pg1)
+{
+ struct comm_info_ht_gpa_set ht_gpa_set = {0};
+ u16 out_size = sizeof(ht_gpa_set);
+ int ret;
+
+ pg0->virt_addr = dma_zalloc_coherent(hwdev->dev_hdl,
+ HIFC_HT_GPA_PAGE_SIZE,
+ &pg0->phys_addr, GFP_KERNEL);
+ if (!pg0->virt_addr) {
+ sdk_err(hwdev->dev_hdl, "Alloc pg0 page addr failed\n");
+ return -EFAULT;
+ }
+
+ pg1->virt_addr = dma_zalloc_coherent(hwdev->dev_hdl,
+ HIFC_HT_GPA_PAGE_SIZE,
+ &pg1->phys_addr, GFP_KERNEL);
+ if (!pg1->virt_addr) {
+ sdk_err(hwdev->dev_hdl, "Alloc pg1 page addr failed\n");
+ return -EFAULT;
+ }
+
+ ht_gpa_set.page_pa0 = pg0->phys_addr;
+ ht_gpa_set.page_pa1 = pg1->phys_addr;
+ sdk_info(hwdev->dev_hdl, "PPF ht gpa set: page_addr0.pa=0x%llx, page_addr1.pa=0x%llx\n",
+ pg0->phys_addr, pg1->phys_addr);
+ ret = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_PPF_HT_GPA_SET,
+ &ht_gpa_set, sizeof(ht_gpa_set),
+ &ht_gpa_set, &out_size, 0);
+ if (ret || !out_size || ht_gpa_set.status) {
+ sdk_warn(hwdev->dev_hdl, "PPF ht gpa set failed, ret: %d, status: 0x%x, out_size: 0x%x\n",
+ ret, ht_gpa_set.status, out_size);
+ return -EFAULT;
+ }
+
+ hwdev->page_pa0.phys_addr = pg0->phys_addr;
+ hwdev->page_pa0.virt_addr = pg0->virt_addr;
+
+ hwdev->page_pa1.phys_addr = pg1->phys_addr;
+ hwdev->page_pa1.virt_addr = pg1->virt_addr;
+
+ return 0;
+}
+
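+/* Retry the hash-table GPA setup up to HIFC_PPF_HT_GPA_SET_RETRY_TIMES (10)
+ * times, each attempt with a freshly allocated page pair; pages from failed
+ * attempts are freed afterwards, while the successful pair is kept in
+ * hwdev->page_pa0/page_pa1 (stashed by ppf_ht_gpa_set() above).
+ */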
+int hifc_ppf_ht_gpa_init(struct hifc_hwdev *hwdev)
+{
+ int ret;
+ int i;
+ int j;
+ int size;
+
+ struct hifc_page_addr page_addr0[HIFC_PPF_HT_GPA_SET_RETRY_TIMES];
+ struct hifc_page_addr page_addr1[HIFC_PPF_HT_GPA_SET_RETRY_TIMES];
+
+ size = HIFC_PPF_HT_GPA_SET_RETRY_TIMES * sizeof(page_addr0[0]);
+ memset(page_addr0, 0, size);
+ memset(page_addr1, 0, size);
+
+ for (i = 0; i < HIFC_PPF_HT_GPA_SET_RETRY_TIMES; i++) {
+ ret = ppf_ht_gpa_set(hwdev, &page_addr0[i], &page_addr1[i]);
+ if (!ret)
+ break;
+ }
+
+ for (j = 0; j < i; j++) {
+ if (page_addr0[j].virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl,
+ HIFC_HT_GPA_PAGE_SIZE,
+ page_addr0[j].virt_addr,
+ page_addr0[j].phys_addr);
+ page_addr0[j].virt_addr = NULL;
+ }
+ if (page_addr1[j].virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl,
+ HIFC_HT_GPA_PAGE_SIZE,
+ page_addr1[j].virt_addr,
+ page_addr1[j].phys_addr);
+ page_addr1[j].virt_addr = NULL;
+ }
+ }
+
+ if (i >= HIFC_PPF_HT_GPA_SET_RETRY_TIMES) {
+ sdk_err(hwdev->dev_hdl, "PPF ht gpa init failed, retry times: %d\n",
+ i);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+void hifc_ppf_ht_gpa_deinit(struct hifc_hwdev *hwdev)
+{
+ if (hwdev->page_pa0.virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HIFC_HT_GPA_PAGE_SIZE,
+ hwdev->page_pa0.virt_addr,
+ hwdev->page_pa0.phys_addr);
+ hwdev->page_pa0.virt_addr = NULL;
+ }
+
+ if (hwdev->page_pa1.virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HIFC_HT_GPA_PAGE_SIZE,
+ hwdev->page_pa1.virt_addr,
+ hwdev->page_pa1.phys_addr);
+ hwdev->page_pa1.virt_addr = NULL;
+ }
+}
+
+static int set_ppf_tmr_status(struct hifc_hwdev *hwdev,
+ enum ppf_tmr_status status)
+{
+ struct hifc_ppf_tmr_op op = {0};
+ u16 out_size = sizeof(op);
+ int err = 0;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hifc_func_type(hwdev) != TYPE_PPF)
+ return -EFAULT;
+
+ if (status == HIFC_PPF_TMR_FLAG_START) {
+ err = hifc_ppf_ht_gpa_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "PPF ht gpa init fail!\n");
+ return -EFAULT;
+ }
+ } else {
+ hifc_ppf_ht_gpa_deinit(hwdev);
+ }
+
+ op.op_id = status;
+ op.ppf_idx = hifc_ppf_idx(hwdev);
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_PPF_TMR_SET, &op,
+ sizeof(op), &op, &out_size, 0);
+ if (err || !out_size || op.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ppf timer, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, op.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hifc_ppf_tmr_start(void *hwdev)
+{
+ if (!hwdev) {
+ pr_err("Hwdev pointer is NULL for starting ppf timer\n");
+ return -EINVAL;
+ }
+
+ return set_ppf_tmr_status(hwdev, HIFC_PPF_TMR_FLAG_START);
+}
+
+int hifc_ppf_tmr_stop(void *hwdev)
+{
+ if (!hwdev) {
+ pr_err("Hwdev pointer is NULL for stop ppf timer\n");
+ return -EINVAL;
+ }
+
+ return set_ppf_tmr_status(hwdev, HIFC_PPF_TMR_FLAG_STOP);
+}
+
+int hifc_set_wq_page_size(struct hifc_hwdev *hwdev, u16 func_idx,
+ u32 page_size)
+{
+ struct hifc_wq_page_size page_size_info = {0};
+ u16 out_size = sizeof(page_size_info);
+ int err;
+
+ page_size_info.func_idx = func_idx;
+ page_size_info.ppf_idx = hifc_ppf_idx(hwdev);
+ page_size_info.page_size = HIFC_PAGE_SIZE_HW(page_size);
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_PAGESIZE_SET,
+ &page_size_info, sizeof(page_size_info),
+ &page_size_info, &out_size, 0);
+ if (err || !out_size || page_size_info.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set wq page size, err: %d, status: 0x%x, out_size: 0x%0x\n",
+ err, page_size_info.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
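+/* Events from COMM/L2NIC/HILINK are acknowledged before being handled,
+ * except GET_HOST_INFO and HEARTBEAT_EVENT, which presumably get their
+ * response only after processing.
+ */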
+bool hifc_mgmt_event_ack_first(u8 mod, u8 cmd)
+{
+ if ((mod == HIFC_MOD_COMM && cmd == HIFC_MGMT_CMD_GET_HOST_INFO) ||
+ (mod == HIFC_MOD_COMM && cmd == HIFC_MGMT_CMD_HEARTBEAT_EVENT))
+ return false;
+
+ if (mod == HIFC_MOD_COMM || mod == HIFC_MOD_L2NIC ||
+ mod == HIFC_MOD_HILINK)
+ return true;
+
+ return false;
+}
+
+#define FAULT_SHOW_STR_LEN 16
+
+static void chip_fault_show(struct hifc_hwdev *hwdev,
+ struct hifc_fault_event *event)
+{
+ char fault_level[FAULT_LEVEL_MAX][FAULT_SHOW_STR_LEN + 1] = {
+ "fatal", "reset", "flr", "general", "suggestion"};
+ char level_str[FAULT_SHOW_STR_LEN + 1];
+ struct hifc_fault_event_stats *fault;
+ u8 node_id, level;
+ u32 pos, base;
+
+ fault = &hwdev->hw_stats.fault_event_stats;
+
+ memset(level_str, 0, FAULT_SHOW_STR_LEN + 1);
+ level = event->event.chip.err_level;
+ if (level < FAULT_LEVEL_MAX)
+ strncpy(level_str, fault_level[level],
+ FAULT_SHOW_STR_LEN);
+ else
+ strncpy(level_str, "Unknown", FAULT_SHOW_STR_LEN);
+
+ if (level == FAULT_LEVEL_SERIOUS_FLR) {
+ sdk_err(hwdev->dev_hdl, "err_level: %d [%s], flr func_id: %d\n",
+ level, level_str, event->event.chip.func_id);
+ atomic_inc(&fault->fault_type_stat[event->type]);
+ }
+ sdk_err(hwdev->dev_hdl, "module_id: 0x%x, err_type: 0x%x, err_level: %d[%s], err_csr_addr: 0x%08x, err_csr_value: 0x%08x\n",
+ event->event.chip.node_id,
+ event->event.chip.err_type, level, level_str,
+ event->event.chip.err_csr_addr,
+ event->event.chip.err_csr_value);
+
+ node_id = event->event.chip.node_id;
+ atomic_inc(&fault->chip_fault_stats[node_id][level]);
+
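+ /* chip_fault_stats is indexed as a flattened
+ * [node_id][err_level][err_type] array:
+ * pos = node_id * FAULT_LEVEL_MAX * HIFC_CHIP_ERROR_TYPE_MAX
+ * + err_level * HIFC_CHIP_ERROR_TYPE_MAX + err_type
+ */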
+ base = event->event.chip.node_id * FAULT_LEVEL_MAX *
+ HIFC_CHIP_ERROR_TYPE_MAX;
+ pos = base + HIFC_CHIP_ERROR_TYPE_MAX * level +
+ event->event.chip.err_type;
+ if (pos < HIFC_CHIP_FAULT_SIZE)
+ hwdev->chip_fault_stats[pos]++;
+}
+
+static void fault_report_show(struct hifc_hwdev *hwdev,
+ struct hifc_fault_event *event)
+{
+ char fault_type[FAULT_TYPE_MAX][FAULT_SHOW_STR_LEN + 1] = {
+ "chip", "ucode", "mem rd timeout", "mem wr timeout",
+ "reg rd timeout", "reg wr timeout", "phy fault"};
+ char type_str[FAULT_SHOW_STR_LEN + 1];
+ struct hifc_fault_event_stats *fault;
+
+ sdk_err(hwdev->dev_hdl, "Fault event report received, func_id: %d.\n",
+ hifc_global_func_id(hwdev));
+
+ memset(type_str, 0, FAULT_SHOW_STR_LEN + 1);
+ if (event->type < FAULT_TYPE_MAX)
+ strncpy(type_str, fault_type[event->type], FAULT_SHOW_STR_LEN);
+ else
+ strncpy(type_str, "Unknown", FAULT_SHOW_STR_LEN);
+
+ sdk_err(hwdev->dev_hdl, "Fault type: %d [%s]\n", event->type, type_str);
+ sdk_err(hwdev->dev_hdl, "Fault val[0]: 0x%08x, val[1]: 0x%08x, val[2]: 0x%08x, val[3]: 0x%08x\n",
+ event->event.val[0], event->event.val[1], event->event.val[2],
+ event->event.val[3]);
+
+ fault = &hwdev->hw_stats.fault_event_stats;
+
+ switch (event->type) {
+ case FAULT_TYPE_CHIP:
+ chip_fault_show(hwdev, event);
+ break;
+ case FAULT_TYPE_UCODE:
+ atomic_inc(&fault->fault_type_stat[event->type]);
+
+ sdk_err(hwdev->dev_hdl, "cause_id: %d, core_id: %d, c_id: %d, epc: 0x%08x\n",
+ event->event.ucode.cause_id, event->event.ucode.core_id,
+ event->event.ucode.c_id, event->event.ucode.epc);
+ break;
+ case FAULT_TYPE_MEM_RD_TIMEOUT:
+ case FAULT_TYPE_MEM_WR_TIMEOUT:
+ atomic_inc(&fault->fault_type_stat[event->type]);
+
+ sdk_err(hwdev->dev_hdl, "err_csr_ctrl: 0x%08x, err_csr_data: 0x%08x, ctrl_tab: 0x%08x, mem_index: 0x%08x\n",
+ event->event.mem_timeout.err_csr_ctrl,
+ event->event.mem_timeout.err_csr_data,
+ event->event.mem_timeout.ctrl_tab,
+ event->event.mem_timeout.mem_index);
+ break;
+ case FAULT_TYPE_REG_RD_TIMEOUT:
+ case FAULT_TYPE_REG_WR_TIMEOUT:
+ atomic_inc(&fault->fault_type_stat[event->type]);
+ sdk_err(hwdev->dev_hdl, "err_csr: 0x%08x\n",
+ event->event.reg_timeout.err_csr);
+ break;
+ case FAULT_TYPE_PHY_FAULT:
+ atomic_inc(&fault->fault_type_stat[event->type]);
+ sdk_err(hwdev->dev_hdl, "op_type: %u, port_id: %u, dev_ad: %u, csr_addr: 0x%08x, op_data: 0x%08x\n",
+ event->event.phy_fault.op_type,
+ event->event.phy_fault.port_id,
+ event->event.phy_fault.dev_ad,
+ event->event.phy_fault.csr_addr,
+ event->event.phy_fault.op_data);
+ break;
+ default:
+ break;
+ }
+}
+
+static void hifc_refresh_history_fault(struct hifc_hwdev *hwdev,
+ struct hifc_fault_recover_info *info)
+{
+ if (!hwdev->history_fault_flag) {
+ hwdev->history_fault_flag = true;
+ memcpy(&hwdev->history_fault, info,
+ sizeof(struct hifc_fault_recover_info));
+ } else {
+ if (hwdev->history_fault.fault_lev >= info->fault_lev)
+ memcpy(&hwdev->history_fault, info,
+ sizeof(struct hifc_fault_recover_info));
+ }
+}
+
+static void fault_event_handler(struct hifc_hwdev *hwdev, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hifc_cmd_fault_event *fault_event;
+ struct hifc_event_info event_info;
+ struct hifc_fault_info_node *fault_node;
+
+ if (in_size != sizeof(*fault_event)) {
+ sdk_err(hwdev->dev_hdl, "Invalid fault event report, length: %d, should be %ld.\n",
+ in_size, sizeof(*fault_event));
+ return;
+ }
+
+ fault_event = buf_in;
+ fault_report_show(hwdev, &fault_event->event);
+
+ if (hwdev->event_callback) {
+ event_info.type = HIFC_EVENT_FAULT;
+ memcpy(&event_info.info, &fault_event->event,
+ sizeof(event_info.info));
+
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+ }
+
+ /* refresh history fault info */
+ fault_node = kzalloc(sizeof(*fault_node), GFP_KERNEL);
+ if (!fault_node) {
+ sdk_err(hwdev->dev_hdl, "Malloc fault node memory failed\n");
+ return;
+ }
+
+ if (fault_event->event.type <= FAULT_TYPE_REG_WR_TIMEOUT)
+ fault_node->info.fault_src = fault_event->event.type;
+ else if (fault_event->event.type == FAULT_TYPE_PHY_FAULT)
+ fault_node->info.fault_src = HIFC_FAULT_SRC_HW_PHY_FAULT;
+
+ if (fault_node->info.fault_src == HIFC_FAULT_SRC_HW_MGMT_CHIP)
+ fault_node->info.fault_lev =
+ fault_event->event.event.chip.err_level;
+ else
+ fault_node->info.fault_lev = FAULT_LEVEL_FATAL;
+
+ memcpy(&fault_node->info.fault_data.hw_mgmt, &fault_event->event.event,
+ sizeof(union hifc_fault_hw_mgmt));
+ hifc_refresh_history_fault(hwdev, &fault_node->info);
+
+ down(&hwdev->fault_list_sem);
+ kfree(fault_node);
+ up(&hwdev->fault_list_sem);
+}
+
+static void heartbeat_lost_event_handler(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_fault_info_node *fault_node;
+ struct hifc_event_info event_info = {0};
+
+ atomic_inc(&hwdev->hw_stats.heart_lost_stats);
+ sdk_err(hwdev->dev_hdl, "Heart lost report received, func_id: %d\n",
+ hifc_global_func_id(hwdev));
+
+ if (hwdev->event_callback) {
+ event_info.type = HIFC_EVENT_HEART_LOST;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+ }
+
+ /* refresh history fault info */
+ fault_node = kzalloc(sizeof(*fault_node), GFP_KERNEL);
+ if (!fault_node) {
+ sdk_err(hwdev->dev_hdl, "Malloc fault node memory failed\n");
+ return;
+ }
+
+ fault_node->info.fault_src = HIFC_FAULT_SRC_HOST_HEARTBEAT_LOST;
+ fault_node->info.fault_lev = FAULT_LEVEL_FATAL;
+ hifc_refresh_history_fault(hwdev, &fault_node->info);
+
+ down(&hwdev->fault_list_sem);
+ kfree(fault_node);
+ up(&hwdev->fault_list_sem);
+}
+
+static void sw_watchdog_timeout_info_show(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_mgmt_watchdog_info *watchdog_info;
+ u32 *dump_addr, *reg, stack_len, i, j;
+
+ if (in_size != sizeof(*watchdog_info)) {
+ sdk_err(hwdev->dev_hdl, "Invalid mgmt watchdog report, length: %d, should be %ld.\n",
+ in_size, sizeof(*watchdog_info));
+ return;
+ }
+
+ watchdog_info = buf_in;
+
+ sdk_err(hwdev->dev_hdl, "Mgmt deadloop time: 0x%x 0x%x, task id: 0x%x, sp: 0x%x\n",
+ watchdog_info->curr_time_h, watchdog_info->curr_time_l,
+ watchdog_info->task_id, watchdog_info->sp);
+ sdk_err(hwdev->dev_hdl, "Stack current used: 0x%x, peak used: 0x%x, overflow flag: 0x%x, top: 0x%x, bottom: 0x%x\n",
+ watchdog_info->curr_used, watchdog_info->peak_used,
+ watchdog_info->is_overflow, watchdog_info->stack_top,
+ watchdog_info->stack_bottom);
+
+ sdk_err(hwdev->dev_hdl, "Mgmt pc: 0x%08x, lr: 0x%08x, cpsr:0x%08x\n",
+ watchdog_info->pc, watchdog_info->lr, watchdog_info->cpsr);
+
+ sdk_err(hwdev->dev_hdl, "Mgmt register info\n");
+
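+	/* dump reg[0..11] four per line, then reg[12] on its own line */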
+ for (i = 0; i < 3; i++) {
+ reg = watchdog_info->reg + (u64)(u32)(4 * i);
+ sdk_err(hwdev->dev_hdl, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+ *(reg), *(reg + 1), *(reg + 2), *(reg + 3));
+ }
+
+ sdk_err(hwdev->dev_hdl, "0x%08x\n", watchdog_info->reg[12]);
+
+ if (watchdog_info->stack_actlen <= 1024) {
+ stack_len = watchdog_info->stack_actlen;
+ } else {
+ sdk_err(hwdev->dev_hdl, "Oops stack length: 0x%x is wrong\n",
+ watchdog_info->stack_actlen);
+ stack_len = 1024;
+ }
+
+	sdk_err(hwdev->dev_hdl, "Mgmt dump stack, 16 bytes per line (start from sp)\n");
+ for (i = 0; i < (stack_len / 16); i++) {
+ dump_addr = (u32 *)(watchdog_info->data + ((u64)(u32)(i * 16)));
+ sdk_err(hwdev->dev_hdl, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+ *dump_addr, *(dump_addr + 1), *(dump_addr + 2),
+ *(dump_addr + 3));
+ }
+
+ for (j = 0; j < ((stack_len % 16) / 4); j++) {
+ dump_addr = (u32 *)(watchdog_info->data +
+ ((u64)(u32)(i * 16 + j * 4)));
+ sdk_err(hwdev->dev_hdl, "0x%08x ", *dump_addr);
+ }
+
+ *out_size = sizeof(*watchdog_info);
+ watchdog_info = buf_out;
+ watchdog_info->status = 0;
+}
+
+static void mgmt_watchdog_timeout_event_handler(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_fault_info_node *fault_node;
+
+ sw_watchdog_timeout_info_show(hwdev, buf_in, in_size,
+ buf_out, out_size);
+
+ /* refresh history fault info */
+ fault_node = kzalloc(sizeof(*fault_node), GFP_KERNEL);
+ if (!fault_node) {
+ sdk_err(hwdev->dev_hdl, "Malloc fault node memory failed\n");
+ return;
+ }
+
+ fault_node->info.fault_src = HIFC_FAULT_SRC_MGMT_WATCHDOG;
+ fault_node->info.fault_lev = FAULT_LEVEL_FATAL;
+ hifc_refresh_history_fault(hwdev, &fault_node->info);
+
+ down(&hwdev->fault_list_sem);
+ kfree(fault_node);
+ up(&hwdev->fault_list_sem);
+}
+
+static void mgmt_reset_event_handler(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ sdk_info(hwdev->dev_hdl, "Mgmt is reset\n");
+
+	/* mgmt reset only occurs on hot update or mgmt deadloop;
+	 * on mgmt deadloop, mgmt reports an event with
+	 * mod=0, cmd=0x56 and the fault is reported to the OS,
+	 * so the mgmt reset event doesn't need to report a fault
+	 */
+}
+
+static void hifc_fmw_act_ntc_handler(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_event_info event_info = {0};
+ struct hifc_fmw_act_ntc *notice_info;
+
+ if (in_size != sizeof(*notice_info)) {
+ sdk_err(hwdev->dev_hdl, "Invalid mgmt firmware active notice, length: %d, should be %ld.\n",
+ in_size, sizeof(*notice_info));
+ return;
+ }
+
+ /* mgmt is active now, restart heartbeat enhanced detection */
+ __set_heartbeat_ehd_detect_delay(hwdev, 0);
+
+ if (!hwdev->event_callback)
+ return;
+
+ event_info.type = HIFC_EVENT_FMW_ACT_NTC;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+
+ *out_size = sizeof(*notice_info);
+ notice_info = buf_out;
+ notice_info->status = 0;
+}
+
+static void hifc_pcie_dfx_event_handler(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_pcie_dfx_ntc *notice_info = buf_in;
+ struct hifc_pcie_dfx_info *dfx_info;
+ u16 size = 0;
+ u16 cnt = 0;
+ u32 num = 0;
+ u32 i, j;
+ int err;
+ u32 *reg;
+
+ if (in_size != sizeof(*notice_info)) {
+		sdk_err(hwdev->dev_hdl, "Invalid pcie dfx notice, length: %d, should be %ld.\n",
+ in_size, sizeof(*notice_info));
+ return;
+ }
+
+ dfx_info = kzalloc(sizeof(*dfx_info), GFP_KERNEL);
+ if (!dfx_info) {
+ sdk_err(hwdev->dev_hdl, "Malloc dfx_info memory failed\n");
+ return;
+ }
+
+ ((struct hifc_pcie_dfx_ntc *)buf_out)->status = 0;
+ *out_size = sizeof(*notice_info);
+ num = (u32)(notice_info->len / 1024);
+ sdk_info(hwdev->dev_hdl, "INFO LEN: %d\n", notice_info->len);
+ sdk_info(hwdev->dev_hdl, "PCIE DFX:\n");
+ dfx_info->host_id = 0;
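+	/* fetch PCIE DFX data from mgmt in MAX_PCIE_DFX_BUF_SIZE chunks and dump it as registers */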
+ for (i = 0; i < num; i++) {
+ dfx_info->offset = i * MAX_PCIE_DFX_BUF_SIZE;
+ if (i == (num - 1))
+ dfx_info->last = 1;
+ size = sizeof(*dfx_info);
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_PCIE_DFX_GET,
+ dfx_info, sizeof(*dfx_info),
+ dfx_info, &size, 0);
+ if (err || dfx_info->status || !size) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Failed to get pcie dfx info, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, dfx_info->status, size);
+ kfree(dfx_info);
+ return;
+ }
+
+ reg = (u32 *)dfx_info->data;
+ for (j = 0; j < 256; j = j + 8) {
+ /*lint -save -e661 -e662*/
+ sdk_info(hwdev->dev_hdl, "0x%04x: 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x\n",
+ cnt, reg[j], reg[(u32)(j + 1)],
+ reg[(u32)(j + 2)], reg[(u32)(j + 3)],
+ reg[(u32)(j + 4)], reg[(u32)(j + 5)],
+ reg[(u32)(j + 6)], reg[(u32)(j + 7)]);
+ /*lint -restore*/
+ cnt = cnt + 32;
+ }
+ memset(dfx_info->data, 0, MAX_PCIE_DFX_BUF_SIZE);
+ }
+ kfree(dfx_info);
+}
+
+struct hifc_mctp_get_host_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 huawei_cmd;
+ u8 sub_cmd;
+ u8 rsvd[2];
+
+ u32 actual_len;
+
+ u8 data[1024];
+};
+
+static void hifc_mctp_get_host_info_event_handler(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_event_info event_info = {0};
+ struct hifc_mctp_get_host_info *mctp_out, *mctp_in;
+ struct hifc_mctp_host_info *host_info;
+
+ if (in_size != sizeof(*mctp_in)) {
+ sdk_err(hwdev->dev_hdl, "Invalid mgmt mctp info, length: %d, should be %ld\n",
+ in_size, sizeof(*mctp_in));
+ return;
+ }
+
+ *out_size = sizeof(*mctp_out);
+ mctp_out = buf_out;
+ mctp_out->status = 0;
+
+ if (!hwdev->event_callback) {
+ mctp_out->status = HIFC_MGMT_STATUS_ERR_INIT;
+ return;
+ }
+
+ mctp_in = buf_in;
+ host_info = &event_info.mctp_info;
+ host_info->major_cmd = mctp_in->huawei_cmd;
+ host_info->sub_cmd = mctp_in->sub_cmd;
+ host_info->data = mctp_out->data;
+
+ event_info.type = HIFC_EVENT_MCTP_GET_HOST_INFO;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+
+ mctp_out->actual_len = host_info->data_len;
+}
+
+char *__hw_to_char_fec[HILINK_FEC_MAX_TYPE] = {"RS-FEC", "BASE-FEC", "NO-FEC"};
+
+char *__hw_to_char_port_type[LINK_PORT_MAX_TYPE] = {
+ "Unknown", "Fibre", "Electric", "Direct Attach Copper", "AOC",
+ "Back plane", "BaseT"
+};
+
+static void __get_port_type(struct hifc_hwdev *hwdev,
+ struct hifc_link_info *info, char **port_type)
+{
+ if (info->cable_absent) {
+		sdk_info(hwdev->dev_hdl, "Cable not present\n");
+ return;
+ }
+
+ if (info->port_type < LINK_PORT_MAX_TYPE)
+ *port_type = __hw_to_char_port_type[info->port_type];
+ else
+ sdk_info(hwdev->dev_hdl, "Unknown port type: %u\n",
+ info->port_type);
+ if (info->port_type == LINK_PORT_FIBRE) {
+ if (info->port_sub_type == FIBRE_SUBTYPE_SR)
+ *port_type = "Fibre-SR";
+ else if (info->port_sub_type == FIBRE_SUBTYPE_LR)
+ *port_type = "Fibre-LR";
+ }
+}
+
+static void __print_cable_info(struct hifc_hwdev *hwdev,
+ struct hifc_link_info *info)
+{
+ char tmp_str[512] = {0};
+ char tmp_vendor[17] = {0};
+ char *port_type = "Unknown port type";
+ int i;
+
+ __get_port_type(hwdev, info, &port_type);
+
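+	/* strip trailing spaces from the vendor name before printing */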
+ for (i = sizeof(info->vendor_name) - 1; i >= 0; i--) {
+ if (info->vendor_name[i] == ' ')
+ info->vendor_name[i] = '\0';
+ else
+ break;
+ }
+
+ memcpy(tmp_vendor, info->vendor_name,
+ sizeof(info->vendor_name));
+ snprintf(tmp_str, sizeof(tmp_str) - 1,
+ "Vendor: %s, %s, length: %um, max_speed: %uGbps",
+ tmp_vendor, port_type, info->cable_length,
+ info->cable_max_speed);
+ if (info->port_type == LINK_PORT_FIBRE ||
+ info->port_type == LINK_PORT_AOC) {
+ snprintf(tmp_str, sizeof(tmp_str) - 1,
+ "%s, %s, Temperature: %u", tmp_str,
+ info->sfp_type ? "SFP" : "QSFP", info->cable_temp);
+ if (info->sfp_type) {
+ snprintf(tmp_str, sizeof(tmp_str) - 1,
+ "%s, rx power: %uuW, tx power: %uuW",
+ tmp_str, info->power[0], info->power[1]);
+ } else {
+ snprintf(tmp_str, sizeof(tmp_str) - 1,
+				 "%s, rx power: %uuW %uuW %uuW %uuW",
+ tmp_str, info->power[0], info->power[1],
+ info->power[2], info->power[3]);
+ }
+ }
+
+ sdk_info(hwdev->dev_hdl, "Cable information: %s\n",
+ tmp_str);
+}
+
+static void __hi30_lane_info(struct hifc_hwdev *hwdev,
+ struct hilink_lane *lane)
+{
+ struct hi30_ffe_data *ffe_data;
+ struct hi30_ctle_data *ctle_data;
+
+ ffe_data = (struct hi30_ffe_data *)lane->hi30_ffe;
+ ctle_data = (struct hi30_ctle_data *)lane->hi30_ctle;
+
+ sdk_info(hwdev->dev_hdl, "TX_FFE: PRE1=%s%d; PRE2=%s%d; MAIN=%d; POST1=%s%d; POST1X=%s%d\n",
+ (ffe_data->PRE1 & 0x10) ? "-" : "",
+ (int)(ffe_data->PRE1 & 0xf),
+ (ffe_data->PRE2 & 0x10) ? "-" : "",
+ (int)(ffe_data->PRE2 & 0xf),
+ (int)ffe_data->MAIN,
+ (ffe_data->POST1 & 0x10) ? "-" : "",
+ (int)(ffe_data->POST1 & 0xf),
+ (ffe_data->POST2 & 0x10) ? "-" : "",
+ (int)(ffe_data->POST2 & 0xf));
+ sdk_info(hwdev->dev_hdl, "RX_CTLE: Gain1~3=%u %u %u; Boost1~3=%u %u %u; Zero1~3=%u %u %u; Squelch1~3=%u %u %u\n",
+ ctle_data->ctlebst[0], ctle_data->ctlebst[1],
+ ctle_data->ctlebst[2], ctle_data->ctlecmband[0],
+ ctle_data->ctlecmband[1], ctle_data->ctlecmband[2],
+ ctle_data->ctlermband[0], ctle_data->ctlermband[1],
+ ctle_data->ctlermband[2], ctle_data->ctleza[0],
+ ctle_data->ctleza[1], ctle_data->ctleza[2]);
+}
+
+static void __print_hi30_status(struct hifc_hwdev *hwdev,
+ struct hifc_link_info *info)
+{
+ struct hilink_lane *lane;
+ int lane_used_num = 0, i;
+
+ for (i = 0; i < HILINK_MAX_LANE; i++) {
+ lane = (struct hilink_lane *)(info->lane2 + i * sizeof(*lane));
+ if (!lane->lane_used)
+ continue;
+
+ __hi30_lane_info(hwdev, lane);
+ lane_used_num++;
+ }
+
+	/* in new firmware, all lane info is set in lane2 */
+ if (lane_used_num)
+ return;
+
+	/* compatible with old firmware */
+ __hi30_lane_info(hwdev, (struct hilink_lane *)info->lane1);
+}
+
+static void __print_link_info(struct hifc_hwdev *hwdev,
+ struct hifc_link_info *info,
+ enum hilink_info_print_event type)
+{
+ char *fec = "None";
+
+ if (info->fec < HILINK_FEC_MAX_TYPE)
+ fec = __hw_to_char_fec[info->fec];
+ else
+ sdk_info(hwdev->dev_hdl, "Unknown fec type: %u\n",
+ info->fec);
+
+ if (type == HILINK_EVENT_LINK_UP || !info->an_state) {
+ sdk_info(hwdev->dev_hdl, "Link information: speed %dGbps, %s, autoneg %s\n",
+ info->speed, fec, info->an_state ? "on" : "off");
+ } else {
+		sdk_info(hwdev->dev_hdl, "Link information: autoneg: %s\n",
+ info->an_state ? "on" : "off");
+ }
+}
+
+static char *hilink_info_report_type[HILINK_EVENT_MAX_TYPE] = {
+ "", "link up", "link down", "cable plugged"
+};
+
+void print_hilink_info(struct hifc_hwdev *hwdev,
+ enum hilink_info_print_event type,
+ struct hifc_link_info *info)
+{
+ __print_cable_info(hwdev, info);
+
+ __print_link_info(hwdev, info, type);
+
+ __print_hi30_status(hwdev, info);
+
+ if (type == HILINK_EVENT_LINK_UP)
+ return;
+
+ if (type == HILINK_EVENT_CABLE_PLUGGED) {
+ sdk_info(hwdev->dev_hdl, "alos: %u, rx_los: %u\n",
+ info->alos, info->rx_los);
+ return;
+ }
+
+ sdk_info(hwdev->dev_hdl, "PMA ctrl: %s, MAC tx %s, MAC rx %s, PMA debug info reg: 0x%x, PMA signal ok reg: 0x%x, RF/LF status reg: 0x%x\n",
+ info->pma_status == 1 ? "off" : "on",
+ info->mac_tx_en ? "enable" : "disable",
+ info->mac_rx_en ? "enable" : "disable", info->pma_dbg_info_reg,
+ info->pma_signal_ok_reg, info->rf_lf_status_reg);
+ sdk_info(hwdev->dev_hdl, "alos: %u, rx_los: %u, PCS block counter reg: 0x%x, PCS link: 0x%x, MAC link: 0x%x PCS_err_cnt: 0x%x\n",
+ info->alos, info->rx_los, info->pcs_err_blk_cnt_reg,
+ info->pcs_link_reg, info->mac_link_reg, info->pcs_err_cnt);
+}
+
+static void hifc_print_hilink_info(struct hifc_hwdev *hwdev, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hifc_hilink_link_info *hilink_info = buf_in;
+ struct hifc_link_info *info;
+ enum hilink_info_print_event type;
+
+ if (in_size != sizeof(*hilink_info)) {
+ sdk_err(hwdev->dev_hdl, "Invalid hilink info message size %d, should be %ld\n",
+ in_size, sizeof(*hilink_info));
+ return;
+ }
+
+ ((struct hifc_hilink_link_info *)buf_out)->status = 0;
+ *out_size = sizeof(*hilink_info);
+
+ info = &hilink_info->info;
+ type = hilink_info->info_type;
+
+ if (type < HILINK_EVENT_LINK_UP || type >= HILINK_EVENT_MAX_TYPE) {
+ sdk_info(hwdev->dev_hdl, "Invalid hilink info report, type: %d\n",
+ type);
+ return;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Hilink info report after %s\n",
+ hilink_info_report_type[type]);
+
+ print_hilink_info(hwdev, type, info);
+}
+
+static void __port_sfp_info_event(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_cmd_get_sfp_qsfp_info *sfp_info = buf_in;
+ struct hifc_port_routine_cmd *rt_cmd;
+ struct card_node *chip_node = hwdev->chip_node;
+
+ if (in_size != sizeof(*sfp_info)) {
+ sdk_err(hwdev->dev_hdl, "Invalid sfp info cmd, length: %d, should be %ld\n",
+ in_size, sizeof(*sfp_info));
+ return;
+ }
+
+ if (sfp_info->port_id >= HIFC_MAX_PORT_ID) {
+ sdk_err(hwdev->dev_hdl, "Invalid sfp port id: %d, max port is %d\n",
+ sfp_info->port_id, HIFC_MAX_PORT_ID - 1);
+ return;
+ }
+
+ if (!chip_node->rt_cmd)
+ return;
+
+ rt_cmd = &chip_node->rt_cmd[sfp_info->port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ memcpy(&rt_cmd->sfp_info, sfp_info, sizeof(rt_cmd->sfp_info));
+ rt_cmd->up_send_sfp_info = true;
+ mutex_unlock(&chip_node->sfp_mutex);
+}
+
+static void __port_sfp_abs_event(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_cmd_get_light_module_abs *sfp_abs = buf_in;
+ struct hifc_port_routine_cmd *rt_cmd;
+ struct card_node *chip_node = hwdev->chip_node;
+
+ if (in_size != sizeof(*sfp_abs)) {
+ sdk_err(hwdev->dev_hdl, "Invalid sfp absent cmd, length: %d, should be %ld\n",
+ in_size, sizeof(*sfp_abs));
+ return;
+ }
+
+ if (sfp_abs->port_id >= HIFC_MAX_PORT_ID) {
+ sdk_err(hwdev->dev_hdl, "Invalid sfp port id: %d, max port is %d\n",
+ sfp_abs->port_id, HIFC_MAX_PORT_ID - 1);
+ return;
+ }
+
+ if (!chip_node->rt_cmd)
+ return;
+
+ rt_cmd = &chip_node->rt_cmd[sfp_abs->port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ memcpy(&rt_cmd->abs, sfp_abs, sizeof(rt_cmd->abs));
+ rt_cmd->up_send_sfp_abs = true;
+ mutex_unlock(&chip_node->sfp_mutex);
+}
+
+static void mgmt_heartbeat_enhanced_event(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_heartbeat_event *hb_event = buf_in;
+ struct hifc_heartbeat_event *hb_event_out = buf_out;
+ struct hifc_hwdev *dev = hwdev;
+
+ if (in_size != sizeof(*hb_event)) {
+ sdk_err(dev->dev_hdl, "Invalid data size from mgmt for heartbeat event: %d\n",
+ in_size);
+ return;
+ }
+
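+	/* record when the heartbeat counter last changed; used by enhanced loss detection */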
+ if (dev->heartbeat_ehd.last_heartbeat != hb_event->heart) {
+ dev->heartbeat_ehd.last_update_jiffies = jiffies;
+ dev->heartbeat_ehd.last_heartbeat = hb_event->heart;
+ }
+
+ hb_event_out->drv_heart = HEARTBEAT_DRV_MAGIC_ACK;
+
+ hb_event_out->status = 0;
+ *out_size = sizeof(*hb_event_out);
+}
+
+struct dev_event_handler {
+ u8 mod;
+ u8 cmd;
+ void (*handler)(struct hifc_hwdev *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+};
+
+struct dev_event_handler dev_cmd_handler[] = {
+ {
+ .mod = HIFC_MOD_L2NIC,
+ .cmd = HIFC_PORT_CMD_GET_SFP_INFO,
+ .handler = __port_sfp_info_event,
+ },
+
+ {
+ .mod = HIFC_MOD_L2NIC,
+ .cmd = HIFC_PORT_CMD_GET_SFP_ABS,
+ .handler = __port_sfp_abs_event,
+ },
+
+ {
+ .mod = HIFC_MOD_HILINK,
+ .cmd = HIFC_HILINK_CMD_GET_LINK_INFO,
+ .handler = hifc_print_hilink_info,
+ },
+
+ {
+ .mod = HIFC_MOD_COMM,
+ .cmd = HIFC_MGMT_CMD_FAULT_REPORT,
+ .handler = fault_event_handler,
+ },
+
+ {
+ .mod = HIFC_MOD_L2NIC,
+ .cmd = HIFC_MGMT_CMD_HEART_LOST_REPORT,
+ .handler = heartbeat_lost_event_handler,
+ },
+
+ {
+ .mod = HIFC_MOD_COMM,
+ .cmd = HIFC_MGMT_CMD_WATCHDOG_INFO,
+ .handler = mgmt_watchdog_timeout_event_handler,
+ },
+
+ {
+ .mod = HIFC_MOD_L2NIC,
+ .cmd = HIFC_PORT_CMD_MGMT_RESET,
+ .handler = mgmt_reset_event_handler,
+ },
+
+ {
+ .mod = HIFC_MOD_COMM,
+ .cmd = HIFC_MGMT_CMD_FMW_ACT_NTC,
+ .handler = hifc_fmw_act_ntc_handler,
+ },
+
+ {
+ .mod = HIFC_MOD_COMM,
+ .cmd = HIFC_MGMT_CMD_PCIE_DFX_NTC,
+ .handler = hifc_pcie_dfx_event_handler,
+ },
+
+ {
+ .mod = HIFC_MOD_COMM,
+ .cmd = HIFC_MGMT_CMD_GET_HOST_INFO,
+ .handler = hifc_mctp_get_host_info_event_handler,
+ },
+
+ {
+ .mod = HIFC_MOD_COMM,
+ .cmd = HIFC_MGMT_CMD_HEARTBEAT_EVENT,
+ .handler = mgmt_heartbeat_enhanced_event,
+ },
+};
+
+/* public process for this event:
+ * pf link change event
+ * pf heart lost event, TBD
+ * pf fault report event
+ * vf link change event
+ * vf heart lost event, TBD
+ * vf fault report event, TBD
+ */
+static void _event_handler(struct hifc_hwdev *hwdev, enum hifc_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 i, size = sizeof(dev_cmd_handler) / sizeof(dev_cmd_handler[0]);
+
+ if (!hwdev)
+ return;
+
+ *out_size = 0;
+
+ for (i = 0; i < size; i++) {
+ if (cmd == dev_cmd_handler[i].cmd &&
+ mod == dev_cmd_handler[i].mod) {
+ dev_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ }
+ }
+
+ /* can't find this event cmd */
+ if (i == size)
+ sdk_warn(hwdev->dev_hdl, "Unsupported mod(%d) event cmd(%d) to process\n",
+ mod, cmd);
+}
+
+/* pf link change event */
+static void pf_nic_event_handler(void *hwdev, void *pri_handle, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, HIFC_MOD_L2NIC, cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+static void pf_hilink_event_handler(void *hwdev, void *pri_handle, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, HIFC_MOD_HILINK, cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+/* pf fault report event */
+void pf_fault_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, HIFC_MOD_COMM, HIFC_MGMT_CMD_FAULT_REPORT,
+ buf_in, in_size, buf_out, out_size);
+}
+
+void mgmt_watchdog_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, HIFC_MOD_COMM, HIFC_MGMT_CMD_WATCHDOG_INFO,
+ buf_in, in_size, buf_out, out_size);
+}
+
+void mgmt_fmw_act_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, HIFC_MOD_COMM, HIFC_MGMT_CMD_FMW_ACT_NTC,
+ buf_in, in_size, buf_out, out_size);
+}
+
+void mgmt_pcie_dfx_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, HIFC_MOD_COMM, HIFC_MGMT_CMD_PCIE_DFX_NTC,
+ buf_in, in_size, buf_out, out_size);
+}
+
+void mgmt_get_mctp_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, HIFC_MOD_COMM, HIFC_MGMT_CMD_GET_HOST_INFO,
+ buf_in, in_size, buf_out, out_size);
+}
+
+void mgmt_heartbeat_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, HIFC_MOD_COMM, HIFC_MGMT_CMD_HEARTBEAT_EVENT,
+ buf_in, in_size, buf_out, out_size);
+}
+
+static void pf_event_register(struct hifc_hwdev *hwdev)
+{
+ if (hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_MGMT_INITED)) {
+ hifc_register_mgmt_msg_cb(hwdev, HIFC_MOD_L2NIC,
+ hwdev, pf_nic_event_handler);
+ hifc_register_mgmt_msg_cb(hwdev, HIFC_MOD_HILINK,
+ hwdev,
+ pf_hilink_event_handler);
+ hifc_comm_recv_mgmt_self_cmd_reg(hwdev,
+ HIFC_MGMT_CMD_FAULT_REPORT,
+ pf_fault_event_handler);
+
+ hifc_comm_recv_mgmt_self_cmd_reg(hwdev,
+ HIFC_MGMT_CMD_WATCHDOG_INFO,
+ mgmt_watchdog_event_handler);
+
+ hifc_comm_recv_mgmt_self_cmd_reg(hwdev,
+ HIFC_MGMT_CMD_FMW_ACT_NTC,
+ mgmt_fmw_act_event_handler);
+ hifc_comm_recv_mgmt_self_cmd_reg(hwdev,
+ HIFC_MGMT_CMD_PCIE_DFX_NTC,
+ mgmt_pcie_dfx_event_handler);
+ hifc_comm_recv_mgmt_self_cmd_reg(hwdev,
+ HIFC_MGMT_CMD_GET_HOST_INFO,
+ mgmt_get_mctp_event_handler);
+ }
+}
+
+void hifc_event_register(void *dev, void *pri_handle,
+ hifc_event_handler callback)
+{
+ struct hifc_hwdev *hwdev = dev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for register event\n");
+ return;
+ }
+
+ hwdev->event_callback = callback;
+ hwdev->event_pri_handle = pri_handle;
+
+ pf_event_register(hwdev);
+}
+
+void hifc_event_unregister(void *dev)
+{
+ struct hifc_hwdev *hwdev = dev;
+
+ hwdev->event_callback = NULL;
+ hwdev->event_pri_handle = NULL;
+
+ hifc_unregister_mgmt_msg_cb(hwdev, HIFC_MOD_L2NIC);
+ hifc_unregister_mgmt_msg_cb(hwdev, HIFC_MOD_HILINK);
+ hifc_comm_recv_up_self_cmd_unreg(hwdev,
+ HIFC_MGMT_CMD_FAULT_REPORT);
+ hifc_comm_recv_up_self_cmd_unreg(hwdev,
+ HIFC_MGMT_CMD_WATCHDOG_INFO);
+ hifc_comm_recv_up_self_cmd_unreg(hwdev,
+ HIFC_MGMT_CMD_FMW_ACT_NTC);
+ hifc_comm_recv_up_self_cmd_unreg(hwdev,
+ HIFC_MGMT_CMD_PCIE_DFX_NTC);
+ hifc_comm_recv_up_self_cmd_unreg(hwdev,
+ HIFC_MGMT_CMD_GET_HOST_INFO);
+}
+
+/* 0 - heartbeat lost, 1 - normal */
+static u8 hifc_get_heartbeat_status(struct hifc_hwdev *hwdev)
+{
+ struct hifc_hwif *hwif = hwdev->hwif;
+ u32 attr1;
+
+	/* surprise remove should be set to 1 */
+ if (!hifc_get_chip_present_flag(hwdev))
+ return 1;
+
+ attr1 = hifc_hwif_read_reg(hwif, HIFC_CSR_FUNC_ATTR1_ADDR);
+ if (attr1 == HIFC_PCIE_LINK_DOWN) {
+ sdk_err(hwdev->dev_hdl, "Detect pcie is link down\n");
+ hifc_set_chip_absent(hwdev);
+ hifc_force_complete_all(hwdev);
+ /* should notify chiperr to pangea
+ * when detecting pcie link down
+ */
+ return 1;
+ }
+
+ return HIFC_AF1_GET(attr1, MGMT_INIT_STATUS);
+}
+
+static void hifc_heartbeat_event_handler(struct work_struct *work)
+{
+ struct hifc_hwdev *hwdev =
+ container_of(work, struct hifc_hwdev, timer_work);
+ u16 out = 0;
+
+ _event_handler(hwdev, HIFC_MOD_L2NIC, HIFC_MGMT_CMD_HEART_LOST_REPORT,
+ NULL, 0, &out, &out);
+}
+
+static bool __detect_heartbeat_ehd_lost(struct hifc_hwdev *hwdev)
+{
+ struct hifc_heartbeat_enhanced *hb_ehd = &hwdev->heartbeat_ehd;
+ u64 update_time;
+ bool hb_ehd_lost = false;
+
+ if (!hb_ehd->en)
+ return false;
+
+ if (time_after(jiffies, hb_ehd->start_detect_jiffies)) {
+ update_time = jiffies_to_msecs(jiffies -
+ hb_ehd->last_update_jiffies);
+ if (update_time > HIFC_HEARBEAT_ENHANCED_LOST) {
+			sdk_warn(hwdev->dev_hdl, "Heartbeat enhanced lost for %d milliseconds\n",
+ (u32)update_time);
+ hb_ehd_lost = true;
+ }
+ } else {
+		/* mgmt may not report heartbeat enhanced event and won't
+ * update last_update_jiffies
+ */
+ hb_ehd->last_update_jiffies = jiffies;
+ }
+
+ return hb_ehd_lost;
+}
+
+static void hifc_heartbeat_timer_handler(struct timer_list *t)
+{
+ struct hifc_hwdev *hwdev = from_timer(hwdev, t, heartbeat_timer);
+
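+	/* on heartbeat loss, hand off to the workqueue; otherwise re-arm the periodic timer */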
+ if (__detect_heartbeat_ehd_lost(hwdev) ||
+ !hifc_get_heartbeat_status(hwdev)) {
+ hwdev->heartbeat_lost = 1;
+ queue_work(hwdev->workq, &hwdev->timer_work);
+ } else {
+ mod_timer(&hwdev->heartbeat_timer,
+ jiffies + msecs_to_jiffies(HIFC_HEARTBEAT_PERIOD));
+ }
+}
+
+void add_to_timer(struct timer_list *timer, long period)
+{
+ if (!timer)
+ return;
+
+ add_timer(timer);
+}
+
+void delete_timer(struct timer_list *timer)
+{
+ if (!timer)
+ return;
+
+ del_timer_sync(timer);
+}
+
+void hifc_init_heartbeat(struct hifc_hwdev *hwdev)
+{
+ timer_setup(&hwdev->heartbeat_timer, hifc_heartbeat_timer_handler, 0);
+ hwdev->heartbeat_timer.expires =
+ jiffies + msecs_to_jiffies(HIFC_HEARTBEAT_START_EXPIRE);
+
+ add_to_timer(&hwdev->heartbeat_timer, HIFC_HEARTBEAT_PERIOD);
+
+ INIT_WORK(&hwdev->timer_work, hifc_heartbeat_event_handler);
+}
+
+void hifc_destroy_heartbeat(struct hifc_hwdev *hwdev)
+{
+ delete_timer(&hwdev->heartbeat_timer);
+}
+
+u8 hifc_nic_sw_aeqe_handler(void *handle, u8 event, u64 data)
+{
+ struct hifc_hwdev *hwdev = (struct hifc_hwdev *)handle;
+ u8 event_level = FAULT_LEVEL_MAX;
+
+ switch (event) {
+ case HIFC_INTERNAL_TSO_FATAL_ERROR:
+ case HIFC_INTERNAL_LRO_FATAL_ERROR:
+ case HIFC_INTERNAL_TX_FATAL_ERROR:
+ case HIFC_INTERNAL_RX_FATAL_ERROR:
+ case HIFC_INTERNAL_OTHER_FATAL_ERROR:
+ atomic_inc(&hwdev->hw_stats.nic_ucode_event_stats[event]);
+ sdk_err(hwdev->dev_hdl, "SW aeqe event type: 0x%x, data: 0x%llx\n",
+ event, data);
+ event_level = FAULT_LEVEL_FATAL;
+ break;
+ default:
+ sdk_err(hwdev->dev_hdl, "Unsupported sw event %d to process.\n",
+ event);
+ }
+
+ return event_level;
+}
+
+void hifc_set_pcie_order_cfg(void *handle)
+{
+ struct hifc_hwdev *hwdev = handle;
+ u32 val;
+
+ if (!hwdev)
+ return;
+
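+	/* if SO/RO replacement is currently enabled, overwrite it with the disable setting */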
+ val = hifc_hwif_read_reg(hwdev->hwif,
+ HIFC_GLB_DMA_SO_RO_REPLACE_ADDR);
+
+ if (HIFC_GLB_DMA_SO_RO_GET(val, SO_RO_CFG)) {
+ val = HIFC_GLB_DMA_SO_R0_CLEAR(val, SO_RO_CFG);
+ val |= HIFC_GLB_DMA_SO_R0_SET(HIFC_DISABLE_ORDER, SO_RO_CFG);
+ hifc_hwif_write_reg(hwdev->hwif,
+ HIFC_GLB_DMA_SO_RO_REPLACE_ADDR, val);
+ }
+}
+
+int hifc_get_board_info(void *hwdev, struct hifc_board_info *info)
+{
+ struct hifc_comm_board_info board_info = {0};
+ u16 out_size = sizeof(board_info);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_GET_BOARD_INFO,
+ &board_info, sizeof(board_info),
+ &board_info, &out_size, 0);
+ if (err || board_info.status || !out_size) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Failed to get board info, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, board_info.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(info, &board_info.info, sizeof(*info));
+
+ return 0;
+}
+
+int hifc_get_phy_init_status(void *hwdev,
+ enum phy_init_status_type *init_status)
+{
+ struct hifc_phy_init_status phy_info = {0};
+ u16 out_size = sizeof(phy_info);
+ int err;
+
+ if (!hwdev || !init_status)
+ return -EINVAL;
+
+ err = hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_GET_PHY_INIT_STATUS,
+ &phy_info, sizeof(phy_info),
+ &phy_info, &out_size, 0);
+ if ((phy_info.status != HIFC_MGMT_CMD_UNSUPPORTED &&
+ phy_info.status) || err || !out_size) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Failed to get phy info, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, phy_info.status, out_size);
+ return -EFAULT;
+ }
+
+ *init_status = phy_info.init_status;
+
+ return phy_info.status;
+}
+
+int hifc_phy_init_status_judge(void *hwdev)
+{
+ enum phy_init_status_type init_status;
+ int ret;
+ unsigned long end;
+
+ /* It's not a phy, so don't judge phy status */
+ if (!HIFC_BOARD_IS_PHY((struct hifc_hwdev *)hwdev))
+ return 0;
+
+ end = jiffies + msecs_to_jiffies(PHY_DOING_INIT_TIMEOUT);
+ do {
+ ret = hifc_get_phy_init_status(hwdev, &init_status);
+ if (ret == HIFC_MGMT_CMD_UNSUPPORTED)
+ return 0;
+ else if (ret)
+ return -EFAULT;
+
+ switch (init_status) {
+ case PHY_INIT_SUCCESS:
+ sdk_info(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Phy init is success\n");
+ return 0;
+ case PHY_NONSUPPORT:
+ sdk_info(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Phy init is nonsupport\n");
+ return 0;
+ case PHY_INIT_FAIL:
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Phy init is failed\n");
+ return -EIO;
+ case PHY_INIT_DOING:
+ msleep(250);
+ break;
+ default:
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Phy init is invalid, init_status: %d\n",
+ init_status);
+ return -EINVAL;
+ }
+ } while (time_before(jiffies, end));
+
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Phy init is timeout\n");
+
+ return -ETIMEDOUT;
+}
+
+int hifc_get_mgmt_channel_status(void *handle)
+{
+ struct hifc_hwdev *hwdev = handle;
+ u32 val;
+
+ if (!hwdev)
+ return true;
+
+ if (hifc_func_type(hwdev) == TYPE_VF ||
+ !(hwdev->feature_cap & HIFC_FUNC_SUPP_DFX_REG))
+ return false;
+
+ val = hifc_hwif_read_reg(hwdev->hwif, HIFC_ICPL_RESERVD_ADDR);
+
+ return HIFC_GET_MGMT_CHANNEL_STATUS(val, MGMT_CHANNEL_STATUS);
+}
+
+#define HIFC_RED_REG_TIME_OUT 3000
+
+int hifc_read_reg(void *hwdev, u32 reg_addr, u32 *val)
+{
+ struct hifc_reg_info reg_info = {0};
+ u16 out_size = sizeof(reg_info);
+ int err;
+
+ if (!hwdev || !val)
+ return -EINVAL;
+
+ reg_info.reg_addr = reg_addr;
+ reg_info.val_length = sizeof(u32);
+
+ err = hifc_pf_msg_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
+ HIFC_MGMT_CMD_REG_READ,
+				       &reg_info, sizeof(reg_info),
+				       &reg_info, &out_size,
+ HIFC_RED_REG_TIME_OUT);
+ if (reg_info.status || err || !out_size) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Failed to read reg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, reg_info.status, out_size);
+ return -EFAULT;
+ }
+
+ *val = reg_info.data[0];
+
+ return 0;
+}
+
+void hifc_swe_fault_handler(struct hifc_hwdev *hwdev, u8 level,
+ u8 event, u64 val)
+{
+ struct hifc_fault_info_node *fault_node;
+
+ if (level < FAULT_LEVEL_MAX) {
+ fault_node = kzalloc(sizeof(*fault_node), GFP_KERNEL);
+ if (!fault_node) {
+ sdk_err(hwdev->dev_hdl, "Malloc fault node memory failed\n");
+ return;
+ }
+
+ fault_node->info.fault_src = HIFC_FAULT_SRC_SW_MGMT_UCODE;
+ fault_node->info.fault_lev = level;
+ fault_node->info.fault_data.sw_mgmt.event_id = event;
+ fault_node->info.fault_data.sw_mgmt.event_data = val;
+ hifc_refresh_history_fault(hwdev, &fault_node->info);
+
+ down(&hwdev->fault_list_sem);
+ kfree(fault_node);
+ up(&hwdev->fault_list_sem);
+ }
+}
+
+void hifc_set_func_deinit_flag(void *hwdev)
+{
+ struct hifc_hwdev *dev = hwdev;
+
+ set_bit(HIFC_HWDEV_FUNC_DEINIT, &dev->func_state);
+}
+
+int hifc_get_card_present_state(void *hwdev, bool *card_present_state)
+{
+ u32 addr, attr1;
+
+ if (!hwdev || !card_present_state)
+ return -EINVAL;
+
+ addr = HIFC_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hifc_hwif_read_reg(((struct hifc_hwdev *)hwdev)->hwif, addr);
+ if (attr1 == HIFC_PCIE_LINK_DOWN) {
+ sdk_warn(((struct hifc_hwdev *)hwdev)->dev_hdl, "Card is not present\n");
+ *card_present_state = (bool)0;
+ } else {
+ *card_present_state = (bool)1;
+ }
+
+ return 0;
+}
+
+void hifc_disable_mgmt_msg_report(void *hwdev)
+{
+ struct hifc_hwdev *hw_dev = (struct hifc_hwdev *)hwdev;
+
+ hifc_set_pf_status(hw_dev->hwif, HIFC_PF_STATUS_INIT);
+}
+
diff --git a/drivers/scsi/huawei/hifc/hifc_hwdev.h b/drivers/scsi/huawei/hifc/hifc_hwdev.h
new file mode 100644
index 000000000000..6ebf59b31fb8
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_hwdev.h
@@ -0,0 +1,456 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef HIFC_HWDEV_H_
+#define HIFC_HWDEV_H_
+
+/* to use 0-level CLA, page size must be: 64B(wqebb) * 4096(max_q_depth) */
+#define HIFC_DEFAULT_WQ_PAGE_SIZE 0x40000
+#define HIFC_HW_WQ_PAGE_SIZE 0x1000
+
+#define HIFC_MSG_TO_MGMT_MAX_LEN 2016
+
+#define HIFC_MGMT_STATUS_ERR_OK 0 /* Ok */
+#define HIFC_MGMT_STATUS_ERR_PARAM 1 /* Invalid parameter */
+#define HIFC_MGMT_STATUS_ERR_FAILED 2 /* Operation failed */
+#define HIFC_MGMT_STATUS_ERR_PORT 3 /* Invalid port */
+#define HIFC_MGMT_STATUS_ERR_TIMEOUT 4 /* Operation time out */
+#define HIFC_MGMT_STATUS_ERR_NOMATCH 5 /* Version not match */
+#define HIFC_MGMT_STATUS_ERR_EXIST 6 /* Entry exists */
+#define HIFC_MGMT_STATUS_ERR_NOMEM 7 /* Out of memory */
+#define HIFC_MGMT_STATUS_ERR_INIT 8 /* Feature not initialized */
+#define HIFC_MGMT_STATUS_ERR_FAULT 9 /* Invalid address */
+#define HIFC_MGMT_STATUS_ERR_PERM 10 /* Operation not permitted */
+#define HIFC_MGMT_STATUS_ERR_EMPTY 11 /* Table empty */
+#define HIFC_MGMT_STATUS_ERR_FULL 12 /* Table full */
+#define HIFC_MGMT_STATUS_ERR_NOT_FOUND 13 /* Not found */
+#define HIFC_MGMT_STATUS_ERR_BUSY 14 /* Device or resource busy */
+#define HIFC_MGMT_STATUS_ERR_RESOURCE 15 /* No resources for operation */
+#define HIFC_MGMT_STATUS_ERR_CONFIG 16 /* Invalid configuration */
+#define HIFC_MGMT_STATUS_ERR_UNAVAIL 17 /* Feature unavailable */
+#define HIFC_MGMT_STATUS_ERR_CRC 18 /* CRC check failed */
+#define HIFC_MGMT_STATUS_ERR_NXIO 19 /* No such device or address */
+#define HIFC_MGMT_STATUS_ERR_ROLLBACK 20 /* Chip rollback fail */
+#define HIFC_MGMT_STATUS_ERR_LEN 32 /* Length too short or too long */
+#define HIFC_MGMT_STATUS_ERR_UNSUPPORT 0xFF /* Feature not supported*/
+/* Qe buffer related defines */
+
+enum hifc_rx_buf_size {
+ HIFC_RX_BUF_SIZE_32B = 0x20,
+ HIFC_RX_BUF_SIZE_64B = 0x40,
+ HIFC_RX_BUF_SIZE_96B = 0x60,
+ HIFC_RX_BUF_SIZE_128B = 0x80,
+ HIFC_RX_BUF_SIZE_192B = 0xC0,
+ HIFC_RX_BUF_SIZE_256B = 0x100,
+ HIFC_RX_BUF_SIZE_384B = 0x180,
+ HIFC_RX_BUF_SIZE_512B = 0x200,
+ HIFC_RX_BUF_SIZE_768B = 0x300,
+ HIFC_RX_BUF_SIZE_1K = 0x400,
+ HIFC_RX_BUF_SIZE_1_5K = 0x600,
+ HIFC_RX_BUF_SIZE_2K = 0x800,
+ HIFC_RX_BUF_SIZE_3K = 0xC00,
+ HIFC_RX_BUF_SIZE_4K = 0x1000,
+ HIFC_RX_BUF_SIZE_8K = 0x2000,
+ HIFC_RX_BUF_SIZE_16K = 0x4000,
+};
+
+enum hifc_res_state {
+ HIFC_RES_CLEAN = 0,
+ HIFC_RES_ACTIVE = 1,
+};
+
+enum ppf_tmr_status {
+ HIFC_PPF_TMR_FLAG_STOP,
+ HIFC_PPF_TMR_FLAG_START,
+};
+
+struct cfg_mgmt_info;
+struct hifc_hwif;
+struct hifc_wqs;
+struct hifc_aeqs;
+struct hifc_ceqs;
+struct hifc_msg_pf_to_mgmt;
+struct hifc_cmdqs;
+
+struct hifc_root_ctxt {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u16 rsvd1;
+ u8 set_cmdq_depth;
+ u8 cmdq_depth;
+ u8 lro_en;
+ u8 rsvd2;
+ u8 ppf_idx;
+ u8 rsvd3;
+ u16 rq_depth;
+ u16 rx_buf_sz;
+ u16 sq_depth;
+};
+
+struct hifc_page_addr {
+ void *virt_addr;
+ u64 phys_addr;
+};
+
+#define HIFC_PCIE_LINK_DOWN 0xFFFFFFFF
+
+#define HIFC_DEV_ACTIVE_FW_TIMEOUT (35 * 1000)
+#define HIFC_DEV_BUSY_ACTIVE_FW 0xFE
+
+#define HIFC_HW_WQ_NAME "hifc_hardware"
+#define HIFC_HEARTBEAT_PERIOD 1000
+#define HIFC_HEARTBEAT_START_EXPIRE 5000
+
+#define HIFC_CHIP_ERROR_TYPE_MAX 1024
+#define HIFC_CHIP_FAULT_SIZE \
+ (HIFC_NODE_ID_MAX * FAULT_LEVEL_MAX * HIFC_CHIP_ERROR_TYPE_MAX)
+
+#define HIFC_CSR_DMA_ATTR_TBL_BASE 0xC80
+#define HIFC_CSR_DMA_ATTR_TBL_STRIDE 0x4
+#define HIFC_CSR_DMA_ATTR_TBL_ADDR(idx) \
+ (HIFC_CSR_DMA_ATTR_TBL_BASE \
+ + (idx) * HIFC_CSR_DMA_ATTR_TBL_STRIDE)
+
+/* MSI-X registers */
+#define HIFC_CSR_MSIX_CNT_BASE 0x2004
+#define HIFC_CSR_MSIX_STRIDE 0x8
+
+#define HIFC_CSR_MSIX_CNT_ADDR(idx) \
+ (HIFC_CSR_MSIX_CNT_BASE + (idx) * HIFC_CSR_MSIX_STRIDE)
+
+enum hifc_node_id {
+ HIFC_NODE_ID_IPSU = 4,
+	HIFC_NODE_ID_MGMT_HOST = 21, /* Host CPU sends API to uP */
+ HIFC_NODE_ID_MAX = 22
+};
+
+#define HIFC_HWDEV_INIT_MODES_MASK ((1UL << HIFC_HWDEV_ALL_INITED) - 1)
+
+enum hifc_hwdev_func_state {
+ HIFC_HWDEV_FUNC_INITED = HIFC_HWDEV_ALL_INITED,
+ HIFC_HWDEV_FUNC_DEINIT,
+ HIFC_HWDEV_STATE_BUSY = 31,
+};
+
+struct hifc_cqm_stats {
+ atomic_t cqm_cmd_alloc_cnt;
+ atomic_t cqm_cmd_free_cnt;
+ atomic_t cqm_send_cmd_box_cnt;
+ atomic_t cqm_db_addr_alloc_cnt;
+ atomic_t cqm_db_addr_free_cnt;
+ atomic_t cqm_fc_srq_create_cnt;
+ atomic_t cqm_qpc_mpt_create_cnt;
+ atomic_t cqm_nonrdma_queue_create_cnt;
+ atomic_t cqm_qpc_mpt_delete_cnt;
+ atomic_t cqm_nonrdma_queue_delete_cnt;
+ atomic_t cqm_aeq_callback_cnt[112];
+};
+
+struct hifc_link_event_stats {
+ atomic_t link_down_stats;
+ atomic_t link_up_stats;
+};
+
+struct hifc_fault_event_stats {
+ atomic_t chip_fault_stats[HIFC_NODE_ID_MAX][FAULT_LEVEL_MAX];
+ atomic_t fault_type_stat[FAULT_TYPE_MAX];
+ atomic_t pcie_fault_stats;
+};
+
+struct hifc_hw_stats {
+ atomic_t heart_lost_stats;
+ atomic_t nic_ucode_event_stats[HIFC_NIC_FATAL_ERROR_MAX];
+ struct hifc_cqm_stats cqm_stats;
+ struct hifc_link_event_stats link_event_stats;
+ struct hifc_fault_event_stats fault_event_stats;
+};
+
+struct hifc_fault_info_node {
+ struct list_head list;
+ struct hifc_hwdev *hwdev;
+ struct hifc_fault_recover_info info;
+};
+
+enum heartbeat_support_state {
+ HEARTBEAT_NOT_SUPPORT = 0,
+ HEARTBEAT_SUPPORT,
+};
+
+/* 25s for at most 5 lost heartbeat events */
+#define HIFC_HEARBEAT_ENHANCED_LOST 25000
+struct hifc_heartbeat_enhanced {
+ bool en; /* enable enhanced heartbeat or not */
+
+ unsigned long last_update_jiffies;
+ u32 last_heartbeat;
+
+ unsigned long start_detect_jiffies;
+};
+
+#define HIFC_CMD_VER_FUNC_ID 2
+#define HIFC_GLB_DMA_SO_RO_REPLACE_ADDR 0x488C
+#define HIFC_ICPL_RESERVD_ADDR 0x9204
+
+#define l2nic_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out, out_size)\
+ hifc_msg_to_mgmt_sync(hwdev, HIFC_MOD_L2NIC, cmd, \
+ buf_in, in_size, \
+ buf_out, out_size, 0)
+
+struct hifc_hwdev {
+ void *adapter_hdl; /* pointer to hifc_pcidev or NDIS_Adapter */
+ void *pcidev_hdl; /* pointer to pcidev or Handler */
+ void *dev_hdl; /* pointer to pcidev->dev or Handler, for
+ * sdk_err() or dma_alloc()
+ */
+ u32 wq_page_size;
+
+ void *cqm_hdl;
+ void *chip_node;
+
+ struct hifc_hwif *hwif; /* include void __iomem *bar */
+ struct cfg_mgmt_info *cfg_mgmt;
+ struct hifc_wqs *wqs; /* for FC slq */
+
+ struct hifc_aeqs *aeqs;
+ struct hifc_ceqs *ceqs;
+
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ struct hifc_clp_pf_to_mgmt *clp_pf_to_mgmt;
+
+ struct hifc_cmdqs *cmdqs;
+
+ struct hifc_page_addr page_pa0;
+ struct hifc_page_addr page_pa1;
+
+ hifc_event_handler event_callback;
+ void *event_pri_handle;
+ bool history_fault_flag;
+ struct hifc_fault_recover_info history_fault;
+ struct semaphore fault_list_sem;
+
+ struct work_struct timer_work;
+ struct workqueue_struct *workq;
+ struct timer_list heartbeat_timer;
+	/* true means heartbeat lost, false means heartbeat restored */
+ u32 heartbeat_lost;
+ int chip_present_flag;
+ struct hifc_heartbeat_enhanced heartbeat_ehd;
+ struct hifc_hw_stats hw_stats;
+ u8 *chip_fault_stats;
+
+ u32 statufull_ref_cnt;
+ ulong func_state;
+
+ u64 feature_cap; /* enum hifc_func_cap */
+
+	/* In bmgw x86 host, driver can't send messages to mgmt cpu directly;
+	 * it needs to transmit them via ppf mbox to the bmgw arm host.
+	 */
+
+ struct hifc_board_info board_info;
+};
+
+int hifc_init_comm_ch(struct hifc_hwdev *hwdev);
+void hifc_uninit_comm_ch(struct hifc_hwdev *hwdev);
+
+enum hifc_set_arm_type {
+ HIFC_SET_ARM_CMDQ,
+ HIFC_SET_ARM_SQ,
+ HIFC_SET_ARM_TYPE_NUM,
+};
+
+/* up to driver event */
+#define HIFC_PORT_CMD_MGMT_RESET 0x0
+struct hifc_vport_state {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state;
+ u8 rsvd2[3];
+};
+
+struct hifc_l2nic_reset {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 reset_flag;
+};
+
+/* HILINK module interface */
+
+/* cmd of mgmt CPU message for HILINK module */
+enum hifc_hilink_cmd {
+ HIFC_HILINK_CMD_GET_LINK_INFO = 0x3,
+ HIFC_HILINK_CMD_SET_LINK_SETTINGS = 0x8,
+};
+
+enum hilink_info_print_event {
+ HILINK_EVENT_LINK_UP = 1,
+ HILINK_EVENT_LINK_DOWN,
+ HILINK_EVENT_CABLE_PLUGGED,
+ HILINK_EVENT_MAX_TYPE,
+};
+
+enum hifc_link_port_type {
+ LINK_PORT_FIBRE = 1,
+ LINK_PORT_ELECTRIC,
+ LINK_PORT_COPPER,
+ LINK_PORT_AOC,
+ LINK_PORT_BACKPLANE,
+ LINK_PORT_BASET,
+ LINK_PORT_MAX_TYPE,
+};
+
+enum hilink_fibre_subtype {
+ FIBRE_SUBTYPE_SR = 1,
+ FIBRE_SUBTYPE_LR,
+ FIBRE_SUBTYPE_MAX,
+};
+
+enum hilink_fec_type {
+ HILINK_FEC_RSFEC,
+ HILINK_FEC_BASEFEC,
+ HILINK_FEC_NOFEC,
+ HILINK_FEC_MAX_TYPE,
+};
+
+/* cmd of mgmt CPU message */
+enum hifc_port_cmd {
+ HIFC_PORT_CMD_SET_MAC = 0x9,
+ HIFC_PORT_CMD_GET_AUTONEG_CAP = 0xf,
+ HIFC_PORT_CMD_SET_VPORT_ENABLE = 0x5d,
+ HIFC_PORT_CMD_UPDATE_MAC = 0xa4,
+ HIFC_PORT_CMD_GET_SFP_INFO = 0xad,
+ HIFC_PORT_CMD_GET_STD_SFP_INFO = 0xF0,
+ HIFC_PORT_CMD_GET_SFP_ABS = 0xFB,
+};
+
+struct hi30_ffe_data {
+ u8 PRE2;
+ u8 PRE1;
+ u8 POST1;
+ u8 POST2;
+ u8 MAIN;
+};
+
+struct hi30_ctle_data {
+ u8 ctlebst[3];
+ u8 ctlecmband[3];
+ u8 ctlermband[3];
+ u8 ctleza[3];
+ u8 ctlesqh[3];
+ u8 ctleactgn[3];
+ u8 ctlepassgn;
+};
+
+#define HILINK_MAX_LANE 4
+
+struct hilink_lane {
+ u8 lane_used;
+ u8 hi30_ffe[5];
+ u8 hi30_ctle[19];
+ u8 hi30_dfe[14];
+ u8 rsvd4;
+};
+
+struct hifc_link_info {
+ u8 vendor_name[16];
+ /* port type:
+ * 1 - fiber; 2 - electric; 3 - copper; 4 - AOC; 5 - backplane;
+ * 6 - baseT; 0xffff - unknown
+ *
+ * port subtype:
+ * Only when port_type is fiber:
+ * 1 - SR; 2 - LR
+ */
+ u32 port_type;
+ u32 port_sub_type;
+ u32 cable_length;
+ u8 cable_temp;
+ u8 cable_max_speed; /* 1(G)/10(G)/25(G)... */
+ u8 sfp_type; /* 0 - qsfp; 1 - sfp */
+ u8 rsvd0;
+ u32 power[4]; /* uW; if is sfp, only power[2] is valid */
+
+ u8 an_state; /* 0 - off; 1 - on */
+ u8 fec; /* 0 - RSFEC; 1 - BASEFEC; 2 - NOFEC */
+ u16 speed; /* 1(G)/10(G)/25(G)... */
+
+ u8 cable_absent; /* 0 - cable present; 1 - cable unpresent */
+ u8 alos; /* 0 - yes; 1 - no */
+ u8 rx_los; /* 0 - yes; 1 - no */
+ u8 pma_status;
+ u32 pma_dbg_info_reg; /* pma debug info: */
+ u32 pma_signal_ok_reg; /* signal ok: */
+
+ u32 pcs_err_blk_cnt_reg; /* error block counter: */
+ u32 rf_lf_status_reg; /* RF/LF status: */
+ u8 pcs_link_reg; /* pcs link: */
+ u8 mac_link_reg; /* mac link: */
+ u8 mac_tx_en;
+ u8 mac_rx_en;
+ u32 pcs_err_cnt;
+
+ /* struct hifc_hilink_lane: 40 bytes */
+ u8 lane1[40]; /* 25GE lane in old firmware */
+
+ u8 rsvd1[266]; /* hilink machine state */
+
+ u8 lane2[HILINK_MAX_LANE * 40]; /* max 4 lane for 40GE/100GE */
+
+ u8 rsvd2[2];
+};
+
+struct hifc_hilink_link_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 port_id;
+ u8 info_type; /* 1: link up 2: link down 3 cable plugged */
+ u8 rsvd1;
+
+ struct hifc_link_info info;
+
+ u8 rsvd2[352];
+};
+
+int hifc_set_arm_bit(void *hwdev, enum hifc_set_arm_type q_type, u16 q_id);
+void hifc_set_chip_present(void *hwdev);
+void hifc_force_complete_all(void *hwdev);
+void hifc_init_heartbeat(struct hifc_hwdev *hwdev);
+void hifc_destroy_heartbeat(struct hifc_hwdev *hwdev);
+u8 hifc_nic_sw_aeqe_handler(void *handle, u8 event, u64 data);
+int hifc_l2nic_reset_base(struct hifc_hwdev *hwdev, u16 reset_flag);
+int hifc_pf_msg_to_mgmt_sync(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout);
+void hifc_swe_fault_handler(struct hifc_hwdev *hwdev, u8 level,
+ u8 event, u64 val);
+bool hifc_mgmt_event_ack_first(u8 mod, u8 cmd);
+int hifc_phy_init_status_judge(void *hwdev);
+int hifc_api_csr_rd32(void *hwdev, u8 dest, u32 addr, u32 *val);
+int hifc_api_csr_wr32(void *hwdev, u8 dest, u32 addr, u32 val);
+void mgmt_heartbeat_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+struct hifc_sge {
+ u32 hi_addr;
+ u32 lo_addr;
+ u32 len;
+};
+
+void hifc_cpu_to_be32(void *data, int len);
+void hifc_be32_to_cpu(void *data, int len);
+void hifc_set_sge(struct hifc_sge *sge, dma_addr_t addr, u32 len);
+#endif
diff --git a/drivers/scsi/huawei/hifc/hifc_hwif.c b/drivers/scsi/huawei/hifc/hifc_hwif.c
new file mode 100644
index 000000000000..ec84c9bc2f2f
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_hwif.c
@@ -0,0 +1,630 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/io-mapping.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwdev.h"
+#include "hifc_hwif.h"
+#include "hifc_api_cmd.h"
+#include "hifc_mgmt.h"
+#include "hifc_eqs.h"
+
+#define WAIT_HWIF_READY_TIMEOUT 10000
+#define HIFC_SELFTEST_RESULT 0x883C
+
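+/* config register accesses go through an extra be32 byte swap on top of readl/writel */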
+u32 hifc_hwif_read_reg(struct hifc_hwif *hwif, u32 reg)
+{
+ return be32_to_cpu(readl(hwif->cfg_regs_base + reg));
+}
+
+void hifc_hwif_write_reg(struct hifc_hwif *hwif, u32 reg, u32 val)
+{
+ writel(cpu_to_be32(val), hwif->cfg_regs_base + reg);
+}
+
+/**
+ * hwif_ready - test if the HW initialization passed
+ * @hwdev: the pointer to hw device
+ * Return: 0 - success, negative - failure
+ **/
+static int hwif_ready(struct hifc_hwdev *hwdev)
+{
+ u32 addr, attr1;
+
+ addr = HIFC_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hifc_hwif_read_reg(hwdev->hwif, addr);
+
+ if (attr1 == HIFC_PCIE_LINK_DOWN)
+ return -EBUSY;
+
+ if (!HIFC_AF1_GET(attr1, MGMT_INIT_STATUS))
+ return -EBUSY;
+
+ return 0;
+}
+
+static int wait_hwif_ready(struct hifc_hwdev *hwdev)
+{
+ ulong timeout = 0;
+
+ do {
+ if (!hwif_ready(hwdev))
+ return 0;
+
+ usleep_range(999, 1000);
+ timeout++;
+ } while (timeout <= WAIT_HWIF_READY_TIMEOUT);
+
+ sdk_err(hwdev->dev_hdl, "Wait for hwif timeout\n");
+ return -EBUSY;
+}
+
+/**
+ * set_hwif_attr - set the attributes as members in hwif
+ * @hwif: the hardware interface of a pci function device
+ * @attr0: the first attribute that was read from the hw
+ * @attr1: the second attribute that was read from the hw
+ * @attr2: the third attribute that was read from the hw
+ **/
+static void set_hwif_attr(struct hifc_hwif *hwif, u32 attr0, u32 attr1,
+ u32 attr2)
+{
+ hwif->attr.func_global_idx = HIFC_AF0_GET(attr0, FUNC_GLOBAL_IDX);
+ hwif->attr.port_to_port_idx = HIFC_AF0_GET(attr0, P2P_IDX);
+ hwif->attr.pci_intf_idx = HIFC_AF0_GET(attr0, PCI_INTF_IDX);
+ hwif->attr.vf_in_pf = HIFC_AF0_GET(attr0, VF_IN_PF);
+ hwif->attr.func_type = HIFC_AF0_GET(attr0, FUNC_TYPE);
+
+ hwif->attr.ppf_idx = HIFC_AF1_GET(attr1, PPF_IDX);
+
+ hwif->attr.num_aeqs = BIT(HIFC_AF1_GET(attr1, AEQS_PER_FUNC));
+ hwif->attr.num_ceqs = BIT(HIFC_AF1_GET(attr1, CEQS_PER_FUNC));
+ hwif->attr.num_irqs = BIT(HIFC_AF1_GET(attr1, IRQS_PER_FUNC));
+ hwif->attr.num_dma_attr = BIT(HIFC_AF1_GET(attr1, DMA_ATTR_PER_FUNC));
+}
+
+/**
+ * get_hwif_attr - read and set the attributes as members in hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void get_hwif_attr(struct hifc_hwif *hwif)
+{
+ u32 addr, attr0, attr1, attr2;
+
+ addr = HIFC_CSR_FUNC_ATTR0_ADDR;
+ attr0 = hifc_hwif_read_reg(hwif, addr);
+
+ addr = HIFC_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hifc_hwif_read_reg(hwif, addr);
+
+ addr = HIFC_CSR_FUNC_ATTR2_ADDR;
+ attr2 = hifc_hwif_read_reg(hwif, addr);
+
+ set_hwif_attr(hwif, attr0, attr1, attr2);
+}
+
+void hifc_set_pf_status(struct hifc_hwif *hwif, enum hifc_pf_status status)
+{
+ u32 attr5 = HIFC_AF5_SET(status, PF_STATUS);
+ u32 addr = HIFC_CSR_FUNC_ATTR5_ADDR;
+
+ hifc_hwif_write_reg(hwif, addr, attr5);
+}
+
+enum hifc_pf_status hifc_get_pf_status(struct hifc_hwif *hwif)
+{
+ u32 attr5 = hifc_hwif_read_reg(hwif, HIFC_CSR_FUNC_ATTR5_ADDR);
+
+ return HIFC_AF5_GET(attr5, PF_STATUS);
+}
+
+enum hifc_doorbell_ctrl hifc_get_doorbell_ctrl_status(struct hifc_hwif *hwif)
+{
+ u32 attr4 = hifc_hwif_read_reg(hwif, HIFC_CSR_FUNC_ATTR4_ADDR);
+
+ return HIFC_AF4_GET(attr4, DOORBELL_CTRL);
+}
+
+enum hifc_outbound_ctrl hifc_get_outbound_ctrl_status(struct hifc_hwif *hwif)
+{
+ u32 attr4 = hifc_hwif_read_reg(hwif, HIFC_CSR_FUNC_ATTR4_ADDR);
+
+ return HIFC_AF4_GET(attr4, OUTBOUND_CTRL);
+}
+
+void hifc_enable_doorbell(struct hifc_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HIFC_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hifc_hwif_read_reg(hwif, addr);
+
+ attr4 = HIFC_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HIFC_AF4_SET(ENABLE_DOORBELL, DOORBELL_CTRL);
+
+ hifc_hwif_write_reg(hwif, addr, attr4);
+}
+
+void hifc_disable_doorbell(struct hifc_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HIFC_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hifc_hwif_read_reg(hwif, addr);
+
+ attr4 = HIFC_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HIFC_AF4_SET(DISABLE_DOORBELL, DOORBELL_CTRL);
+
+ hifc_hwif_write_reg(hwif, addr, attr4);
+}
+
+/**
+ * set_ppf - try to set hwif as ppf and set the type of hwif in this case
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void set_ppf(struct hifc_hwif *hwif)
+{
+ struct hifc_func_attr *attr = &hwif->attr;
+ u32 addr, val, ppf_election;
+
+ /* Read Modify Write */
+ addr = HIFC_CSR_PPF_ELECTION_ADDR;
+
+ val = hifc_hwif_read_reg(hwif, addr);
+ val = HIFC_PPF_ELECTION_CLEAR(val, IDX);
+
+ ppf_election = HIFC_PPF_ELECTION_SET(attr->func_global_idx, IDX);
+ val |= ppf_election;
+
+ hifc_hwif_write_reg(hwif, addr, val);
+
+ /* Check PPF */
+ val = hifc_hwif_read_reg(hwif, addr);
+
+ attr->ppf_idx = HIFC_PPF_ELECTION_GET(val, IDX);
+ if (attr->ppf_idx == attr->func_global_idx)
+ attr->func_type = TYPE_PPF;
+}
+
+/**
+ * get_mpf - get the mpf index into the hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void get_mpf(struct hifc_hwif *hwif)
+{
+ struct hifc_func_attr *attr = &hwif->attr;
+ u32 mpf_election, addr;
+
+ addr = HIFC_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ mpf_election = hifc_hwif_read_reg(hwif, addr);
+ attr->mpf_idx = HIFC_MPF_ELECTION_GET(mpf_election, IDX);
+}
+
+/**
+ * set_mpf - try to set hwif as mpf and set the mpf idx in hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void set_mpf(struct hifc_hwif *hwif)
+{
+ struct hifc_func_attr *attr = &hwif->attr;
+ u32 addr, val, mpf_election;
+
+ /* Read Modify Write */
+ addr = HIFC_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ val = hifc_hwif_read_reg(hwif, addr);
+
+ val = HIFC_MPF_ELECTION_CLEAR(val, IDX);
+ mpf_election = HIFC_MPF_ELECTION_SET(attr->func_global_idx, IDX);
+
+ val |= mpf_election;
+ hifc_hwif_write_reg(hwif, addr, val);
+}
+
+static void init_db_area_idx(struct hifc_free_db_area *free_db_area)
+{
+ u32 i;
+
+ for (i = 0; i < HIFC_DB_MAX_AREAS; i++)
+ free_db_area->db_idx[i] = i;
+
+ free_db_area->num_free = HIFC_DB_MAX_AREAS;
+
+ spin_lock_init(&free_db_area->idx_lock);
+}
+
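+/* allocate a doorbell page index from the circular free-index array; retries on invalid entries */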
+static int get_db_idx(struct hifc_hwif *hwif, u32 *idx)
+{
+ struct hifc_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 pos;
+ u32 pg_idx;
+
+ spin_lock(&free_db_area->idx_lock);
+
+retry:
+ if (free_db_area->num_free == 0) {
+ spin_unlock(&free_db_area->idx_lock);
+ return -ENOMEM;
+ }
+
+ free_db_area->num_free--;
+
+ pos = free_db_area->alloc_pos++;
+ pos &= HIFC_DB_MAX_AREAS - 1;
+
+ pg_idx = free_db_area->db_idx[pos];
+
+ free_db_area->db_idx[pos] = 0xFFFFFFFF;
+
+ /* pg_idx out of range */
+ if (pg_idx >= HIFC_DB_MAX_AREAS)
+ goto retry;
+
+ spin_unlock(&free_db_area->idx_lock);
+
+ *idx = pg_idx;
+
+ return 0;
+}
+
+static void free_db_idx(struct hifc_hwif *hwif, u32 idx)
+{
+ struct hifc_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 pos;
+
+ if (idx >= HIFC_DB_MAX_AREAS)
+ return;
+
+ spin_lock(&free_db_area->idx_lock);
+
+ pos = free_db_area->return_pos++;
+ pos &= HIFC_DB_MAX_AREAS - 1;
+
+ free_db_area->db_idx[pos] = idx;
+
+ free_db_area->num_free++;
+
+ spin_unlock(&free_db_area->idx_lock);
+}
+
+void hifc_free_db_addr(void *hwdev, void __iomem *db_base,
+ void __iomem *dwqe_base)
+{
+ struct hifc_hwif *hwif;
+ u32 idx;
+
+ if (!hwdev || !db_base)
+ return;
+
+ hwif = ((struct hifc_hwdev *)hwdev)->hwif;
+ idx = DB_IDX(db_base, hwif->db_base);
+
+#if defined(__aarch64__)
+ /* No need to unmap */
+#else
+ if (dwqe_base)
+ io_mapping_unmap(dwqe_base);
+#endif
+
+ free_db_idx(hwif, idx);
+}
+
+int hifc_alloc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base)
+{
+ struct hifc_hwif *hwif;
+ u64 offset;
+ u32 idx;
+ int err;
+
+ if (!hwdev || !db_base)
+ return -EINVAL;
+
+ hwif = ((struct hifc_hwdev *)hwdev)->hwif;
+
+ err = get_db_idx(hwif, &idx);
+ if (err)
+ return -EFAULT;
+
+ *db_base = hwif->db_base + idx * HIFC_DB_PAGE_SIZE;
+
+ if (!dwqe_base)
+ return 0;
+
+ offset = ((u64)idx) << PAGE_SHIFT;
+
+#if defined(__aarch64__)
+ *dwqe_base = hwif->dwqe_mapping + offset;
+#else
+ *dwqe_base = io_mapping_map_wc(hwif->dwqe_mapping, offset,
+ HIFC_DB_PAGE_SIZE);
+#endif
+
+ if (!(*dwqe_base)) {
+ hifc_free_db_addr(hwdev, *db_base, NULL);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+void hifc_set_msix_state(void *hwdev, u16 msix_idx, enum hifc_msix_state flag)
+{
+ struct hifc_hwif *hwif;
+ u32 offset = msix_idx * HIFC_PCI_MSIX_ENTRY_SIZE +
+ HIFC_PCI_MSIX_ENTRY_VECTOR_CTRL;
+ u32 mask_bits;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hifc_hwdev *)hwdev)->hwif;
+
+ mask_bits = readl(hwif->intr_regs_base + offset);
+ mask_bits &= ~HIFC_PCI_MSIX_ENTRY_CTRL_MASKBIT;
+ if (flag)
+ mask_bits |= HIFC_PCI_MSIX_ENTRY_CTRL_MASKBIT;
+
+ writel(mask_bits, hwif->intr_regs_base + offset);
+}
+
+static void disable_all_msix(struct hifc_hwdev *hwdev)
+{
+ u16 num_irqs = hwdev->hwif->attr.num_irqs;
+ u16 i;
+
+ for (i = 0; i < num_irqs; i++)
+ hifc_set_msix_state(hwdev, i, HIFC_MSIX_DISABLE);
+}
+
+static int wait_until_doorbell_and_outbound_enabled(struct hifc_hwif *hwif)
+{
+ enum hifc_doorbell_ctrl db_ctrl;
+ enum hifc_outbound_ctrl outbound_ctrl;
+ u32 cnt = 0;
+
+ while (cnt < HIFC_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT) {
+ db_ctrl = hifc_get_doorbell_ctrl_status(hwif);
+ outbound_ctrl = hifc_get_outbound_ctrl_status(hwif);
+
+ if (outbound_ctrl == ENABLE_OUTBOUND &&
+ db_ctrl == ENABLE_DOORBELL)
+ return 0;
+
+ usleep_range(900, 1000);
+ cnt++;
+ }
+
+ return -EFAULT;
+}
+
+static void __print_selftest_reg(struct hifc_hwdev *hwdev)
+{
+ u32 addr, attr0, attr1;
+
+ addr = HIFC_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hifc_hwif_read_reg(hwdev->hwif, addr);
+
+ if (attr1 == HIFC_PCIE_LINK_DOWN) {
+ sdk_err(hwdev->dev_hdl, "PCIE is link down\n");
+ return;
+ }
+
+ addr = HIFC_CSR_FUNC_ATTR0_ADDR;
+ attr0 = hifc_hwif_read_reg(hwdev->hwif, addr);
+ if (HIFC_AF0_GET(attr0, FUNC_TYPE) != TYPE_VF &&
+ !HIFC_AF0_GET(attr0, PCI_INTF_IDX))
+ sdk_err(hwdev->dev_hdl, "Selftest reg: 0x%08x\n",
+ hifc_hwif_read_reg(hwdev->hwif,
+ HIFC_SELFTEST_RESULT));
+}
+
+/**
+ * hifc_init_hwif - initialize the hw interface
+ * @hwdev: the pointer to hw device
+ * @cfg_reg_base: configuration base address
+ * Return: 0 - success, negative - failure
+ **/
+int hifc_init_hwif(struct hifc_hwdev *hwdev, void *cfg_reg_base,
+ void *intr_reg_base, u64 db_base_phy,
+ void *db_base, void *dwqe_mapping)
+{
+ struct hifc_hwif *hwif;
+ int err;
+
+ hwif = kzalloc(sizeof(*hwif), GFP_KERNEL);
+ if (!hwif)
+ return -ENOMEM;
+
+ hwdev->hwif = hwif;
+ hwif->pdev = hwdev->pcidev_hdl;
+
+ hwif->cfg_regs_base = cfg_reg_base;
+ hwif->intr_regs_base = intr_reg_base;
+
+ hwif->db_base_phy = db_base_phy;
+ hwif->db_base = db_base;
+ hwif->dwqe_mapping = dwqe_mapping;
+ init_db_area_idx(&hwif->free_db_area);
+
+ err = wait_hwif_ready(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Chip status is not ready\n");
+ __print_selftest_reg(hwdev);
+ goto hwif_ready_err;
+ }
+
+ get_hwif_attr(hwif);
+
+ err = wait_until_doorbell_and_outbound_enabled(hwif);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Hw doorbell/outbound is disabled\n");
+ goto hwif_ready_err;
+ }
+
+ set_ppf(hwif);
+
+ if (HIFC_IS_PPF(hwdev))
+ set_mpf(hwif);
+
+ get_mpf(hwif);
+
+ disable_all_msix(hwdev);
+ /* disable mgmt cpu report any event */
+ hifc_set_pf_status(hwdev->hwif, HIFC_PF_STATUS_INIT);
+
+ pr_info("global_func_idx: %d, func_type: %d, host_id: %d, ppf: %d, mpf: %d\n",
+ hwif->attr.func_global_idx, hwif->attr.func_type,
+ hwif->attr.pci_intf_idx, hwif->attr.ppf_idx,
+ hwif->attr.mpf_idx);
+
+ return 0;
+
+hwif_ready_err:
+ kfree(hwif);
+
+ return err;
+}
+
+/**
+ * hifc_free_hwif - free the hw interface
+ * @hwdev: the pointer to hw device
+ **/
+void hifc_free_hwif(struct hifc_hwdev *hwdev)
+{
+ kfree(hwdev->hwif);
+}
+
+int hifc_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
+ unsigned flag,
+ struct hifc_dma_addr_align *mem_align)
+{
+ void *vaddr, *align_vaddr;
+ dma_addr_t paddr, align_paddr;
+ u64 real_size = size;
+
+ vaddr = dma_zalloc_coherent(dev_hdl, real_size, &paddr, flag);
+ if (!vaddr)
+ return -ENOMEM;
+
+ align_paddr = ALIGN(paddr, align);
+ /* align */
+ if (align_paddr == paddr) {
+ align_vaddr = vaddr;
+ goto out;
+ }
+
+ dma_free_coherent(dev_hdl, real_size, vaddr, paddr);
+
+ /* realloc memory for align */
+ real_size = size + align;
+ vaddr = dma_zalloc_coherent(dev_hdl, real_size, &paddr, flag);
+ if (!vaddr)
+ return -ENOMEM;
+
+ align_paddr = ALIGN(paddr, align);
+ align_vaddr = (void *)((u64)vaddr + (align_paddr - paddr));
+
+out:
+ mem_align->real_size = (u32)real_size;
+ mem_align->ori_vaddr = vaddr;
+ mem_align->ori_paddr = paddr;
+ mem_align->align_vaddr = align_vaddr;
+ mem_align->align_paddr = align_paddr;
+
+ return 0;
+}
+
+void hifc_dma_free_coherent_align(void *dev_hdl,
+ struct hifc_dma_addr_align *mem_align)
+{
+ dma_free_coherent(dev_hdl, mem_align->real_size,
+ mem_align->ori_vaddr, mem_align->ori_paddr);
+}
+
+u16 hifc_global_func_id(void *hwdev)
+{
+ struct hifc_hwif *hwif;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hifc_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_global_idx;
+}
+
+/**
+ * get function id from register, used by the sriov hot migration process
+ * @hwdev: the pointer to hw device
+ **/
+u16 hifc_global_func_id_hw(void *hwdev)
+{
+ u32 addr, attr0;
+ struct hifc_hwdev *dev;
+
+ dev = (struct hifc_hwdev *)hwdev;
+ addr = HIFC_CSR_FUNC_ATTR0_ADDR;
+ attr0 = hifc_hwif_read_reg(dev->hwif, addr);
+
+ return HIFC_AF0_GET(attr0, FUNC_GLOBAL_IDX);
+}
+
+/**
+ * get function id, used by the sriov hot migration process.
+ * @hwdev: the pointer to hw device
+ * @func_id: function id
+ **/
+int hifc_global_func_id_get(void *hwdev, u16 *func_id)
+{
+ *func_id = hifc_global_func_id(hwdev);
+ return 0;
+}
+
+u8 hifc_pcie_itf_id(void *hwdev)
+{
+ struct hifc_hwif *hwif;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hifc_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.pci_intf_idx;
+}
+EXPORT_SYMBOL(hifc_pcie_itf_id);
+
+enum func_type hifc_func_type(void *hwdev)
+{
+ struct hifc_hwif *hwif;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hifc_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_type;
+}
+
+u8 hifc_ppf_idx(void *hwdev)
+{
+ struct hifc_hwif *hwif;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hifc_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.ppf_idx;
+}
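For reference, a minimal usage sketch of the aligned DMA helper above, assuming a hypothetical caller that needs a 4 KB-aligned 256-byte context buffer; the caller name and sizes are illustrative only and not part of this patch:

static int example_alloc_aligned_ctxt(struct hifc_hwdev *hwdev)
{
	struct hifc_dma_addr_align ctxt_buf;
	int err;

	/* the helper first tries a plain allocation; only if the returned
	 * address is not already aligned does it re-allocate size + align
	 * bytes and round the address up
	 */
	err = hifc_dma_zalloc_coherent_align(hwdev->dev_hdl, 256, 0x1000,
					     GFP_KERNEL, &ctxt_buf);
	if (err)
		return err;

	/* hardware uses ctxt_buf.align_vaddr / ctxt_buf.align_paddr;
	 * ori_vaddr / ori_paddr are kept so the buffer can be freed
	 */
	hifc_dma_free_coherent_align(hwdev->dev_hdl, &ctxt_buf);

	return 0;
}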
diff --git a/drivers/scsi/huawei/hifc/hifc_hwif.h b/drivers/scsi/huawei/hifc/hifc_hwif.h
new file mode 100644
index 000000000000..da72253dcf5f
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_hwif.h
@@ -0,0 +1,243 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef HIFC_HWIF_H
+#define HIFC_HWIF_H
+
+#include "hifc_hwdev.h"
+
+#define HIFC_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT 60000
+#define HIFC_CSR_GLOBAL_BASE_ADDR 0x4000
+/* HW interface registers */
+#define HIFC_CSR_FUNC_ATTR0_ADDR 0x0
+#define HIFC_CSR_FUNC_ATTR1_ADDR 0x4
+#define HIFC_CSR_FUNC_ATTR2_ADDR 0x8
+#define HIFC_CSR_FUNC_ATTR4_ADDR 0x10
+
+#define HIFC_CSR_FUNC_ATTR5_ADDR 0x14
+#define HIFC_PCI_MSIX_ENTRY_SIZE 16
+#define HIFC_PCI_MSIX_ENTRY_VECTOR_CTRL 12
+#define HIFC_PCI_MSIX_ENTRY_CTRL_MASKBIT 1
+
+/* total doorbell or direct wqe size is 512kB, db num: 128, dwqe: 128*/
+#define HIFC_DB_DWQE_SIZE 0x00080000
+/* db/dwqe page size: 4K */
+#define HIFC_DB_PAGE_SIZE 0x00001000ULL
+#define HIFC_DB_MAX_AREAS (HIFC_DB_DWQE_SIZE / HIFC_DB_PAGE_SIZE)
+
+#define HIFC_ELECTION_BASE 0x200
+#define HIFC_PPF_ELECTION_STRIDE 0x4
+#define HIFC_CSR_MAX_PORTS 4
+#define HIFC_CSR_PPF_ELECTION_ADDR \
+ (HIFC_CSR_GLOBAL_BASE_ADDR + HIFC_ELECTION_BASE)
+
+#define HIFC_CSR_GLOBAL_MPF_ELECTION_ADDR \
+ (HIFC_CSR_GLOBAL_BASE_ADDR + HIFC_ELECTION_BASE + \
+ HIFC_CSR_MAX_PORTS * HIFC_PPF_ELECTION_STRIDE)
+#define DB_IDX(db, db_base) \
+ ((u32)(((ulong)(db) - (ulong)(db_base)) / \
+ HIFC_DB_PAGE_SIZE))
+
+#define HIFC_AF0_FUNC_GLOBAL_IDX_SHIFT 0
+#define HIFC_AF0_P2P_IDX_SHIFT 10
+#define HIFC_AF0_PCI_INTF_IDX_SHIFT 14
+#define HIFC_AF0_VF_IN_PF_SHIFT 16
+#define HIFC_AF0_FUNC_TYPE_SHIFT 24
+#define HIFC_AF0_FUNC_GLOBAL_IDX_MASK 0x3FF
+#define HIFC_AF0_P2P_IDX_MASK 0xF
+#define HIFC_AF0_PCI_INTF_IDX_MASK 0x3
+#define HIFC_AF0_VF_IN_PF_MASK 0xFF
+#define HIFC_AF0_FUNC_TYPE_MASK 0x1
+
+#define HIFC_AF0_GET(val, member) \
+ (((val) >> HIFC_AF0_##member##_SHIFT) & HIFC_AF0_##member##_MASK)
+
+#define HIFC_AF1_PPF_IDX_SHIFT 0
+#define HIFC_AF1_AEQS_PER_FUNC_SHIFT 8
+#define HIFC_AF1_CEQS_PER_FUNC_SHIFT 12
+#define HIFC_AF1_IRQS_PER_FUNC_SHIFT 20
+#define HIFC_AF1_DMA_ATTR_PER_FUNC_SHIFT 24
+#define HIFC_AF1_MGMT_INIT_STATUS_SHIFT 30
+#define HIFC_AF1_PF_INIT_STATUS_SHIFT 31
+
+#define HIFC_AF1_PPF_IDX_MASK 0x1F
+#define HIFC_AF1_AEQS_PER_FUNC_MASK 0x3
+#define HIFC_AF1_CEQS_PER_FUNC_MASK 0x7
+#define HIFC_AF1_IRQS_PER_FUNC_MASK 0xF
+#define HIFC_AF1_DMA_ATTR_PER_FUNC_MASK 0x7
+#define HIFC_AF1_MGMT_INIT_STATUS_MASK 0x1
+#define HIFC_AF1_PF_INIT_STATUS_MASK 0x1
+
+#define HIFC_AF1_GET(val, member) \
+ (((val) >> HIFC_AF1_##member##_SHIFT) & HIFC_AF1_##member##_MASK)
+
+#define HIFC_AF4_OUTBOUND_CTRL_SHIFT 0
+#define HIFC_AF4_DOORBELL_CTRL_SHIFT 1
+#define HIFC_AF4_OUTBOUND_CTRL_MASK 0x1
+#define HIFC_AF4_DOORBELL_CTRL_MASK 0x1
+
+#define HIFC_AF4_GET(val, member) \
+ (((val) >> HIFC_AF4_##member##_SHIFT) & HIFC_AF4_##member##_MASK)
+
+#define HIFC_AF4_SET(val, member) \
+ (((val) & HIFC_AF4_##member##_MASK) << HIFC_AF4_##member##_SHIFT)
+
+#define HIFC_AF4_CLEAR(val, member) \
+ ((val) & (~(HIFC_AF4_##member##_MASK << \
+ HIFC_AF4_##member##_SHIFT)))
+
+#define HIFC_AF5_PF_STATUS_SHIFT 0
+#define HIFC_AF5_PF_STATUS_MASK 0xFFFF
+
+#define HIFC_AF5_SET(val, member) \
+ (((val) & HIFC_AF5_##member##_MASK) << HIFC_AF5_##member##_SHIFT)
+
+#define HIFC_AF5_GET(val, member) \
+ (((val) >> HIFC_AF5_##member##_SHIFT) & HIFC_AF5_##member##_MASK)
+
+#define HIFC_PPF_ELECTION_IDX_SHIFT 0
+#define HIFC_PPF_ELECTION_IDX_MASK 0x1F
+
+#define HIFC_PPF_ELECTION_SET(val, member) \
+ (((val) & HIFC_PPF_ELECTION_##member##_MASK) << \
+ HIFC_PPF_ELECTION_##member##_SHIFT)
+
+#define HIFC_PPF_ELECTION_GET(val, member) \
+ (((val) >> HIFC_PPF_ELECTION_##member##_SHIFT) & \
+ HIFC_PPF_ELECTION_##member##_MASK)
+
+#define HIFC_PPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HIFC_PPF_ELECTION_##member##_MASK \
+ << HIFC_PPF_ELECTION_##member##_SHIFT)))
+
+#define HIFC_MPF_ELECTION_IDX_SHIFT 0
+#define HIFC_MPF_ELECTION_IDX_MASK 0x1F
+
+#define HIFC_MPF_ELECTION_SET(val, member) \
+ (((val) & HIFC_MPF_ELECTION_##member##_MASK) << \
+ HIFC_MPF_ELECTION_##member##_SHIFT)
+
+#define HIFC_MPF_ELECTION_GET(val, member) \
+ (((val) >> HIFC_MPF_ELECTION_##member##_SHIFT) & \
+ HIFC_MPF_ELECTION_##member##_MASK)
+
+#define HIFC_MPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HIFC_MPF_ELECTION_##member##_MASK \
+ << HIFC_MPF_ELECTION_##member##_SHIFT)))
+
+#define HIFC_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
+#define HIFC_HWIF_NUM_CEQS(hwif) ((hwif)->attr.num_ceqs)
+#define HIFC_HWIF_PPF_IDX(hwif) ((hwif)->attr.ppf_idx)
+#define HIFC_PCI_INTF_IDX(hwif) ((hwif)->attr.pci_intf_idx)
+
+#define HIFC_FUNC_TYPE(dev) ((dev)->hwif->attr.func_type)
+#define HIFC_IS_PPF(dev) (HIFC_FUNC_TYPE(dev) == TYPE_PPF)
+
+enum hifc_pcie_nosnoop {
+ HIFC_PCIE_SNOOP = 0,
+ HIFC_PCIE_NO_SNOOP = 1,
+};
+
+enum hifc_pcie_tph {
+ HIFC_PCIE_TPH_DISABLE = 0,
+ HIFC_PCIE_TPH_ENABLE = 1,
+};
+
+enum hifc_pf_status {
+ HIFC_PF_STATUS_INIT = 0X0,
+ HIFC_PF_STATUS_ACTIVE_FLAG = 0x11,
+ HIFC_PF_STATUS_FLR_START_FLAG = 0x12,
+ HIFC_PF_STATUS_FLR_FINISH_FLAG = 0x13,
+};
+
+enum hifc_outbound_ctrl {
+ ENABLE_OUTBOUND = 0x0,
+ DISABLE_OUTBOUND = 0x1,
+};
+
+enum hifc_doorbell_ctrl {
+ ENABLE_DOORBELL = 0x0,
+ DISABLE_DOORBELL = 0x1,
+};
+
+struct hifc_free_db_area {
+ u32 db_idx[HIFC_DB_MAX_AREAS];
+ u32 num_free;
+ u32 alloc_pos;
+ u32 return_pos;
+ /* spinlock for allocating doorbell area */
+ spinlock_t idx_lock;
+};
+
+enum func_type {
+ TYPE_PF,
+ TYPE_VF,
+ TYPE_PPF,
+ TYPE_UNKNOWN,
+};
+
+struct hifc_func_attr {
+ u16 func_global_idx;
+ u8 port_to_port_idx;
+ u8 pci_intf_idx;
+ u8 vf_in_pf;
+ enum func_type func_type;
+
+ u8 mpf_idx;
+
+ u8 ppf_idx;
+
+ u16 num_irqs; /* max: 2 ^ 15 */
+ u8 num_aeqs; /* max: 2 ^ 3 */
+ u8 num_ceqs; /* max: 2 ^ 7 */
+
+ u8 num_dma_attr; /* max: 2 ^ 6 */
+};
+
+struct hifc_hwif {
+ u8 __iomem *cfg_regs_base;
+ u8 __iomem *intr_regs_base;
+ u64 db_base_phy;
+ u8 __iomem *db_base;
+
+#if defined(__aarch64__)
+ void __iomem *dwqe_mapping;
+#else
+ struct io_mapping *dwqe_mapping;
+#endif
+ struct hifc_free_db_area free_db_area;
+ struct hifc_func_attr attr;
+ void *pdev;
+};
+
+struct hifc_dma_addr_align {
+ u32 real_size;
+ void *ori_vaddr;
+ dma_addr_t ori_paddr;
+ void *align_vaddr;
+ dma_addr_t align_paddr;
+};
+
+u32 hifc_hwif_read_reg(struct hifc_hwif *hwif, u32 reg);
+void hifc_hwif_write_reg(struct hifc_hwif *hwif, u32 reg, u32 val);
+void hifc_set_pf_status(struct hifc_hwif *hwif, enum hifc_pf_status status);
+enum hifc_pf_status hifc_get_pf_status(struct hifc_hwif *hwif);
+enum hifc_doorbell_ctrl
+ hifc_get_doorbell_ctrl_status(struct hifc_hwif *hwif);
+enum hifc_outbound_ctrl
+ hifc_get_outbound_ctrl_status(struct hifc_hwif *hwif);
+void hifc_enable_doorbell(struct hifc_hwif *hwif);
+void hifc_disable_doorbell(struct hifc_hwif *hwif);
+int hifc_init_hwif(struct hifc_hwdev *hwdev, void *cfg_reg_base,
+ void *intr_reg_base, u64 db_base_phy,
+ void *db_base, void *dwqe_mapping);
+void hifc_free_hwif(struct hifc_hwdev *hwdev);
+int hifc_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
+ unsigned flag,
+ struct hifc_dma_addr_align *mem_align);
+void hifc_dma_free_coherent_align(void *dev_hdl,
+ struct hifc_dma_addr_align *mem_align);
+#endif
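As a reading aid for the attribute-register accessors above, a small sketch of decoding ATTR0 with the HIFC_AF0_GET() macros; the sample register value is arbitrary and not read from real hardware:

static void example_decode_attr0(void)
{
	u32 attr0 = 0x01000223;	/* made-up example value */

	/* each accessor shifts the named field into place and masks it */
	pr_info("func idx %u, pci itf %u, func type %u\n",
		HIFC_AF0_GET(attr0, FUNC_GLOBAL_IDX),
		HIFC_AF0_GET(attr0, PCI_INTF_IDX),
		HIFC_AF0_GET(attr0, FUNC_TYPE));
}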
diff --git a/drivers/scsi/huawei/hifc/hifc_mgmt.c b/drivers/scsi/huawei/hifc/hifc_mgmt.c
new file mode 100644
index 000000000000..3f4818898e8d
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_mgmt.c
@@ -0,0 +1,1426 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/completion.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/semaphore.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwdev.h"
+#include "hifc_hwif.h"
+#include "hifc_api_cmd.h"
+#include "hifc_mgmt.h"
+#include "hifc_eqs.h"
+
+#define BUF_OUT_DEFAULT_SIZE 1
+#define SEGMENT_LEN 48
+#define MGMT_MSG_MAX_SEQ_ID (ALIGN(HIFC_MSG_TO_MGMT_MAX_LEN, \
+ SEGMENT_LEN) / SEGMENT_LEN)
+
+#define MAX_PF_MGMT_BUF_SIZE 2048UL
+#define MGMT_MSG_SIZE_MIN 20
+#define MGMT_MSG_SIZE_STEP 16
+#define MGMT_MSG_RSVD_FOR_DEV 8
+#define MGMT_MSG_TIMEOUT 5000 /* millisecond */
+#define SYNC_MSG_ID_MASK 0x1FF
+#define ASYNC_MSG_ID_MASK 0x1FF
+#define ASYNC_MSG_FLAG 0x200
+#define MSG_NO_RESP 0xFFFF
+#define MAX_MSG_SZ 2016
+
+#define MSG_SZ_IS_VALID(in_size) ((in_size) <= MAX_MSG_SZ)
+
+#define SYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->sync_msg_id)
+
+#define SYNC_MSG_ID_INC(pf_to_mgmt) (SYNC_MSG_ID(pf_to_mgmt) = \
+ (SYNC_MSG_ID(pf_to_mgmt) + 1) & SYNC_MSG_ID_MASK)
+
+#define ASYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->async_msg_id)
+
+#define ASYNC_MSG_ID_INC(pf_to_mgmt) (ASYNC_MSG_ID(pf_to_mgmt) = \
+ ((ASYNC_MSG_ID(pf_to_mgmt) + 1) & ASYNC_MSG_ID_MASK) \
+ | ASYNC_MSG_FLAG)
+
+static void pf_to_mgmt_send_event_set(struct hifc_msg_pf_to_mgmt *pf_to_mgmt,
+ int event_flag)
+{
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ pf_to_mgmt->event_flag = event_flag;
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+}
+
+/**
+ * hifc_register_mgmt_msg_cb - register sync msg handler for a module
+ * @hwdev: the pointer to hw device
+ * @mod: module in the chip that this handler will handle its sync messages
+ * @pri_handle: private data that will be passed to the callback
+ * @callback: the handler for a sync message that will handle messages
+ * Return: 0 - success, negative - failure
+ **/
+int hifc_register_mgmt_msg_cb(void *hwdev, enum hifc_mod_type mod,
+ void *pri_handle, hifc_mgmt_msg_cb callback)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+
+ if (mod >= HIFC_MOD_HW_MAX || !hwdev)
+ return -EFAULT;
+
+ pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return -EINVAL;
+
+ pf_to_mgmt->recv_mgmt_msg_cb[mod] = callback;
+ pf_to_mgmt->recv_mgmt_msg_data[mod] = pri_handle;
+
+ set_bit(HIFC_MGMT_MSG_CB_REG, &pf_to_mgmt->mgmt_msg_cb_state[mod]);
+
+ return 0;
+}
+
+/**
+ * hifc_unregister_mgmt_msg_cb - unregister sync msg handler for a module
+ * @hwdev: the pointer to hw device
+ * @mod: module in the chip that this handler will handle its sync messages
+ **/
+void hifc_unregister_mgmt_msg_cb(void *hwdev, enum hifc_mod_type mod)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+
+ if (!hwdev || mod >= HIFC_MOD_HW_MAX)
+ return;
+
+ pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return;
+
+ clear_bit(HIFC_MGMT_MSG_CB_REG, &pf_to_mgmt->mgmt_msg_cb_state[mod]);
+
+ while (test_bit(HIFC_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[mod]))
+ usleep_range(900, 1000);
+
+ pf_to_mgmt->recv_mgmt_msg_cb[mod] = NULL;
+ pf_to_mgmt->recv_mgmt_msg_data[mod] = NULL;
+}
+
+void hifc_comm_recv_mgmt_self_cmd_reg(void *hwdev, u8 cmd,
+ comm_up_self_msg_proc proc)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ u8 cmd_idx;
+
+ if (!hwdev || !proc)
+ return;
+
+ pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return;
+
+ cmd_idx = pf_to_mgmt->proc.cmd_num;
+ if (cmd_idx >= HIFC_COMM_SELF_CMD_MAX) {
+ sdk_err(pf_to_mgmt->hwdev->dev_hdl,
+ "Register recv up process failed(cmd=0x%x)\r\n", cmd);
+ return;
+ }
+
+ pf_to_mgmt->proc.info[cmd_idx].cmd = cmd;
+ pf_to_mgmt->proc.info[cmd_idx].proc = proc;
+
+ pf_to_mgmt->proc.cmd_num++;
+}
+
+void hifc_comm_recv_up_self_cmd_unreg(void *hwdev, u8 cmd)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ u8 cmd_idx;
+
+ if (!hwdev)
+ return;
+
+ pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return;
+
+ cmd_idx = pf_to_mgmt->proc.cmd_num;
+ if (cmd_idx >= HIFC_COMM_SELF_CMD_MAX) {
+ sdk_err(pf_to_mgmt->hwdev->dev_hdl,
+ "Unregister recv up process failed(cmd=0x%x)\r\n", cmd);
+ return;
+ }
+
+ for (cmd_idx = 0; cmd_idx < HIFC_COMM_SELF_CMD_MAX; cmd_idx++) {
+ if (cmd == pf_to_mgmt->proc.info[cmd_idx].cmd) {
+ pf_to_mgmt->proc.info[cmd_idx].cmd = 0;
+ pf_to_mgmt->proc.info[cmd_idx].proc = NULL;
+ pf_to_mgmt->proc.cmd_num--;
+ }
+ }
+}
+
+/**
+ * mgmt_msg_len - calculate the total message length
+ * @msg_data_len: the length of the message data
+ * Return: the total message length
+ **/
+static u16 mgmt_msg_len(u16 msg_data_len)
+{
+ /* u64 - the size of the header */
+ u16 msg_size;
+
+ msg_size = (u16)(MGMT_MSG_RSVD_FOR_DEV + sizeof(u64) + msg_data_len);
+
+ if (msg_size > MGMT_MSG_SIZE_MIN)
+ msg_size = MGMT_MSG_SIZE_MIN +
+ ALIGN((msg_size - MGMT_MSG_SIZE_MIN),
+ MGMT_MSG_SIZE_STEP);
+ else
+ msg_size = MGMT_MSG_SIZE_MIN;
+
+ return msg_size;
+}
+
+/**
+ * prepare_header - prepare the header of the message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @header: pointer of the header to prepare
+ * @msg_len: the length of the message
+ * @mod: module in the chip that will get the message
+ * @ack_type: message ack type
+ * @direction: the direction of the original message
+ * @cmd: command type
+ * @msg_id: message id
+ **/
+static void prepare_header(struct hifc_msg_pf_to_mgmt *pf_to_mgmt,
+ u64 *header, u16 msg_len, enum hifc_mod_type mod,
+ enum hifc_msg_ack_type ack_type,
+ enum hifc_msg_direction_type direction,
+ enum hifc_mgmt_cmd cmd, u32 msg_id)
+{
+ struct hifc_hwif *hwif = pf_to_mgmt->hwdev->hwif;
+
+ *header = HIFC_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HIFC_MSG_HEADER_SET(mod, MODULE) |
+ HIFC_MSG_HEADER_SET(msg_len, SEG_LEN) |
+ HIFC_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HIFC_MSG_HEADER_SET(0, ASYNC_MGMT_TO_PF) |
+ HIFC_MSG_HEADER_SET(0, SEQID) |
+ HIFC_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ HIFC_MSG_HEADER_SET(direction, DIRECTION) |
+ HIFC_MSG_HEADER_SET(cmd, CMD) |
+ HIFC_MSG_HEADER_SET(HIFC_PCI_INTF_IDX(hwif), PCI_INTF_IDX) |
+ HIFC_MSG_HEADER_SET(hwif->attr.port_to_port_idx, P2P_IDX) |
+ HIFC_MSG_HEADER_SET(msg_id, MSG_ID);
+}
+
+static void clp_prepare_header(struct hifc_hwdev *hwdev,
+ u64 *header, u16 msg_len, enum hifc_mod_type mod,
+ enum hifc_msg_ack_type ack_type,
+ enum hifc_msg_direction_type direction,
+ enum hifc_mgmt_cmd cmd, u32 msg_id)
+{
+ struct hifc_hwif *hwif = hwdev->hwif;
+
+ *header = HIFC_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HIFC_MSG_HEADER_SET(mod, MODULE) |
+ HIFC_MSG_HEADER_SET(msg_len, SEG_LEN) |
+ HIFC_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HIFC_MSG_HEADER_SET(0, ASYNC_MGMT_TO_PF) |
+ HIFC_MSG_HEADER_SET(0, SEQID) |
+ HIFC_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ HIFC_MSG_HEADER_SET(direction, DIRECTION) |
+ HIFC_MSG_HEADER_SET(cmd, CMD) |
+ HIFC_MSG_HEADER_SET(HIFC_PCI_INTF_IDX(hwif), PCI_INTF_IDX) |
+ HIFC_MSG_HEADER_SET(hwif->attr.port_to_port_idx, P2P_IDX) |
+ HIFC_MSG_HEADER_SET(msg_id, MSG_ID);
+}
+
+/**
+ * prepare_mgmt_cmd - prepare the mgmt command
+ * @mgmt_cmd: pointer to the command to prepare
+ * @header: pointer of the header to prepare
+ * @msg: the data of the message
+ * @msg_len: the length of the message
+ **/
+static void prepare_mgmt_cmd(u8 *mgmt_cmd, u64 *header, const void *msg,
+ int msg_len)
+{
+ memset(mgmt_cmd, 0, MGMT_MSG_RSVD_FOR_DEV);
+
+ mgmt_cmd += MGMT_MSG_RSVD_FOR_DEV;
+ memcpy(mgmt_cmd, header, sizeof(*header));
+
+ mgmt_cmd += sizeof(*header);
+ memcpy(mgmt_cmd, msg, msg_len);
+}
+
+/**
+ * send_msg_to_mgmt_async - send async message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module in the chip that will get the message
+ * @cmd: command of the message
+ * @msg: the data of the message
+ * @msg_len: the length of the message
+ * @direction: the direction of the original message
+ * @resp_msg_id: msg id to response for
+ * Return: 0 - success, negative - failure
+ **/
+static int send_msg_to_mgmt_async(struct hifc_msg_pf_to_mgmt *pf_to_mgmt,
+ enum hifc_mod_type mod, u8 cmd,
+ void *msg, u16 msg_len,
+ enum hifc_msg_direction_type direction,
+ u16 resp_msg_id)
+{
+ void *mgmt_cmd = pf_to_mgmt->async_msg_buf;
+ struct hifc_api_cmd_chain *chain;
+ u64 header;
+ u16 cmd_size = mgmt_msg_len(msg_len);
+
+ if (!hifc_get_chip_present_flag(pf_to_mgmt->hwdev))
+ return -EFAULT;
+
+ if (direction == HIFC_MSG_RESPONSE)
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, HIFC_MSG_ACK,
+ direction, cmd, resp_msg_id);
+ else
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, HIFC_MSG_ACK,
+ direction, cmd, ASYNC_MSG_ID(pf_to_mgmt));
+
+ prepare_mgmt_cmd((u8 *)mgmt_cmd, &header, msg, msg_len);
+
+ chain = pf_to_mgmt->cmd_chain[HIFC_API_CMD_WRITE_ASYNC_TO_MGMT_CPU];
+
+ return hifc_api_cmd_write(chain, HIFC_NODE_ID_MGMT_HOST, mgmt_cmd,
+ cmd_size);
+}
+
+int hifc_pf_to_mgmt_async(void *hwdev, enum hifc_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ void *dev = ((struct hifc_hwdev *)hwdev)->dev_hdl;
+ int err;
+
+ pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->pf_to_mgmt;
+
+ /* Lock the async_msg_buf */
+ spin_lock_bh(&pf_to_mgmt->async_msg_lock);
+ ASYNC_MSG_ID_INC(pf_to_mgmt);
+
+ err = send_msg_to_mgmt_async(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ HIFC_MSG_DIRECT_SEND, MSG_NO_RESP);
+ spin_unlock_bh(&pf_to_mgmt->async_msg_lock);
+
+ if (err) {
+ sdk_err(dev, "Failed to send async mgmt msg\n");
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * send_msg_to_mgmt_sync - send sync message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module in the chip that will get the message
+ * @cmd: command of the message
+ * @msg: the msg data
+ * @msg_len: the msg data length
+ * @direction: the direction of the original message
+ * @resp_msg_id: msg id to response for
+ * Return: 0 - success, negative - failure
+ **/
+static int send_msg_to_mgmt_sync(struct hifc_msg_pf_to_mgmt *pf_to_mgmt,
+ enum hifc_mod_type mod, u8 cmd,
+ void *msg, u16 msg_len,
+ enum hifc_msg_ack_type ack_type,
+ enum hifc_msg_direction_type direction,
+ u16 resp_msg_id)
+{
+ void *mgmt_cmd = pf_to_mgmt->sync_msg_buf;
+ struct hifc_api_cmd_chain *chain;
+ u64 header;
+ u16 cmd_size = mgmt_msg_len(msg_len);
+
+ if (!hifc_get_chip_present_flag(pf_to_mgmt->hwdev))
+ return -EFAULT;
+
+ if (direction == HIFC_MSG_RESPONSE)
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, ack_type,
+ direction, cmd, resp_msg_id);
+ else
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, ack_type,
+ direction, cmd, SYNC_MSG_ID_INC(pf_to_mgmt));
+
+ if (ack_type == HIFC_MSG_ACK)
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_START);
+
+ prepare_mgmt_cmd((u8 *)mgmt_cmd, &header, msg, msg_len);
+
+ chain = pf_to_mgmt->cmd_chain[HIFC_API_CMD_WRITE_TO_MGMT_CPU];
+
+ return hifc_api_cmd_write(chain, HIFC_NODE_ID_MGMT_HOST, mgmt_cmd,
+ cmd_size);
+}
+
+static inline void msg_to_mgmt_pre(enum hifc_mod_type mod, void *buf_in)
+{
+ struct hifc_msg_head *msg_head;
+
+ /* the number of aeqs is fixed to 3, so the response aeq id must be < 3 */
+ if (mod == HIFC_MOD_COMM || mod == HIFC_MOD_L2NIC) {
+ msg_head = buf_in;
+
+ if (msg_head->resp_aeq_num >= HIFC_MAX_AEQS)
+ msg_head->resp_aeq_num = 0;
+ }
+}
+
+int hifc_pf_to_mgmt_sync(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ void *dev = ((struct hifc_hwdev *)hwdev)->dev_hdl;
+ struct hifc_recv_msg *recv_msg;
+ struct completion *recv_done;
+ ulong timeo;
+ int err;
+ ulong ret;
+
+ msg_to_mgmt_pre(mod, buf_in);
+
+ pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->pf_to_mgmt;
+
+ /* Lock the sync_msg_buf */
+ down(&pf_to_mgmt->sync_msg_lock);
+ recv_msg = &pf_to_mgmt->recv_resp_msg_from_mgmt;
+ recv_done = &recv_msg->recv_done;
+
+ init_completion(recv_done);
+
+ err = send_msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ HIFC_MSG_ACK, HIFC_MSG_DIRECT_SEND,
+ MSG_NO_RESP);
+ if (err) {
+ sdk_err(dev, "Failed to send sync msg to mgmt, sync_msg_id: %d\n",
+ pf_to_mgmt->sync_msg_id);
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_FAIL);
+ goto unlock_sync_msg;
+ }
+
+ timeo = msecs_to_jiffies(timeout ? timeout : MGMT_MSG_TIMEOUT);
+
+ ret = wait_for_completion_timeout(recv_done, timeo);
+ if (!ret) {
+ sdk_err(dev, "Mgmt response sync cmd timeout, sync_msg_id: %d\n",
+ pf_to_mgmt->sync_msg_id);
+ hifc_dump_aeq_info((struct hifc_hwdev *)hwdev);
+ err = -ETIMEDOUT;
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_TIMEOUT);
+ goto unlock_sync_msg;
+ }
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_END);
+
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag)) {
+ up(&pf_to_mgmt->sync_msg_lock);
+ return -ETIMEDOUT;
+ }
+
+ if (buf_out && out_size) {
+ if (*out_size < recv_msg->msg_len) {
+ sdk_err(dev, "Invalid response message length: %d for mod %d cmd %d from mgmt, should be less than: %d\n",
+ recv_msg->msg_len, mod, cmd, *out_size);
+ err = -EFAULT;
+ goto unlock_sync_msg;
+ }
+
+ if (recv_msg->msg_len)
+ memcpy(buf_out, recv_msg->msg, recv_msg->msg_len);
+
+ *out_size = recv_msg->msg_len;
+ }
+
+unlock_sync_msg:
+ up(&pf_to_mgmt->sync_msg_lock);
+
+ return err;
+}
+
+static int __get_clp_reg(void *hwdev, enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 *reg_addr)
+{
+ struct hifc_hwdev *dev = hwdev;
+ u32 offset;
+
+ offset = HIFC_CLP_REG_GAP * hifc_pcie_itf_id(dev);
+
+ switch (reg_type) {
+ case HIFC_CLP_BA_HOST:
+ *reg_addr = (data_type == HIFC_CLP_REQ_HOST) ?
+ HIFC_CLP_REG(REQ_SRAM_BA) :
+ HIFC_CLP_REG(RSP_SRAM_BA);
+ break;
+
+ case HIFC_CLP_SIZE_HOST:
+ *reg_addr = HIFC_CLP_REG(SRAM_SIZE);
+ break;
+
+ case HIFC_CLP_LEN_HOST:
+ *reg_addr = (data_type == HIFC_CLP_REQ_HOST) ?
+ HIFC_CLP_REG(REQ) : HIFC_CLP_REG(RSP);
+ break;
+
+ case HIFC_CLP_START_REQ_HOST:
+ *reg_addr = HIFC_CLP_REG(REQ);
+ break;
+
+ case HIFC_CLP_READY_RSP_HOST:
+ *reg_addr = HIFC_CLP_REG(RSP);
+ break;
+
+ default:
+ *reg_addr = 0;
+ break;
+ }
+ if (*reg_addr == 0)
+ return -EINVAL;
+
+ *reg_addr += offset;
+
+ return 0;
+}
+
+static inline int clp_param_valid(struct hifc_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type)
+{
+ if (data_type == HIFC_CLP_REQ_HOST &&
+ reg_type == HIFC_CLP_READY_RSP_HOST)
+ return -EINVAL;
+
+ if (data_type == HIFC_CLP_RSP_HOST &&
+ reg_type == HIFC_CLP_START_REQ_HOST)
+ return -EINVAL;
+
+ return 0;
+}
+
+static u32 get_clp_reg_value(struct hifc_hwdev *hwdev,
+ enum clp_reg_type reg_type, u32 reg_addr)
+{
+ u32 reg_value;
+
+ reg_value = hifc_hwif_read_reg(hwdev->hwif, reg_addr);
+
+ switch (reg_type) {
+ case HIFC_CLP_BA_HOST:
+ reg_value = ((reg_value >>
+ HIFC_CLP_OFFSET(SRAM_BASE)) &
+ HIFC_CLP_MASK(SRAM_BASE));
+ break;
+
+ case HIFC_CLP_SIZE_HOST:
+ reg_value = ((reg_value >>
+ HIFC_CLP_OFFSET(SRAM_SIZE)) &
+ HIFC_CLP_MASK(SRAM_SIZE));
+ break;
+
+ case HIFC_CLP_LEN_HOST:
+ reg_value = ((reg_value >> HIFC_CLP_OFFSET(LEN)) &
+ HIFC_CLP_MASK(LEN));
+ break;
+
+ case HIFC_CLP_START_REQ_HOST:
+ reg_value = ((reg_value >> HIFC_CLP_OFFSET(START)) &
+ HIFC_CLP_MASK(START));
+ break;
+
+ case HIFC_CLP_READY_RSP_HOST:
+ reg_value = ((reg_value >> HIFC_CLP_OFFSET(READY)) &
+ HIFC_CLP_MASK(READY));
+ break;
+
+ default:
+ break;
+ }
+
+ return reg_value;
+}
+
+static int hifc_read_clp_reg(struct hifc_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 *read_value)
+{
+ u32 reg_addr;
+ int err;
+
+ err = clp_param_valid(hwdev, data_type, reg_type);
+ if (err)
+ return err;
+
+ err = __get_clp_reg(hwdev, data_type, reg_type, &reg_addr);
+ if (err)
+ return err;
+
+ *read_value = get_clp_reg_value(hwdev, reg_type, reg_addr);
+
+ return 0;
+}
+
+static int __check_data_type(enum clp_data_type data_type,
+ enum clp_reg_type reg_type)
+{
+ if (data_type == HIFC_CLP_REQ_HOST &&
+ reg_type == HIFC_CLP_READY_RSP_HOST)
+ return -EINVAL;
+ if (data_type == HIFC_CLP_RSP_HOST &&
+ reg_type == HIFC_CLP_START_REQ_HOST)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int __check_reg_value(enum clp_reg_type reg_type, u32 value)
+{
+ if (reg_type == HIFC_CLP_BA_HOST &&
+ value > HIFC_CLP_SRAM_BASE_REG_MAX)
+ return -EINVAL;
+
+ if (reg_type == HIFC_CLP_SIZE_HOST &&
+ value > HIFC_CLP_SRAM_SIZE_REG_MAX)
+ return -EINVAL;
+
+ if (reg_type == HIFC_CLP_LEN_HOST &&
+ value > HIFC_CLP_LEN_REG_MAX)
+ return -EINVAL;
+
+ if ((reg_type == HIFC_CLP_START_REQ_HOST ||
+ reg_type == HIFC_CLP_READY_RSP_HOST) &&
+ value > HIFC_CLP_START_OR_READY_REG_MAX)
+ return -EINVAL;
+
+ return 0;
+}
+
+static void hifc_write_clp_reg(struct hifc_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 value)
+{
+ u32 reg_addr, reg_value;
+
+ if (__check_data_type(data_type, reg_type))
+ return;
+
+ if (__check_reg_value(reg_type, value))
+ return;
+
+ if (__get_clp_reg(hwdev, data_type, reg_type, &reg_addr))
+ return;
+
+ reg_value = hifc_hwif_read_reg(hwdev->hwif, reg_addr);
+
+ switch (reg_type) {
+ case HIFC_CLP_LEN_HOST:
+ reg_value = reg_value &
+ (~(HIFC_CLP_MASK(LEN) << HIFC_CLP_OFFSET(LEN)));
+ reg_value = reg_value | (value << HIFC_CLP_OFFSET(LEN));
+ break;
+
+ case HIFC_CLP_START_REQ_HOST:
+ reg_value = reg_value &
+ (~(HIFC_CLP_MASK(START) <<
+ HIFC_CLP_OFFSET(START)));
+ reg_value = reg_value | (value << HIFC_CLP_OFFSET(START));
+ break;
+
+ case HIFC_CLP_READY_RSP_HOST:
+ reg_value = reg_value &
+ (~(HIFC_CLP_MASK(READY) <<
+ HIFC_CLP_OFFSET(READY)));
+ reg_value = reg_value | (value << HIFC_CLP_OFFSET(READY));
+ break;
+
+ default:
+ return;
+ }
+
+ hifc_hwif_write_reg(hwdev->hwif, reg_addr, reg_value);
+}
+
+static int hifc_read_clp_data(struct hifc_hwdev *hwdev,
+ void *buf_out, u16 *out_size)
+{
+ int err;
+ u32 reg = HIFC_CLP_DATA(RSP);
+ u32 ready, delay_cnt;
+ u32 *ptr = (u32 *)buf_out;
+ u32 temp_out_size = 0;
+
+ err = hifc_read_clp_reg(hwdev, HIFC_CLP_RSP_HOST,
+ HIFC_CLP_READY_RSP_HOST, &ready);
+ if (err)
+ return err;
+
+ delay_cnt = 0;
+ while (ready == 0) {
+ usleep_range(9000, 10000);
+ delay_cnt++;
+ err = hifc_read_clp_reg(hwdev, HIFC_CLP_RSP_HOST,
+ HIFC_CLP_READY_RSP_HOST, &ready);
+ if (err || delay_cnt > HIFC_CLP_DELAY_CNT_MAX) {
+ sdk_err(hwdev->dev_hdl, "timeout with delay_cnt:%d\n",
+ delay_cnt);
+ return -EINVAL;
+ }
+ }
+
+ err = hifc_read_clp_reg(hwdev, HIFC_CLP_RSP_HOST,
+ HIFC_CLP_LEN_HOST, &temp_out_size);
+ if (err)
+ return err;
+
+ if (temp_out_size > HIFC_CLP_SRAM_SIZE_REG_MAX || !temp_out_size) {
+ sdk_err(hwdev->dev_hdl, "invalid temp_out_size:%d\n",
+ temp_out_size);
+ return -EINVAL;
+ }
+
+ *out_size = (u16)(temp_out_size & 0xffff);
+ for (; temp_out_size > 0; temp_out_size--) {
+ *ptr = hifc_hwif_read_reg(hwdev->hwif, reg);
+ ptr++;
+ reg = reg + 4;
+ }
+
+ hifc_write_clp_reg(hwdev, HIFC_CLP_RSP_HOST,
+ HIFC_CLP_READY_RSP_HOST, (u32)0x0);
+ hifc_write_clp_reg(hwdev, HIFC_CLP_RSP_HOST,
+ HIFC_CLP_LEN_HOST, (u32)0x0);
+
+ return 0;
+}
+
+static int hifc_write_clp_data(struct hifc_hwdev *hwdev,
+ void *buf_in, u16 in_size)
+{
+ int err;
+ u32 reg = HIFC_CLP_DATA(REQ);
+ u32 start = 1;
+ u32 delay_cnt = 0;
+ u32 *ptr = (u32 *)buf_in;
+
+ err = hifc_read_clp_reg(hwdev, HIFC_CLP_REQ_HOST,
+ HIFC_CLP_START_REQ_HOST, &start);
+ if (err)
+ return err;
+
+ while (start == 1) {
+ usleep_range(9000, 10000);
+ delay_cnt++;
+ err = hifc_read_clp_reg(hwdev, HIFC_CLP_REQ_HOST,
+ HIFC_CLP_START_REQ_HOST, &start);
+ if (err || delay_cnt > HIFC_CLP_DELAY_CNT_MAX)
+ return -EINVAL;
+ }
+
+ hifc_write_clp_reg(hwdev, HIFC_CLP_REQ_HOST,
+ HIFC_CLP_LEN_HOST, in_size);
+ hifc_write_clp_reg(hwdev, HIFC_CLP_REQ_HOST,
+ HIFC_CLP_START_REQ_HOST, (u32)0x1);
+
+ for (; in_size > 0; in_size--) {
+ hifc_hwif_write_reg(hwdev->hwif, reg, *ptr);
+ ptr++;
+ reg = reg + 4;
+ }
+
+ return 0;
+}
+
+static int hifc_check_clp_init_status(struct hifc_hwdev *hwdev)
+{
+ int err;
+ u32 reg_value = 0;
+
+ err = hifc_read_clp_reg(hwdev, HIFC_CLP_REQ_HOST,
+ HIFC_CLP_BA_HOST, &reg_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong req ba value:0x%x\n", reg_value);
+ return -EINVAL;
+ }
+
+ err = hifc_read_clp_reg(hwdev, HIFC_CLP_RSP_HOST,
+ HIFC_CLP_BA_HOST, &reg_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong rsp ba value:0x%x\n", reg_value);
+ return -EINVAL;
+ }
+
+ err = hifc_read_clp_reg(hwdev, HIFC_CLP_REQ_HOST,
+ HIFC_CLP_SIZE_HOST, &reg_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong req size\n");
+ return -EINVAL;
+ }
+
+ err = hifc_read_clp_reg(hwdev, HIFC_CLP_RSP_HOST,
+ HIFC_CLP_SIZE_HOST, &reg_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong rsp size\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void hifc_clear_clp_data(struct hifc_hwdev *hwdev,
+ enum clp_data_type data_type)
+{
+ u32 reg = (data_type == HIFC_CLP_REQ_HOST) ?
+ HIFC_CLP_DATA(REQ) : HIFC_CLP_DATA(RSP);
+ u32 count = HIFC_CLP_INPUT_BUFFER_LEN_HOST / HIFC_CLP_DATA_UNIT_HOST;
+
+ for (; count > 0; count--) {
+ hifc_hwif_write_reg(hwdev->hwif, reg, 0x0);
+ reg = reg + 4;
+ }
+}
+
+int hifc_pf_clp_to_mgmt(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ const void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hifc_clp_pf_to_mgmt *clp_pf_to_mgmt;
+ struct hifc_hwdev *dev = hwdev;
+ u64 header;
+ u16 real_size;
+ u8 *clp_msg_buf;
+ int err;
+
+ clp_pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->clp_pf_to_mgmt;
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+
+ /*4 bytes alignment*/
+ if (in_size % HIFC_CLP_DATA_UNIT_HOST)
+ real_size = (in_size + (u16)sizeof(header)
+ + HIFC_CLP_DATA_UNIT_HOST);
+ else
+ real_size = in_size + (u16)sizeof(header);
+ real_size = real_size / HIFC_CLP_DATA_UNIT_HOST;
+
+ if (real_size >
+ (HIFC_CLP_INPUT_BUFFER_LEN_HOST / HIFC_CLP_DATA_UNIT_HOST)) {
+ sdk_err(dev->dev_hdl, "Invalid real_size:%d\n", real_size);
+ return -EINVAL;
+ }
+ down(&clp_pf_to_mgmt->clp_msg_lock);
+
+ err = hifc_check_clp_init_status(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Check clp init status failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return err;
+ }
+
+ hifc_clear_clp_data(dev, HIFC_CLP_RSP_HOST);
+ hifc_write_clp_reg(dev, HIFC_CLP_RSP_HOST,
+ HIFC_CLP_READY_RSP_HOST, 0x0);
+
+ /*Send request*/
+ memset(clp_msg_buf, 0x0, HIFC_CLP_INPUT_BUFFER_LEN_HOST);
+ clp_prepare_header(dev, &header, in_size, mod, 0, 0, cmd, 0);
+
+ memcpy(clp_msg_buf, &header, sizeof(header));
+ clp_msg_buf += sizeof(header);
+ memcpy(clp_msg_buf, buf_in, in_size);
+
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+
+ hifc_clear_clp_data(dev, HIFC_CLP_REQ_HOST);
+ err = hifc_write_clp_data(hwdev,
+ clp_pf_to_mgmt->clp_msg_buf, real_size);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Send clp request failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ /*Get response*/
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+ memset(clp_msg_buf, 0x0, HIFC_CLP_INPUT_BUFFER_LEN_HOST);
+ err = hifc_read_clp_data(hwdev, clp_msg_buf, &real_size);
+ hifc_clear_clp_data(dev, HIFC_CLP_RSP_HOST);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Read clp response failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ real_size = (u16)((real_size * HIFC_CLP_DATA_UNIT_HOST) & 0xffff);
+ if ((real_size <= sizeof(header)) ||
+ (real_size > HIFC_CLP_INPUT_BUFFER_LEN_HOST)) {
+ sdk_err(dev->dev_hdl, "Invalid response size:%d", real_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+ real_size = real_size - sizeof(header);
+ if (real_size != *out_size) {
+ sdk_err(dev->dev_hdl, "Invalid real_size:%d, out_size:%d\n",
+ real_size, *out_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ memcpy(buf_out, (clp_msg_buf + sizeof(header)), real_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+
+ return 0;
+}
+
+/* This function is only used by txrx flush */
+int hifc_pf_to_mgmt_no_ack(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ void *dev = ((struct hifc_hwdev *)hwdev)->dev_hdl;
+ int err = -EINVAL;
+
+ if (!hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_MGMT_INITED)) {
+ sdk_err(dev, "Mgmt module not initialized\n");
+ return -EINVAL;
+ }
+
+ pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->pf_to_mgmt;
+
+ if (!MSG_SZ_IS_VALID(in_size)) {
+ sdk_err(dev, "Mgmt msg buffer size: %d is not valid\n",
+ in_size);
+ return -EINVAL;
+ }
+
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag))
+ return -EPERM;
+
+ /* Lock the sync_msg_buf */
+ down(&pf_to_mgmt->sync_msg_lock);
+
+ err = send_msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ HIFC_MSG_NO_ACK, HIFC_MSG_DIRECT_SEND,
+ MSG_NO_RESP);
+
+ up(&pf_to_mgmt->sync_msg_lock);
+
+ return err;
+}
+
+/**
+ * api cmd write or read bypass defaults to polling; to use the aeq interrupt,
+ * please set wb_trigger_aeqe to 1
+ **/
+int hifc_api_cmd_write_nack(void *hwdev, u8 dest, void *cmd, u16 size)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ struct hifc_api_cmd_chain *chain;
+
+ if (!hwdev || !size || !cmd)
+ return -EINVAL;
+
+ if (!hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_MGMT_INITED) ||
+ hifc_get_mgmt_channel_status(hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->pf_to_mgmt;
+ chain = pf_to_mgmt->cmd_chain[HIFC_API_CMD_POLL_WRITE];
+
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag))
+ return -EPERM;
+
+ return hifc_api_cmd_write(chain, dest, cmd, size);
+}
+
+int hifc_api_cmd_read_ack(void *hwdev, u8 dest, void *cmd, u16 size, void *ack,
+ u16 ack_size)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ struct hifc_api_cmd_chain *chain;
+
+ if (!hwdev || !cmd || (ack_size && !ack))
+ return -EINVAL;
+
+ if (!hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_MGMT_INITED) ||
+ hifc_get_mgmt_channel_status(hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hifc_hwdev *)hwdev)->pf_to_mgmt;
+ chain = pf_to_mgmt->cmd_chain[HIFC_API_CMD_POLL_READ];
+
+ if (!(((struct hifc_hwdev *)hwdev)->chip_present_flag))
+ return -EPERM;
+
+ return hifc_api_cmd_read(chain, dest, cmd, size, ack, ack_size);
+}
+
+static void __send_mgmt_ack(struct hifc_msg_pf_to_mgmt *pf_to_mgmt,
+ enum hifc_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size, u16 msg_id)
+{
+ u16 buf_size;
+
+ if (!in_size)
+ buf_size = BUF_OUT_DEFAULT_SIZE;
+ else
+ buf_size = in_size;
+
+ spin_lock_bh(&pf_to_mgmt->async_msg_lock);
+ /* MGMT sent sync msg, send the response */
+ send_msg_to_mgmt_async(pf_to_mgmt, mod, cmd,
+ buf_in, buf_size, HIFC_MSG_RESPONSE,
+ msg_id);
+ spin_unlock_bh(&pf_to_mgmt->async_msg_lock);
+}
+
+/**
+ * mgmt_recv_msg_handler - handler for a message from mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module in the chip that the message belongs to
+ * @cmd: command of the message
+ * @buf_in: the data of the message
+ * @in_size: the length of the message data
+ * @msg_id: message id
+ * @need_resp: whether a response needs to be sent back to mgmt
+ **/
+static void mgmt_recv_msg_handler(struct hifc_msg_pf_to_mgmt *pf_to_mgmt,
+ enum hifc_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size, u16 msg_id, int need_resp)
+{
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+ void *buf_out = pf_to_mgmt->mgmt_ack_buf;
+ enum hifc_mod_type tmp_mod = mod;
+ bool ack_first = false;
+ u16 out_size = 0;
+
+ memset(buf_out, 0, MAX_PF_MGMT_BUF_SIZE);
+
+ if (mod >= HIFC_MOD_HW_MAX) {
+ sdk_warn(dev, "Receive illegal message from mgmt cpu, mod = %d\n",
+ mod);
+ goto resp;
+ }
+
+ set_bit(HIFC_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+
+ if (!pf_to_mgmt->recv_mgmt_msg_cb[mod] ||
+ !test_bit(HIFC_MGMT_MSG_CB_REG,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod])) {
+ sdk_warn(dev, "Receive mgmt callback is null, mod = %d\n",
+ mod);
+ clear_bit(HIFC_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+ goto resp;
+ }
+
+ ack_first = hifc_mgmt_event_ack_first(mod, cmd);
+ if (ack_first && need_resp) {
+ /* send ack to mgmt first to avoid command timeout in
+ * mgmt (100 ms in mgmt);
+ * mgmt to host commands don't need any response data from host,
+ * just need ack from host
+ */
+ __send_mgmt_ack(pf_to_mgmt, mod, cmd, buf_out, in_size, msg_id);
+ }
+
+ pf_to_mgmt->recv_mgmt_msg_cb[tmp_mod](pf_to_mgmt->hwdev,
+ pf_to_mgmt->recv_mgmt_msg_data[tmp_mod],
+ cmd, buf_in, in_size,
+ buf_out, &out_size);
+
+ clear_bit(HIFC_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+
+resp:
+ if (!ack_first && need_resp)
+ __send_mgmt_ack(pf_to_mgmt, mod, cmd, buf_out, out_size,
+ msg_id);
+}
+
+/**
+ * mgmt_resp_msg_handler - handler for response message from mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @recv_msg: received message details
+ **/
+static void mgmt_resp_msg_handler(struct hifc_msg_pf_to_mgmt *pf_to_mgmt,
+ struct hifc_recv_msg *recv_msg)
+{
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+
+ /* delete async msg */
+ if (recv_msg->msg_id & ASYNC_MSG_FLAG)
+ return;
+
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ if (recv_msg->msg_id == pf_to_mgmt->sync_msg_id &&
+ pf_to_mgmt->event_flag == SEND_EVENT_START) {
+ complete(&recv_msg->recv_done);
+ } else if (recv_msg->msg_id != pf_to_mgmt->sync_msg_id) {
+ sdk_err(dev, "Send msg id(0x%x) recv msg id(0x%x) mismatch, event state=%d\n",
+ pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
+ pf_to_mgmt->event_flag);
+ } else {
+ sdk_err(dev, "Wait timeout, send msg id(0x%x) recv msg id(0x%x), event state=%d!\n",
+ pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
+ pf_to_mgmt->event_flag);
+ }
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+}
+
+static void recv_mgmt_msg_work_handler(struct work_struct *work)
+{
+ struct hifc_mgmt_msg_handle_work *mgmt_work =
+ container_of(work, struct hifc_mgmt_msg_handle_work, work);
+
+ mgmt_recv_msg_handler(mgmt_work->pf_to_mgmt, mgmt_work->mod,
+ mgmt_work->cmd, mgmt_work->msg,
+ mgmt_work->msg_len, mgmt_work->msg_id,
+ !mgmt_work->async_mgmt_to_pf);
+
+ kfree(mgmt_work->msg);
+ kfree(mgmt_work);
+}
+
+static bool check_mgmt_seq_id_and_seg_len(struct hifc_recv_msg *recv_msg,
+ u8 seq_id, u8 seg_len)
+{
+ if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN)
+ return false;
+
+ if (seq_id == 0) {
+ recv_msg->seq_id = seq_id;
+ } else {
+ if (seq_id != recv_msg->seq_id + 1)
+ return false;
+ recv_msg->seq_id = seq_id;
+ }
+
+ return true;
+}
+
+/**
+ * recv_mgmt_msg_handler - handle a message from mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @header: the header of the message
+ * @recv_msg: received message details
+ **/
+static void recv_mgmt_msg_handler(struct hifc_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 *header, struct hifc_recv_msg *recv_msg)
+{
+ struct hifc_mgmt_msg_handle_work *mgmt_work;
+ u64 mbox_header = *((u64 *)header);
+ void *msg_body = header + sizeof(mbox_header);
+ u8 seq_id, seq_len;
+ u32 offset;
+ u64 dir;
+
+ /* Don't need to get anything from hw when cmd is async */
+ dir = HIFC_MSG_HEADER_GET(mbox_header, DIRECTION);
+ if (dir == HIFC_MSG_RESPONSE &&
+ HIFC_MSG_HEADER_GET(mbox_header, MSG_ID) & ASYNC_MSG_FLAG)
+ return;
+
+ seq_len = HIFC_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ seq_id = HIFC_MSG_HEADER_GET(mbox_header, SEQID);
+
+ if (!check_mgmt_seq_id_and_seg_len(recv_msg, seq_id, seq_len)) {
+ sdk_err(pf_to_mgmt->hwdev->dev_hdl,
+ "Mgmt msg sequence id and segment length check fail, front seq_id: 0x%x, current seq_id: 0x%x, seg len: 0x%x\n",
+ recv_msg->seq_id, seq_id, seq_len);
+ /* set seq_id to invalid seq_id */
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+ return;
+ }
+
+ offset = seq_id * SEGMENT_LEN;
+ memcpy((u8 *)recv_msg->msg + offset, msg_body, seq_len);
+
+ if (!HIFC_MSG_HEADER_GET(mbox_header, LAST))
+ return;
+
+ recv_msg->cmd = HIFC_MSG_HEADER_GET(mbox_header, CMD);
+ recv_msg->mod = HIFC_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_msg->async_mgmt_to_pf = HIFC_MSG_HEADER_GET(mbox_header,
+ ASYNC_MGMT_TO_PF);
+ recv_msg->msg_len = HIFC_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ recv_msg->msg_id = HIFC_MSG_HEADER_GET(mbox_header, MSG_ID);
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ if (HIFC_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ HIFC_MSG_RESPONSE) {
+ mgmt_resp_msg_handler(pf_to_mgmt, recv_msg);
+ return;
+ }
+
+ mgmt_work = kzalloc(sizeof(*mgmt_work), GFP_KERNEL);
+ if (!mgmt_work) {
+ sdk_err(pf_to_mgmt->hwdev->dev_hdl,
+ "Allocate mgmt work memory failed\n");
+ return;
+ }
+
+ if (recv_msg->msg_len) {
+ mgmt_work->msg = kzalloc(recv_msg->msg_len, GFP_KERNEL);
+ if (!mgmt_work->msg) {
+ sdk_err(pf_to_mgmt->hwdev->dev_hdl, "Allocate mgmt msg memory failed\n");
+ kfree(mgmt_work);
+ return;
+ }
+ }
+
+ mgmt_work->pf_to_mgmt = pf_to_mgmt;
+ mgmt_work->msg_len = recv_msg->msg_len;
+ if (recv_msg->msg_len)
+ memcpy(mgmt_work->msg, recv_msg->msg, recv_msg->msg_len);
+ mgmt_work->msg_id = recv_msg->msg_id;
+ mgmt_work->mod = recv_msg->mod;
+ mgmt_work->cmd = recv_msg->cmd;
+ mgmt_work->async_mgmt_to_pf = recv_msg->async_mgmt_to_pf;
+
+ INIT_WORK(&mgmt_work->work, recv_mgmt_msg_work_handler);
+ queue_work(pf_to_mgmt->workq, &mgmt_work->work);
+}
+
+/**
+ * hifc_mgmt_msg_aeqe_handler - handler for a mgmt message event
+ * @hwdev: the pointer to hw device
+ * @header: the header of the message
+ * @size: unused
+ **/
+void hifc_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size)
+{
+ struct hifc_hwdev *dev = (struct hifc_hwdev *)hwdev;
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ struct hifc_recv_msg *recv_msg;
+ bool is_send_dir = false;
+
+ pf_to_mgmt = dev->pf_to_mgmt;
+
+ is_send_dir = (HIFC_MSG_HEADER_GET(*(u64 *)header, DIRECTION) ==
+ HIFC_MSG_DIRECT_SEND) ? true : false;
+
+ recv_msg = is_send_dir ? &pf_to_mgmt->recv_msg_from_mgmt :
+ &pf_to_mgmt->recv_resp_msg_from_mgmt;
+
+ recv_mgmt_msg_handler(pf_to_mgmt, header, recv_msg);
+}
+
+/**
+ * alloc_recv_msg - allocate received message memory
+ * @recv_msg: pointer that will hold the allocated data
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_recv_msg(struct hifc_recv_msg *recv_msg)
+{
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ recv_msg->msg = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!recv_msg->msg)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/**
+ * free_recv_msg - free received message memory
+ * @recv_msg: pointer that holds the allocated data
+ **/
+static void free_recv_msg(struct hifc_recv_msg *recv_msg)
+{
+ kfree(recv_msg->msg);
+}
+
+/**
+ * alloc_msg_buf - allocate all the message buffers of PF to MGMT channel
+ * @pf_to_mgmt: PF to MGMT channel
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_msg_buf(struct hifc_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ int err;
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate recv msg\n");
+ return err;
+ }
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate resp recv msg\n");
+ goto alloc_msg_for_resp_err;
+ }
+
+ pf_to_mgmt->async_msg_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->async_msg_buf) {
+ err = -ENOMEM;
+ goto async_msg_buf_err;
+ }
+
+ pf_to_mgmt->sync_msg_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->sync_msg_buf) {
+ err = -ENOMEM;
+ goto sync_msg_buf_err;
+ }
+
+ pf_to_mgmt->mgmt_ack_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->mgmt_ack_buf) {
+ err = -ENOMEM;
+ goto ack_msg_buf_err;
+ }
+
+ return 0;
+
+ack_msg_buf_err:
+ kfree(pf_to_mgmt->sync_msg_buf);
+
+sync_msg_buf_err:
+ kfree(pf_to_mgmt->async_msg_buf);
+
+async_msg_buf_err:
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+
+alloc_msg_for_resp_err:
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ return err;
+}
+
+/**
+ * free_msg_buf - free all the message buffers of PF to MGMT channel
+ * @pf_to_mgmt: PF to MGMT channel
+ **/
+static void free_msg_buf(struct hifc_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ kfree(pf_to_mgmt->mgmt_ack_buf);
+ kfree(pf_to_mgmt->sync_msg_buf);
+ kfree(pf_to_mgmt->async_msg_buf);
+
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+}
+
+/**
+ * hifc_pf_to_mgmt_init - initialize PF to MGMT channel
+ * @hwdev: the pointer to hw device
+ * Return: 0 - success, negative - failure
+ **/
+int hifc_pf_to_mgmt_init(struct hifc_hwdev *hwdev)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ void *dev = hwdev->dev_hdl;
+ int err;
+
+ pf_to_mgmt = kzalloc(sizeof(*pf_to_mgmt), GFP_KERNEL);
+ if (!pf_to_mgmt)
+ return -ENOMEM;
+
+ hwdev->pf_to_mgmt = pf_to_mgmt;
+ pf_to_mgmt->hwdev = hwdev;
+ spin_lock_init(&pf_to_mgmt->async_msg_lock);
+ spin_lock_init(&pf_to_mgmt->sync_event_lock);
+ sema_init(&pf_to_mgmt->sync_msg_lock, 1);
+ pf_to_mgmt->workq = create_singlethread_workqueue(HIFC_MGMT_WQ_NAME);
+ if (!pf_to_mgmt->workq) {
+ sdk_err(dev, "Failed to initialize MGMT workqueue\n");
+ err = -ENOMEM;
+ goto create_mgmt_workq_err;
+ }
+
+ err = alloc_msg_buf(pf_to_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate msg buffers\n");
+ goto alloc_msg_buf_err;
+ }
+
+ err = hifc_api_cmd_init(hwdev, pf_to_mgmt->cmd_chain);
+ if (err) {
+ sdk_err(dev, "Failed to init the api cmd chains\n");
+ goto api_cmd_init_err;
+ }
+
+ return 0;
+
+api_cmd_init_err:
+ free_msg_buf(pf_to_mgmt);
+
+alloc_msg_buf_err:
+ destroy_workqueue(pf_to_mgmt->workq);
+
+create_mgmt_workq_err:
+ kfree(pf_to_mgmt);
+
+ return err;
+}
+
+/**
+ * hifc_pf_to_mgmt_free - free PF to MGMT channel
+ * @hwdev: the pointer to hw device
+ **/
+void hifc_pf_to_mgmt_free(struct hifc_hwdev *hwdev)
+{
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt = hwdev->pf_to_mgmt;
+
+ /* destroy the workqueue before freeing related pf_to_mgmt resources
+ * to avoid illegal resource access
+ */
+ destroy_workqueue(pf_to_mgmt->workq);
+ hifc_api_cmd_free(pf_to_mgmt->cmd_chain);
+ free_msg_buf(pf_to_mgmt);
+ kfree(pf_to_mgmt);
+}
+
+void hifc_flush_mgmt_workq(void *hwdev)
+{
+ struct hifc_hwdev *dev = (struct hifc_hwdev *)hwdev;
+
+ flush_workqueue(dev->aeqs->workq);
+
+ if (hifc_func_type(dev) != TYPE_VF &&
+ hifc_is_hwdev_mod_inited(hwdev, HIFC_HWDEV_MGMT_INITED))
+ flush_workqueue(dev->pf_to_mgmt->workq);
+}
+
+int hifc_clp_pf_to_mgmt_init(struct hifc_hwdev *hwdev)
+{
+ struct hifc_clp_pf_to_mgmt *clp_pf_to_mgmt;
+
+ clp_pf_to_mgmt = kzalloc(sizeof(*clp_pf_to_mgmt), GFP_KERNEL);
+ if (!clp_pf_to_mgmt)
+ return -ENOMEM;
+
+ clp_pf_to_mgmt->clp_msg_buf = kzalloc(HIFC_CLP_INPUT_BUFFER_LEN_HOST,
+ GFP_KERNEL);
+ if (!clp_pf_to_mgmt->clp_msg_buf) {
+ kfree(clp_pf_to_mgmt);
+ return -ENOMEM;
+ }
+ sema_init(&clp_pf_to_mgmt->clp_msg_lock, 1);
+
+ hwdev->clp_pf_to_mgmt = clp_pf_to_mgmt;
+
+ return 0;
+}
+
+void hifc_clp_pf_to_mgmt_free(struct hifc_hwdev *hwdev)
+{
+ struct hifc_clp_pf_to_mgmt *clp_pf_to_mgmt = hwdev->clp_pf_to_mgmt;
+
+ kfree(clp_pf_to_mgmt->clp_msg_buf);
+ kfree(clp_pf_to_mgmt);
+}
+
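For orientation, a minimal sketch of how a service module might issue a synchronous management command through the channel implemented above; the command choice (get board info), buffer sizes and caller name are placeholders rather than part of this patch:

static int example_get_board_info(void *hwdev)
{
	u8 buf_in[16] = {0};	/* request body, layout is device specific */
	u8 buf_out[64] = {0};	/* response body */
	u16 out_size = sizeof(buf_out);

	/* blocks until the mgmt cpu answers or MGMT_MSG_TIMEOUT expires */
	return hifc_pf_to_mgmt_sync(hwdev, HIFC_MOD_COMM,
				    HIFC_MGMT_CMD_GET_BOARD_INFO,
				    buf_in, sizeof(buf_in),
				    buf_out, &out_size, 0);
}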
diff --git a/drivers/scsi/huawei/hifc/hifc_mgmt.h b/drivers/scsi/huawei/hifc/hifc_mgmt.h
new file mode 100644
index 000000000000..2adcfe2968c1
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_mgmt.h
@@ -0,0 +1,407 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef HIFC_MGMT_H_
+#define HIFC_MGMT_H_
+
+#define HIFC_MSG_HEADER_MSG_LEN_SHIFT 0
+#define HIFC_MSG_HEADER_MODULE_SHIFT 11
+#define HIFC_MSG_HEADER_SEG_LEN_SHIFT 16
+#define HIFC_MSG_HEADER_NO_ACK_SHIFT 22
+#define HIFC_MSG_HEADER_ASYNC_MGMT_TO_PF_SHIFT 23
+#define HIFC_MSG_HEADER_SEQID_SHIFT 24
+#define HIFC_MSG_HEADER_LAST_SHIFT 30
+#define HIFC_MSG_HEADER_DIRECTION_SHIFT 31
+#define HIFC_MSG_HEADER_CMD_SHIFT 32
+#define HIFC_MSG_HEADER_PCI_INTF_IDX_SHIFT 48
+#define HIFC_MSG_HEADER_P2P_IDX_SHIFT 50
+#define HIFC_MSG_HEADER_MSG_ID_SHIFT 54
+
+#define HIFC_MSG_HEADER_MSG_LEN_MASK 0x7FF
+#define HIFC_MSG_HEADER_MODULE_MASK 0x1F
+#define HIFC_MSG_HEADER_SEG_LEN_MASK 0x3F
+#define HIFC_MSG_HEADER_NO_ACK_MASK 0x1
+#define HIFC_MSG_HEADER_ASYNC_MGMT_TO_PF_MASK 0x1
+#define HIFC_MSG_HEADER_SEQID_MASK 0x3F
+#define HIFC_MSG_HEADER_LAST_MASK 0x1
+#define HIFC_MSG_HEADER_DIRECTION_MASK 0x1
+#define HIFC_MSG_HEADER_CMD_MASK 0xFF
+#define HIFC_MSG_HEADER_PCI_INTF_IDX_MASK 0x3
+#define HIFC_MSG_HEADER_P2P_IDX_MASK 0xF
+#define HIFC_MSG_HEADER_MSG_ID_MASK 0x3FF
+
+#define HIFC_MSG_HEADER_GET(val, member) \
+ (((val) >> HIFC_MSG_HEADER_##member##_SHIFT) & \
+ HIFC_MSG_HEADER_##member##_MASK)
+
+#define HIFC_MSG_HEADER_SET(val, member) \
+ ((u64)((val) & HIFC_MSG_HEADER_##member##_MASK) << \
+ HIFC_MSG_HEADER_##member##_SHIFT)
+
+#define HIFC_MGMT_WQ_NAME "hifc_mgmt"
+
+/*CLP*/
+enum clp_data_type {
+ HIFC_CLP_REQ_HOST = 0,
+ HIFC_CLP_RSP_HOST = 1
+};
+
+enum clp_reg_type {
+ HIFC_CLP_BA_HOST = 0,
+ HIFC_CLP_SIZE_HOST = 1,
+ HIFC_CLP_LEN_HOST = 2,
+ HIFC_CLP_START_REQ_HOST = 3,
+ HIFC_CLP_READY_RSP_HOST = 4
+};
+
+/* cmd of mgmt CPU message for HW module */
+enum hifc_mgmt_cmd {
+ HIFC_MGMT_CMD_RESET_MGMT = 0x0,
+ HIFC_MGMT_CMD_START_FLR = 0x1,
+ HIFC_MGMT_CMD_FLUSH_DOORBELL = 0x2,
+ HIFC_MGMT_CMD_CMDQ_CTXT_SET = 0x10,
+ HIFC_MGMT_CMD_VAT_SET = 0x12,
+ HIFC_MGMT_CMD_L2NIC_SQ_CI_ATTR_SET = 0x14,
+ HIFC_MGMT_CMD_PPF_TMR_SET = 0x22,
+ HIFC_MGMT_CMD_PPF_HT_GPA_SET = 0x23,
+ HIFC_MGMT_CMD_RES_STATE_SET = 0x24,
+ HIFC_MGMT_CMD_FUNC_TMR_BITMAT_SET = 0x32,
+ HIFC_MGMT_CMD_CEQ_CTRL_REG_WR_BY_UP = 0x33,
+ HIFC_MGMT_CMD_MSI_CTRL_REG_WR_BY_UP,
+ HIFC_MGMT_CMD_MSI_CTRL_REG_RD_BY_UP,
+ HIFC_MGMT_CMD_FAULT_REPORT = 0x37,
+ HIFC_MGMT_CMD_HEART_LOST_REPORT = 0x38,
+ HIFC_MGMT_CMD_SYNC_TIME = 0x46,
+ HIFC_MGMT_CMD_REG_READ = 0x48,
+ HIFC_MGMT_CMD_L2NIC_RESET = 0x4b,
+ HIFC_MGMT_CMD_ACTIVATE_FW = 0x4F,
+ HIFC_MGMT_CMD_PAGESIZE_SET = 0x50,
+ HIFC_MGMT_CMD_GET_BOARD_INFO = 0x52,
+ HIFC_MGMT_CMD_WATCHDOG_INFO = 0x56,
+ HIFC_MGMT_CMD_FMW_ACT_NTC = 0x57,
+ HIFC_MGMT_CMD_PCIE_DFX_NTC = 0x65,
+ HIFC_MGMT_CMD_PCIE_DFX_GET = 0x66,
+ HIFC_MGMT_CMD_GET_HOST_INFO = 0x67,
+ HIFC_MGMT_CMD_GET_PHY_INIT_STATUS = 0x6A,
+ HIFC_MGMT_CMD_HEARTBEAT_EVENT = 0x6C,
+};
+
+#define HIFC_CLP_REG_GAP 0x20
+#define HIFC_CLP_INPUT_BUFFER_LEN_HOST 2048UL
+#define HIFC_CLP_OUTPUT_BUFFER_LEN_HOST 2048UL
+#define HIFC_CLP_DATA_UNIT_HOST 4UL
+#define HIFC_BAR01_GLOABAL_CTL_OFFSET 0x4000
+#define HIFC_BAR01_CLP_OFFSET 0x5000
+
+#define HIFC_CLP_SRAM_SIZE_REG (HIFC_BAR01_GLOABAL_CTL_OFFSET + 0x220)
+#define HIFC_CLP_REQ_SRAM_BA_REG (HIFC_BAR01_GLOABAL_CTL_OFFSET + 0x224)
+#define HIFC_CLP_RSP_SRAM_BA_REG (HIFC_BAR01_GLOABAL_CTL_OFFSET + 0x228)
+#define HIFC_CLP_REQ_REG (HIFC_BAR01_GLOABAL_CTL_OFFSET + 0x22c)
+#define HIFC_CLP_RSP_REG (HIFC_BAR01_GLOABAL_CTL_OFFSET + 0x230)
+#define HIFC_CLP_REG(member) (HIFC_CLP_##member##_REG)
+
+#define HIFC_CLP_REQ_DATA (HIFC_BAR01_CLP_OFFSET)
+#define HIFC_CLP_RSP_DATA (HIFC_BAR01_CLP_OFFSET + 0x1000)
+#define HIFC_CLP_DATA(member) (HIFC_CLP_##member##_DATA)
+
+#define HIFC_CLP_SRAM_SIZE_OFFSET 16
+#define HIFC_CLP_SRAM_BASE_OFFSET 0
+#define HIFC_CLP_LEN_OFFSET 0
+#define HIFC_CLP_START_OFFSET 31
+#define HIFC_CLP_READY_OFFSET 31
+#define HIFC_CLP_OFFSET(member) (HIFC_CLP_##member##_OFFSET)
+
+#define HIFC_CLP_SRAM_SIZE_BIT_LEN 0x7ffUL
+#define HIFC_CLP_SRAM_BASE_BIT_LEN 0x7ffffffUL
+#define HIFC_CLP_LEN_BIT_LEN 0x7ffUL
+#define HIFC_CLP_START_BIT_LEN 0x1UL
+#define HIFC_CLP_READY_BIT_LEN 0x1UL
+#define HIFC_CLP_MASK(member) (HIFC_CLP_##member##_BIT_LEN)
+
+#define HIFC_CLP_DELAY_CNT_MAX 200UL
+#define HIFC_CLP_SRAM_SIZE_REG_MAX 0x3ff
+#define HIFC_CLP_SRAM_BASE_REG_MAX 0x7ffffff
+#define HIFC_CLP_LEN_REG_MAX 0x3ff
+#define HIFC_CLP_START_OR_READY_REG_MAX 0x1
+#define HIFC_MGMT_CMD_UNSUPPORTED 0xFF
+
+enum hifc_msg_direction_type {
+ HIFC_MSG_DIRECT_SEND = 0,
+ HIFC_MSG_RESPONSE = 1
+};
+
+enum hifc_msg_segment_type {
+ NOT_LAST_SEGMENT = 0,
+ LAST_SEGMENT = 1,
+};
+
+enum hifc_mgmt_msg_type {
+ ASYNC_MGMT_MSG = 0,
+ SYNC_MGMT_MSG = 1,
+};
+
+enum hifc_msg_ack_type {
+ HIFC_MSG_ACK = 0,
+ HIFC_MSG_NO_ACK = 1,
+};
+
+struct hifc_recv_msg {
+ void *msg;
+
+ struct completion recv_done;
+
+ u16 msg_len;
+ enum hifc_mod_type mod;
+ u8 cmd;
+ u8 seq_id;
+ u16 msg_id;
+ int async_mgmt_to_pf;
+};
+
+struct hifc_msg_head {
+ u8 status;
+ u8 version;
+ u8 resp_aeq_num;
+ u8 rsvd0[5];
+};
+
+#define HIFC_COMM_SELF_CMD_MAX 8
+
+struct comm_up_self_msg_sub_info {
+ u8 cmd;
+ comm_up_self_msg_proc proc;
+};
+
+struct comm_up_self_msg_info {
+ u8 cmd_num;
+ struct comm_up_self_msg_sub_info info[HIFC_COMM_SELF_CMD_MAX];
+};
+
+enum comm_pf_to_mgmt_event_state {
+ SEND_EVENT_UNINIT = 0,
+ SEND_EVENT_START,
+ SEND_EVENT_FAIL,
+ SEND_EVENT_TIMEOUT,
+ SEND_EVENT_END,
+};
+
+enum hifc_mgmt_msg_cb_state {
+ HIFC_MGMT_MSG_CB_REG = 0,
+ HIFC_MGMT_MSG_CB_RUNNING,
+};
+
+struct hifc_clp_pf_to_mgmt {
+ struct semaphore clp_msg_lock;
+ void *clp_msg_buf;
+};
+
+struct hifc_msg_pf_to_mgmt {
+ struct hifc_hwdev *hwdev;
+
+ /* Async cmd can not be scheduled */
+ spinlock_t async_msg_lock;
+ struct semaphore sync_msg_lock;
+
+ struct workqueue_struct *workq;
+
+ void *async_msg_buf;
+ void *sync_msg_buf;
+ void *mgmt_ack_buf;
+
+ struct hifc_recv_msg recv_msg_from_mgmt;
+ struct hifc_recv_msg recv_resp_msg_from_mgmt;
+
+ u16 async_msg_id;
+ u16 sync_msg_id;
+
+ struct hifc_api_cmd_chain *cmd_chain[HIFC_API_CMD_MAX];
+
+ hifc_mgmt_msg_cb recv_mgmt_msg_cb[HIFC_MOD_HW_MAX];
+ void *recv_mgmt_msg_data[HIFC_MOD_HW_MAX];
+ unsigned long mgmt_msg_cb_state[HIFC_MOD_HW_MAX];
+
+ struct comm_up_self_msg_info proc;
+
+ /* lock when sending msg */
+ spinlock_t sync_event_lock;
+ enum comm_pf_to_mgmt_event_state event_flag;
+};
+
+struct hifc_mgmt_msg_handle_work {
+ struct work_struct work;
+ struct hifc_msg_pf_to_mgmt *pf_to_mgmt;
+ void *msg;
+ u16 msg_len;
+ enum hifc_mod_type mod;
+ u8 cmd;
+ u16 msg_id;
+ int async_mgmt_to_pf;
+};
+
+/* show only the per-driver capability structures such as nic_service_cap
+ * and toe_service_cap, but do not show service_cap
+ */
+enum hifc_service_type {
+ SERVICE_T_NIC = 0,
+
+ SERVICE_T_FC = 5,
+
+ SERVICE_T_MAX,
+
+ /* Only used for interrupt resource management,
+ * to mark the requesting module
+ */
+ SERVICE_T_INTF = (1 << 15),
+ SERVICE_T_CQM = (1 << 16),
+};
+
+/* NIC service capability
+ * 1. The chip supports up to 1K NIC RQs
+ * 2. PF/VF RQ specifications:
+ *    RSS disabled:
+ *      VMDq disabled: each PF/VF has at most 8 RQs
+ *      VMDq enabled:  each PF/VF has at most 1K RQs
+ *    RSS enabled:
+ *      VMDq disabled: each PF has at most 64 RQs, each VF at most 32 RQs
+ *      VMDq enabled:  each PF/VF has at most 1K RQs
+ *
+ * 3. The chip supports up to 1K NIC SQs
+ * 4. PF/VF SQ specifications:
+ *    RSS disabled:
+ *      VMDq disabled: each PF/VF has at most 8 SQs
+ *      VMDq enabled:  each PF/VF has at most 1K SQs
+ *    RSS enabled:
+ *      VMDq disabled: each PF has at most 64 SQs, each VF at most 32 SQs
+ *      VMDq enabled:  each PF/VF has at most 1K SQs
+ */
+struct nic_service_cap {
+ /* PF resources*/
+ u16 max_sqs;
+ u16 max_rqs;
+
+ /* VF resources, which the VF obtains from its corresponding PF
+ * through the mailbox mechanism
+ */
+ u16 vf_max_sqs;
+ u16 vf_max_rqs;
+ bool lro_en; /* LRO feature enable bit*/
+ u8 lro_sz; /* LRO context space: n*16B */
+ u8 tso_sz; /* TSO context space: n*16B */
+
+ u16 max_queue_allowed;
+};
+
+/* PF FC service resource structure definition */
+struct dev_fc_svc_cap {
+ /* PF Parent QPC */
+ u32 max_parent_qpc_num; /* max number is 2048*/
+
+ /* PF Child QPC */
+ u32 max_child_qpc_num; /* max number is 2048*/
+
+ /* PF SCQ */
+ u32 scq_num; /* 16 */
+
+ /* PF supports SRQ*/
+ u32 srq_num; /* Number of SRQ is 2*/
+
+ u8 vp_id_start;
+ u8 vp_id_end;
+};
+
+/* FC services*/
+struct fc_service_cap {
+ struct dev_fc_svc_cap dev_fc_cap;
+
+ /* Parent QPC */
+ u32 parent_qpc_size; /* 256B */
+
+ /* Child QPC */
+ u32 child_qpc_size; /* 256B */
+
+ /* SQ */
+ u32 sqe_size; /* 128B(in linked list mode)*/
+
+ /* SCQ */
+ u32 scqc_size; /* Size of the Context 32B*/
+ u32 scqe_size; /* 64B */
+
+ /* SRQ */
+ u32 srqc_size; /* Size of SRQ Context (64B)*/
+ u32 srqe_size; /* 32B */
+};
+
+bool hifc_support_fc(void *hwdev, struct fc_service_cap *cap);
+
+/* Service interface for obtaining service_cap public fields*/
+/* Obtain service_cap.host_oq_id_mask_val*/
+u8 hifc_host_oq_id_mask(void *hwdev);
+
+/* Obtain service_cap.dev_cap.max_sqs*/
+u16 hifc_func_max_qnum(void *hwdev);
+
+/* The following information is obtained from the BAR space,
+ * which is recorded by the SDK layer.
+ * Query interfaces are provided here for the services.
+ */
+/* func_attr.glb_func_idx, global function index */
+u16 hifc_global_func_id(void *hwdev);
+/* func_attr.intr_num, MSI-X table entry in function*/
+enum intr_type {
+ INTR_TYPE_MSIX,
+ INTR_TYPE_MSI,
+ INTR_TYPE_INT,
+ INTR_TYPE_NONE,
+};
+
+u8 hifc_pcie_itf_id(void *hwdev); /* func_attr.itf_idx, pcie interface index */
+
+/* func_attr.func_type, 0-PF 1-VF 2-PPF */
+enum func_type hifc_func_type(void *hwdev);
+
+u8 hifc_ppf_idx(void *hwdev);
+
+enum hifc_msix_state {
+ HIFC_MSIX_ENABLE,
+ HIFC_MSIX_DISABLE,
+};
+
+void hifc_set_msix_state(void *hwdev, u16 msix_idx,
+ enum hifc_msix_state flag);
+
+/* Defines the IRQ information structure*/
+struct irq_info {
+ u16 msix_entry_idx; /* IRQ corresponding index number */
+ u32 irq_id; /* the IRQ number from OS */
+};
+
+int hifc_alloc_irqs(void *hwdev, enum hifc_service_type type, u16 req_num,
+ struct irq_info *irq_info_array, u16 *resp_num);
+void hifc_free_irq(void *hwdev, enum hifc_service_type type, u32 irq_id);
+
+int hifc_sync_time(void *hwdev, u64 time);
+void hifc_disable_mgmt_msg_report(void *hwdev);
+void hifc_set_func_deinit_flag(void *hwdev);
+void hifc_flush_mgmt_workq(void *hwdev);
+int hifc_global_func_id_get(void *hwdev, u16 *func_id);
+u16 hifc_global_func_id_hw(void *hwdev);
+int hifc_pf_to_mgmt_no_ack(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size);
+void hifc_mgmt_msg_aeqe_handler(void *handle, u8 *header, u8 size);
+int hifc_pf_to_mgmt_init(struct hifc_hwdev *hwdev);
+void hifc_pf_to_mgmt_free(struct hifc_hwdev *hwdev);
+int hifc_pf_to_mgmt_sync(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout);
+int hifc_pf_to_mgmt_async(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size);
+int hifc_pf_clp_to_mgmt(void *hwdev, enum hifc_mod_type mod, u8 cmd,
+ const void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+int hifc_clp_pf_to_mgmt_init(struct hifc_hwdev *hwdev);
+void hifc_clp_pf_to_mgmt_free(struct hifc_hwdev *hwdev);
+
+#endif
diff --git a/drivers/scsi/huawei/hifc/hifc_sml.c b/drivers/scsi/huawei/hifc/hifc_sml.c
new file mode 100644
index 000000000000..2d04ff6ed5ff
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_sml.c
@@ -0,0 +1,361 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwdev.h"
+#include "hifc_sml.h"
+
+#ifndef HTONL
+#define HTONL(x) \
+ ((((x) & 0x000000ff) << 24) \
+ | (((x) & 0x0000ff00) << 8) \
+ | (((x) & 0x00ff0000) >> 8) \
+ | (((x) & 0xff000000) >> 24))
+#endif
+
+static void sml_ctr_htonl_n(u32 *node, u32 len)
+{
+ u32 i;
+
+ for (i = 0; i < len; i++) {
+ *node = HTONL(*node);
+ node++;
+ }
+}
+
+static void hifc_sml_ctr_read_build_req(struct chipif_sml_ctr_rd_req_s *msg,
+ u8 instance_id, u8 op_id,
+ u8 ack, u32 ctr_id, u32 init_val)
+{
+ msg->head.value = 0;
+ msg->head.bs.instance = instance_id;
+ msg->head.bs.op_id = op_id;
+ msg->head.bs.ack = ack;
+ msg->head.value = HTONL(msg->head.value);
+
+ msg->ctr_id = ctr_id;
+ msg->ctr_id = HTONL(msg->ctr_id);
+
+ msg->initial = init_val;
+}
+
+static void hifc_sml_ctr_write_build_req(struct chipif_sml_ctr_wr_req_s *msg,
+ u8 instance_id, u8 op_id,
+ u8 ack, u32 ctr_id,
+ u64 val1, u64 val2)
+{
+ msg->head.value = 0;
+ msg->head.bs.instance = instance_id;
+ msg->head.bs.op_id = op_id;
+ msg->head.bs.ack = ack;
+ msg->head.value = HTONL(msg->head.value);
+
+ msg->ctr_id = ctr_id;
+ msg->ctr_id = HTONL(msg->ctr_id);
+
+ msg->value1_h = val1 >> 32;
+ msg->value1_l = val1 & 0xFFFFFFFF;
+
+ msg->value2_h = val2 >> 32;
+ msg->value2_l = val2 & 0xFFFFFFFF;
+}
+
+/**
+ * hifc_sm_ctr_rd32 - small single 32 counter read
+ * @hwdev: the pointer to hw device
+ * @node: the node id
+ * @instance: instance value
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ */
+int hifc_sm_ctr_rd32(void *hwdev, u8 node, u8 instance, u32 ctr_id, u32 *value)
+{
+ struct chipif_sml_ctr_rd_req_s req;
+ union ctr_rd_rsp_u rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ hifc_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hifc_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp, (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Sm 32bit counter read fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, 4);
+ *value = rsp.bs_ss32_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hifc_sm_ctr_rd32_clear - small single 32 counter read and clear to zero
+ * @hwdev: the pointer to hw device
+ * @node: the node id
+ * @instance: instance value
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ * according to ACN error code (ERR_OK, ERR_PARAM, ERR_FAILED...etc)
+ */
+int hifc_sm_ctr_rd32_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u32 *value)
+{
+ struct chipif_sml_ctr_rd_req_s req;
+ union ctr_rd_rsp_u rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ hifc_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hifc_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp, (unsigned short)sizeof(rsp));
+
+ if (ret) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Sm 32bit counter clear fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, 4);
+ *value = rsp.bs_ss32_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hifc_sm_ctr_wr32 - small single 32 counter write
+ * @hwdev: the pointer to hw device
+ * @node: the node id
+ * @instance: instance value
+ * @ctr_id: counter id
+ * @value: write counter value
+ * Return: 0 - success, negative - failure
+ */
+int hifc_sm_ctr_wr32(void *hwdev, u8 node, u8 instance, u32 ctr_id, u32 value)
+{
+ struct chipif_sml_ctr_wr_req_s req;
+ struct chipif_sml_ctr_wr_rsp_s rsp;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ hifc_sml_ctr_write_build_req(&req, instance, CHIPIF_SM_CTR_OP_WRITE,
+ CHIPIF_NOACK, ctr_id, (u64)value, 0ULL);
+
+ return hifc_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+}
+
+/**
+ * hifc_sm_ctr_rd64 - big counter 64 read
+ * @hwdev: the pointer to hw device
+ * @node: the node id
+ * @instance: instance value
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ */
+int hifc_sm_ctr_rd64(void *hwdev, u8 node, u8 instance, u32 ctr_id, u64 *value)
+{
+ struct chipif_sml_ctr_rd_req_s req;
+ union ctr_rd_rsp_u rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ hifc_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hifc_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Sm 64bit counter read fail err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, 4);
+ *value = ((u64)rsp.bs_bs64_rsp.value1 << 32) | rsp.bs_bs64_rsp.value2;
+
+ return 0;
+}
+
+/**
+ * hifc_sm_ctr_wr64 - big single 64 counter write
+ * @hwdev: the pointer to hw device
+ * @node: the node id
+ * @instance: instance value
+ * @ctr_id: counter id
+ * @value: write counter value
+ * Return: 0 - success, negative - failure
+ */
+int hifc_sm_ctr_wr64(void *hwdev, u8 node, u8 instance, u32 ctr_id, u64 value)
+{
+ struct chipif_sml_ctr_wr_req_s req;
+ struct chipif_sml_ctr_wr_rsp_s rsp;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ hifc_sml_ctr_write_build_req(&req, instance, CHIPIF_SM_CTR_OP_WRITE,
+ CHIPIF_NOACK, ctr_id, value, 0ULL);
+
+ return hifc_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+}
+
+/**
+ * hifc_sm_ctr_rd64_pair - big pair 128 counter read
+ * @hwdev: the pointer to hw device
+ * @node: the node id
+ * @instance: instance value
+ * @ctr_id: counter id
+ * @value1: read counter value ptr
+ * @value2: read counter value ptr
+ * Return: 0 - success, negative - failure
+ */
+int hifc_sm_ctr_rd64_pair(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2)
+{
+ struct chipif_sml_ctr_rd_req_s req;
+ union ctr_rd_rsp_u rsp;
+ int ret;
+
+ if (!hwdev || (0 != (ctr_id & 0x1)) || !value1 || !value2) {
+ pr_err("Hwdev(0x%p) or value1(0x%p) or value2(0x%p) is NULL or ctr_id(%d) is odd number\n",
+ hwdev, value1, value2, ctr_id);
+ return -EFAULT;
+ }
+
+ hifc_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hifc_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Sm 64 bit rd pair ret(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, 4);
+ *value1 = ((u64)rsp.bs_bp64_rsp.val1_h << 32) | rsp.bs_bp64_rsp.val1_l;
+ *value2 = ((u64)rsp.bs_bp64_rsp.val2_h << 32) | rsp.bs_bp64_rsp.val2_l;
+
+ return 0;
+}
+
+/**
+ * hifc_sm_ctr_wr64_pair - big pair 128 counter write
+ * @hwdev: the pointer to hw device
+ * @node: the node id
+ * @ctr_id: counter id
+ * @instance: instance value
+ * @value1: write counter value
+ * @value2: write counter value
+ * Return: 0 - success, negative - failure
+ */
+int hifc_sm_ctr_wr64_pair(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 value1, u64 value2)
+{
+ struct chipif_sml_ctr_wr_req_s req;
+ struct chipif_sml_ctr_wr_rsp_s rsp;
+
+ /* pair pattern ctr_id must be even number */
+ if (!hwdev || (0 != (ctr_id & 0x1))) {
+ pr_err("Handle is NULL or ctr_id(%d) is odd number for write 64 bit pair\n",
+ ctr_id);
+ return -EFAULT;
+ }
+
+ hifc_sml_ctr_write_build_req(&req, instance, CHIPIF_SM_CTR_OP_WRITE,
+ CHIPIF_NOACK, ctr_id, value1, value2);
+ return hifc_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+}
+
+int hifc_api_csr_rd32(void *hwdev, u8 dest, u32 addr, u32 *val)
+{
+ struct hifc_csr_request_api_data api_data = {0};
+ u32 csr_val = 0;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev || !val)
+ return -EFAULT;
+
+ memset(&api_data, 0, sizeof(struct hifc_csr_request_api_data));
+ api_data.dw0 = 0;
+ api_data.dw1.bits.operation_id = HIFC_CSR_OPERATION_READ_CSR;
+ api_data.dw1.bits.need_response = HIFC_CSR_NEED_RESP_DATA;
+ api_data.dw1.bits.data_size = HIFC_CSR_DATA_SZ_32;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+
+ ret = hifc_api_cmd_read_ack(hwdev, dest, (u8 *)(&api_data),
+ in_size, &csr_val, 4);
+ if (ret) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Read 32 bit csr fail, dest %d addr 0x%x, ret: 0x%x\n",
+ dest, addr, ret);
+ return ret;
+ }
+
+ *val = csr_val;
+
+ return 0;
+}
+
+int hifc_api_csr_wr32(void *hwdev, u8 dest, u32 addr, u32 val)
+{
+ struct hifc_csr_request_api_data api_data;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ memset(&api_data, 0, sizeof(struct hifc_csr_request_api_data));
+ api_data.dw1.bits.operation_id = HIFC_CSR_OPERATION_WRITE_CSR;
+ api_data.dw1.bits.need_response = HIFC_CSR_NO_RESP_DATA;
+ api_data.dw1.bits.data_size = HIFC_CSR_DATA_SZ_32;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+ api_data.csr_write_data_h = 0xffffffff;
+ api_data.csr_write_data_l = val;
+
+ ret = hifc_api_cmd_write_nack(hwdev, dest, (u8 *)(&api_data), in_size);
+ if (ret) {
+ sdk_err(((struct hifc_hwdev *)hwdev)->dev_hdl,
+ "Write 32 bit csr fail! dest %d addr 0x%x val 0x%x\n",
+ dest, addr, val);
+ return ret;
+ }
+
+ return 0;
+}
+
diff --git a/drivers/scsi/huawei/hifc/hifc_sml.h b/drivers/scsi/huawei/hifc/hifc_sml.h
new file mode 100644
index 000000000000..9fe2088f48a1
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_sml.h
@@ -0,0 +1,183 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef __CHIPIF_SML_COUNTER_H__
+#define __CHIPIF_SML_COUNTER_H__
+
+#define CHIPIF_FUNC_PF 0
+#define CHIPIF_FUNC_VF 1
+#define CHIPIF_FUNC_PPF 2
+
+#define CHIPIF_ACK 1
+#define CHIPIF_NOACK 0
+
+#define CHIPIF_SM_CTR_OP_READ 0x2
+#define CHIPIF_SM_CTR_OP_READ_CLEAR 0x6
+#define CHIPIF_SM_CTR_OP_WRITE 0x3
+
+#define SMALL_CNT_READ_RSP_SIZE 16
+
+/* request head */
+union chipif_sml_ctr_req_head_u {
+ struct {
+ u32 pad:15;
+ u32 ack:1;
+ u32 op_id:5;
+ u32 instance:6;
+ u32 src:5;
+ } bs;
+
+ u32 value;
+};
+
+/* counter read request struct */
+struct chipif_sml_ctr_rd_req_s {
+ u32 extra;
+ union chipif_sml_ctr_req_head_u head;
+ u32 ctr_id;
+ u32 initial;
+ u32 pad;
+};
+
+/* counter read response union */
+union ctr_rd_rsp_u {
+ struct {
+ u32 value1:16;
+ u32 pad0:16;
+ u32 pad1[3];
+ } bs_ss16_rsp;
+
+ struct {
+ u32 value1;
+ u32 pad[3];
+ } bs_ss32_rsp;
+
+ struct {
+ u32 value1:20;
+ u32 pad0:12;
+ u32 value2:12;
+ u32 pad1:20;
+ u32 pad2[2];
+ } bs_sp_rsp;
+
+ struct {
+ u32 value1;
+ u32 value2;
+ u32 pad[2];
+ } bs_bs64_rsp;
+
+ struct {
+ u32 val1_h;
+ u32 val1_l;
+ u32 val2_h;
+ u32 val2_l;
+ } bs_bp64_rsp;
+
+};
+
+/* response head */
+union sml_ctr_rsp_head_u {
+ struct {
+ u32 pad:30; /* reserve */
+ u32 code:2; /* error code */
+ } bs;
+
+ u32 value;
+};
+
+/* counter write request struct */
+struct chipif_sml_ctr_wr_req_s {
+ u32 extra;
+ union chipif_sml_ctr_req_head_u head;
+ u32 ctr_id;
+ u32 rsv1;
+ u32 rsv2;
+ u32 value1_h;
+ u32 value1_l;
+ u32 value2_h;
+ u32 value2_l;
+};
+
+/* counter write response struct */
+struct chipif_sml_ctr_wr_rsp_s {
+ union sml_ctr_rsp_head_u head;
+ u32 pad[3];
+};
+
+enum HIFC_CSR_API_DATA_OPERATION_ID {
+ HIFC_CSR_OPERATION_WRITE_CSR = 0x1E,
+ HIFC_CSR_OPERATION_READ_CSR = 0x1F
+};
+
+enum HIFC_CSR_API_DATA_NEED_RESPONSE_DATA {
+ HIFC_CSR_NO_RESP_DATA = 0,
+ HIFC_CSR_NEED_RESP_DATA = 1
+};
+
+enum HIFC_CSR_API_DATA_DATA_SIZE {
+ HIFC_CSR_DATA_SZ_32 = 0,
+ HIFC_CSR_DATA_SZ_64 = 1
+};
+
+struct hifc_csr_request_api_data {
+ u32 dw0;
+
+ union {
+ struct {
+ u32 reserved1:13;
+ /* this field indicates the write/read data size:
+ * 2'b00: 32 bits
+ * 2'b01: 64 bits
+ * 2'b10~2'b11:reserved
+ */
+ u32 data_size:2;
+ /* this field indicates whether the requestor expects to
+ * receive response data.
+ * 1'b0: does not expect response data.
+ * 1'b1: expects response data.
+ */
+ u32 need_response:1;
+ /* this field indicates the operation that the requestor
+ * expected.
+ * 5'b1_1110: write value to csr space.
+ * 5'b1_1111: read register from csr space.
+ */
+ u32 operation_id:5;
+ u32 reserved2:6;
+ /* this field specifies the Src node ID for this API
+ * request message.
+ */
+ u32 src_node_id:5;
+ } bits;
+
+ u32 val32;
+ } dw1;
+
+ union {
+ struct {
+ /* it specifies the CSR address. */
+ u32 csr_addr:26;
+ u32 reserved3:6;
+ } bits;
+
+ u32 val32;
+ } dw2;
+
+ /* if data_size=2'b01, it is high 32 bits of write data. else, it is
+ * 32'hFFFF_FFFF.
+ */
+ u32 csr_write_data_h;
+ /* the low 32 bits of write data. */
+ u32 csr_write_data_l;
+};
+
+int hifc_sm_ctr_rd32(void *hwdev, u8 node, u8 instance, u32 ctr_id, u32 *value);
+int hifc_sm_ctr_rd64(void *hwdev, u8 node, u8 instance, u32 ctr_id, u64 *value);
+int hifc_sm_ctr_rd64_pair(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2);
+
+#endif
+
diff --git a/drivers/scsi/huawei/hifc/hifc_wq.c b/drivers/scsi/huawei/hifc/hifc_wq.c
new file mode 100644
index 000000000000..4e926d140b2c
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_wq.c
@@ -0,0 +1,624 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/device.h>
+#include <linux/vmalloc.h>
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "hifc_knl_adp.h"
+#include "hifc_hw.h"
+#include "hifc_hwif.h"
+#include "hifc_wq.h"
+
+#define WQS_MAX_NUM_BLOCKS 128
+#define WQS_FREE_BLOCKS_SIZE(wqs) (WQS_MAX_NUM_BLOCKS * \
+ sizeof((wqs)->free_blocks[0]))
+
+static void wqs_return_block(struct hifc_wqs *wqs, u32 page_idx, u32 block_idx)
+{
+ u32 pos;
+
+ spin_lock(&wqs->alloc_blocks_lock);
+
+ wqs->num_free_blks++;
+
+ pos = wqs->return_blk_pos++;
+ pos &= WQS_MAX_NUM_BLOCKS - 1;
+
+ wqs->free_blocks[pos].page_idx = page_idx;
+ wqs->free_blocks[pos].block_idx = block_idx;
+
+ spin_unlock(&wqs->alloc_blocks_lock);
+}
+
+static int wqs_next_block(struct hifc_wqs *wqs, u32 *page_idx,
+ u32 *block_idx)
+{
+ u32 pos;
+
+ spin_lock(&wqs->alloc_blocks_lock);
+
+ if (wqs->num_free_blks <= 0) {
+ spin_unlock(&wqs->alloc_blocks_lock);
+ return -ENOMEM;
+ }
+ wqs->num_free_blks--;
+
+ pos = wqs->alloc_blk_pos++;
+ pos &= WQS_MAX_NUM_BLOCKS - 1;
+
+ *page_idx = wqs->free_blocks[pos].page_idx;
+ *block_idx = wqs->free_blocks[pos].block_idx;
+
+ wqs->free_blocks[pos].page_idx = 0xFFFFFFFF;
+ wqs->free_blocks[pos].block_idx = 0xFFFFFFFF;
+
+ spin_unlock(&wqs->alloc_blocks_lock);
+
+ return 0;
+}
+
+static int queue_alloc_page(void *handle, u64 **vaddr, u64 *paddr,
+ u64 **shadow_vaddr, u64 page_sz)
+{
+ dma_addr_t dma_addr = 0;
+
+ *vaddr = dma_zalloc_coherent(handle, page_sz, &dma_addr,
+ GFP_KERNEL);
+ if (!*vaddr) {
+ sdk_err(handle, "Failed to allocate dma to wqs page\n");
+ return -ENOMEM;
+ }
+
+ if (!ADDR_4K_ALIGNED(dma_addr)) {
+ sdk_err(handle, "Cla is not 4k aligned!\n");
+ goto shadow_vaddr_err;
+ }
+
+ *paddr = (u64)dma_addr;
+
+ /* use vzalloc for big mem, shadow_vaddr only used at initialization */
+ *shadow_vaddr = vzalloc(page_sz);
+ if (!*shadow_vaddr) {
+ sdk_err(handle, "Failed to allocate shadow page vaddr\n");
+ goto shadow_vaddr_err;
+ }
+
+ return 0;
+
+shadow_vaddr_err:
+ dma_free_coherent(handle, page_sz, *vaddr, dma_addr);
+ return -ENOMEM;
+}
+
+static int wqs_allocate_page(struct hifc_wqs *wqs, u32 page_idx)
+{
+ return queue_alloc_page(wqs->dev_hdl, &wqs->page_vaddr[page_idx],
+ &wqs->page_paddr[page_idx],
+ &wqs->shadow_page_vaddr[page_idx],
+ WQS_PAGE_SIZE);
+}
+
+static void wqs_free_page(struct hifc_wqs *wqs, u32 page_idx)
+{
+ dma_free_coherent(wqs->dev_hdl, WQS_PAGE_SIZE,
+ wqs->page_vaddr[page_idx],
+ (dma_addr_t)wqs->page_paddr[page_idx]);
+ vfree(wqs->shadow_page_vaddr[page_idx]);
+}
+
+static int cmdq_allocate_page(struct hifc_cmdq_pages *cmdq_pages)
+{
+ return queue_alloc_page(cmdq_pages->dev_hdl,
+ &cmdq_pages->cmdq_page_vaddr,
+ &cmdq_pages->cmdq_page_paddr,
+ &cmdq_pages->cmdq_shadow_page_vaddr,
+ CMDQ_PAGE_SIZE);
+}
+
+static void cmdq_free_page(struct hifc_cmdq_pages *cmdq_pages)
+{
+ dma_free_coherent(cmdq_pages->dev_hdl, CMDQ_PAGE_SIZE,
+ cmdq_pages->cmdq_page_vaddr,
+ (dma_addr_t)cmdq_pages->cmdq_page_paddr);
+ vfree(cmdq_pages->cmdq_shadow_page_vaddr);
+}
+
+static int alloc_wqes_shadow(struct hifc_wq *wq)
+{
+ u64 size;
+
+ /* if wq->max_wqe_size == 0, we don't need to alloc shadow */
+ if (wq->max_wqe_size <= wq->wqebb_size)
+ return 0;
+
+ size = (u64)wq->num_q_pages * wq->max_wqe_size;
+ wq->shadow_wqe = kzalloc(size, GFP_KERNEL);
+ if (!wq->shadow_wqe) {
+ pr_err("Failed to allocate shadow wqe\n");
+ return -ENOMEM;
+ }
+
+ size = wq->num_q_pages * sizeof(wq->prod_idx);
+ wq->shadow_idx = kzalloc(size, GFP_KERNEL);
+ if (!wq->shadow_idx) {
+ pr_err("Failed to allocate shadow index\n");
+ goto shadow_idx_err;
+ }
+
+ return 0;
+
+shadow_idx_err:
+ kfree(wq->shadow_wqe);
+ return -ENOMEM;
+}
+
+static void free_wqes_shadow(struct hifc_wq *wq)
+{
+ if (wq->max_wqe_size <= wq->wqebb_size)
+ return;
+
+ kfree(wq->shadow_idx);
+ kfree(wq->shadow_wqe);
+}
+
+static void free_wq_pages(void *handle, struct hifc_wq *wq,
+ u32 num_q_pages)
+{
+ u32 i;
+
+ for (i = 0; i < num_q_pages; i++)
+ hifc_dma_free_coherent_align(handle, &wq->mem_align[i]);
+
+ free_wqes_shadow(wq);
+
+ wq->block_vaddr = NULL;
+ wq->shadow_block_vaddr = NULL;
+
+ kfree(wq->mem_align);
+}
+
+static int alloc_wq_pages(void *dev_hdl, struct hifc_wq *wq)
+{
+ struct hifc_dma_addr_align *mem_align;
+ u64 *vaddr, *paddr;
+ u32 i, num_q_pages;
+ int err;
+
+ vaddr = wq->shadow_block_vaddr;
+ paddr = wq->block_vaddr;
+
+ num_q_pages = ALIGN(WQ_SIZE(wq), wq->wq_page_size) / wq->wq_page_size;
+ if (num_q_pages > WQ_MAX_PAGES) {
+ sdk_err(dev_hdl, "Number(%d) wq pages exceeds the limit\n",
+ num_q_pages);
+ return -EINVAL;
+ }
+
+ if (num_q_pages & (num_q_pages - 1)) {
+ sdk_err(dev_hdl, "Wq num(%d) q pages must be power of 2\n",
+ num_q_pages);
+ return -EINVAL;
+ }
+
+ wq->num_q_pages = num_q_pages;
+
+ err = alloc_wqes_shadow(wq);
+ if (err) {
+ sdk_err(dev_hdl, "Failed to allocate wqe shadow\n");
+ return err;
+ }
+
+ wq->mem_align = kcalloc(wq->num_q_pages, sizeof(*wq->mem_align),
+ GFP_KERNEL);
+ if (!wq->mem_align) {
+ sdk_err(dev_hdl, "Failed to allocate mem_align\n");
+ free_wqes_shadow(wq);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < num_q_pages; i++) {
+ mem_align = &wq->mem_align[i];
+ err = hifc_dma_zalloc_coherent_align(dev_hdl, wq->wq_page_size,
+ wq->wq_page_size,
+ GFP_KERNEL, mem_align);
+ if (err) {
+ sdk_err(dev_hdl, "Failed to allocate wq page\n");
+ goto alloc_wq_pages_err;
+ }
+
+ *paddr = cpu_to_be64(mem_align->align_paddr);
+ *vaddr = (u64)mem_align->align_vaddr;
+
+ paddr++;
+ vaddr++;
+ }
+
+ return 0;
+
+alloc_wq_pages_err:
+ free_wq_pages(dev_hdl, wq, i);
+
+ return -ENOMEM;
+}
+
+int hifc_wq_allocate(struct hifc_wqs *wqs, struct hifc_wq *wq,
+ u32 wqebb_size, u32 wq_page_size, u16 q_depth,
+ u32 max_wqe_size)
+{
+ u32 num_wqebbs_per_page;
+ int err;
+
+ if (wqebb_size == 0) {
+ sdk_err(wqs->dev_hdl, "Wqebb_size must be >0\n");
+ return -EINVAL;
+ }
+
+ if (q_depth & (q_depth - 1)) {
+ sdk_err(wqs->dev_hdl, "Wq q_depth(%d) isn't power of 2\n",
+ q_depth);
+ return -EINVAL;
+ }
+
+ if (wq_page_size & (wq_page_size - 1)) {
+ sdk_err(wqs->dev_hdl, "Wq page_size(%d) isn't power of 2\n",
+ wq_page_size);
+ return -EINVAL;
+ }
+
+ num_wqebbs_per_page = ALIGN(wq_page_size, wqebb_size) / wqebb_size;
+
+ if (num_wqebbs_per_page & (num_wqebbs_per_page - 1)) {
+ sdk_err(wqs->dev_hdl, "Num(%d) wqebbs per page isn't power of 2\n",
+ num_wqebbs_per_page);
+ return -EINVAL;
+ }
+
+ err = wqs_next_block(wqs, &wq->page_idx, &wq->block_idx);
+ if (err) {
+ sdk_err(wqs->dev_hdl, "Failed to get free wqs next block\n");
+ return err;
+ }
+
+ wq->wqebb_size = wqebb_size;
+ wq->wq_page_size = wq_page_size;
+ wq->q_depth = q_depth;
+ wq->max_wqe_size = max_wqe_size;
+ wq->num_wqebbs_per_page = num_wqebbs_per_page;
+
+ wq->wqebbs_per_page_shift = (u32)ilog2(num_wqebbs_per_page);
+
+ wq->block_vaddr = WQ_BASE_VADDR(wqs, wq);
+ wq->shadow_block_vaddr = WQ_BASE_ADDR(wqs, wq);
+ wq->block_paddr = WQ_BASE_PADDR(wqs, wq);
+
+ err = alloc_wq_pages(wqs->dev_hdl, wq);
+ if (err) {
+ sdk_err(wqs->dev_hdl, "Failed to allocate wq pages\n");
+ goto alloc_wq_pages_err;
+ }
+
+ atomic_set(&wq->delta, q_depth);
+ wq->cons_idx = 0;
+ wq->prod_idx = 0;
+ wq->mask = q_depth - 1;
+
+ return 0;
+
+alloc_wq_pages_err:
+ wqs_return_block(wqs, wq->page_idx, wq->block_idx);
+ return err;
+}
+
+void hifc_wq_free(struct hifc_wqs *wqs, struct hifc_wq *wq)
+{
+ free_wq_pages(wqs->dev_hdl, wq, wq->num_q_pages);
+
+ wqs_return_block(wqs, wq->page_idx, wq->block_idx);
+}
+
+static void init_wqs_blocks_arr(struct hifc_wqs *wqs)
+{
+ u32 page_idx, blk_idx, pos = 0;
+
+ for (page_idx = 0; page_idx < wqs->num_pages; page_idx++) {
+ for (blk_idx = 0; blk_idx < WQS_BLOCKS_PER_PAGE; blk_idx++) {
+ wqs->free_blocks[pos].page_idx = page_idx;
+ wqs->free_blocks[pos].block_idx = blk_idx;
+ pos++;
+ }
+ }
+
+ wqs->alloc_blk_pos = 0;
+ wqs->return_blk_pos = 0;
+ wqs->num_free_blks = WQS_MAX_NUM_BLOCKS;
+ spin_lock_init(&wqs->alloc_blocks_lock);
+}
+
+void hifc_wq_wqe_pg_clear(struct hifc_wq *wq)
+{
+ u64 *block_vaddr;
+ u32 pg_idx;
+
+ block_vaddr = wq->shadow_block_vaddr;
+
+ atomic_set(&wq->delta, wq->q_depth);
+ wq->cons_idx = 0;
+ wq->prod_idx = 0;
+
+ for (pg_idx = 0; pg_idx < wq->num_q_pages; pg_idx++)
+ memset((void *)(*(block_vaddr + pg_idx)), 0, wq->wq_page_size);
+}
+
+int hifc_cmdq_alloc(struct hifc_cmdq_pages *cmdq_pages,
+ struct hifc_wq *wq, void *dev_hdl,
+ int cmdq_blocks, u32 wq_page_size, u32 wqebb_size,
+ u16 q_depth, u32 max_wqe_size)
+{
+ int i, j, err = -ENOMEM;
+
+ if (q_depth & (q_depth - 1)) {
+ sdk_err(dev_hdl, "Cmdq q_depth(%d) isn't power of 2\n",
+ q_depth);
+ return -EINVAL;
+ }
+
+ cmdq_pages->dev_hdl = dev_hdl;
+
+ err = cmdq_allocate_page(cmdq_pages);
+ if (err) {
+ sdk_err(dev_hdl, "Failed to allocate CMDQ page\n");
+ return err;
+ }
+
+ for (i = 0; i < cmdq_blocks; i++) {
+ wq[i].page_idx = 0;
+ wq[i].block_idx = (u32)i;
+ wq[i].wqebb_size = wqebb_size;
+ wq[i].wq_page_size = wq_page_size;
+ wq[i].q_depth = q_depth;
+ wq[i].max_wqe_size = max_wqe_size;
+ wq[i].num_wqebbs_per_page =
+ ALIGN(wq_page_size, wqebb_size) / wqebb_size;
+
+ wq[i].wqebbs_per_page_shift =
+ (u32)ilog2(wq[i].num_wqebbs_per_page);
+
+ wq[i].block_vaddr = CMDQ_BASE_VADDR(cmdq_pages, &wq[i]);
+ wq[i].shadow_block_vaddr = CMDQ_BASE_ADDR(cmdq_pages, &wq[i]);
+ wq[i].block_paddr = CMDQ_BASE_PADDR(cmdq_pages, &wq[i]);
+
+ err = alloc_wq_pages(cmdq_pages->dev_hdl, &wq[i]);
+ if (err) {
+ sdk_err(dev_hdl, "Failed to alloc CMDQ blocks\n");
+ goto cmdq_block_err;
+ }
+
+ atomic_set(&wq[i].delta, q_depth);
+ wq[i].cons_idx = 0;
+ wq[i].prod_idx = 0;
+ wq[i].mask = q_depth - 1;
+ }
+
+ return 0;
+
+cmdq_block_err:
+ for (j = 0; j < i; j++)
+ free_wq_pages(cmdq_pages->dev_hdl, &wq[j], wq[j].num_q_pages);
+
+ cmdq_free_page(cmdq_pages);
+ return err;
+}
+
+void hifc_cmdq_free(struct hifc_cmdq_pages *cmdq_pages,
+ struct hifc_wq *wq, int cmdq_blocks)
+{
+ int i;
+
+ for (i = 0; i < cmdq_blocks; i++)
+ free_wq_pages(cmdq_pages->dev_hdl, &wq[i], wq[i].num_q_pages);
+
+ cmdq_free_page(cmdq_pages);
+}
+
+static int alloc_page_addr(struct hifc_wqs *wqs)
+{
+ u64 size = wqs->num_pages * sizeof(*wqs->page_paddr);
+
+ wqs->page_paddr = kzalloc(size, GFP_KERNEL);
+ if (!wqs->page_paddr)
+ return -ENOMEM;
+
+ size = wqs->num_pages * sizeof(*wqs->page_vaddr);
+ wqs->page_vaddr = kzalloc(size, GFP_KERNEL);
+ if (!wqs->page_vaddr)
+ goto page_vaddr_err;
+
+ size = wqs->num_pages * sizeof(*wqs->shadow_page_vaddr);
+ wqs->shadow_page_vaddr = kzalloc(size, GFP_KERNEL);
+ if (!wqs->shadow_page_vaddr)
+ goto page_shadow_vaddr_err;
+
+ return 0;
+
+page_shadow_vaddr_err:
+ kfree(wqs->page_vaddr);
+
+page_vaddr_err:
+ kfree(wqs->page_paddr);
+ return -ENOMEM;
+}
+
+static void free_page_addr(struct hifc_wqs *wqs)
+{
+ kfree(wqs->shadow_page_vaddr);
+ kfree(wqs->page_vaddr);
+ kfree(wqs->page_paddr);
+}
+
+int hifc_wqs_alloc(struct hifc_wqs *wqs, int num_wqs, void *dev_hdl)
+{
+ u32 i, page_idx;
+ int err;
+
+ wqs->dev_hdl = dev_hdl;
+ wqs->num_pages = WQ_NUM_PAGES(num_wqs);
+
+ if (alloc_page_addr(wqs)) {
+ sdk_err(dev_hdl, "Failed to allocate mem for page addresses\n");
+ return -ENOMEM;
+ }
+
+ for (page_idx = 0; page_idx < wqs->num_pages; page_idx++) {
+ err = wqs_allocate_page(wqs, page_idx);
+ if (err) {
+ sdk_err(dev_hdl, "Failed wq page allocation\n");
+ goto wq_allocate_page_err;
+ }
+ }
+
+ wqs->free_blocks = kzalloc(WQS_FREE_BLOCKS_SIZE(wqs), GFP_KERNEL);
+ if (!wqs->free_blocks) {
+ err = -ENOMEM;
+ goto alloc_blocks_err;
+ }
+
+ init_wqs_blocks_arr(wqs);
+ return 0;
+
+alloc_blocks_err:
+wq_allocate_page_err:
+ for (i = 0; i < page_idx; i++)
+ wqs_free_page(wqs, i);
+
+ free_page_addr(wqs);
+ return err;
+}
+
+void hifc_wqs_free(struct hifc_wqs *wqs)
+{
+ u32 page_idx;
+
+ for (page_idx = 0; page_idx < wqs->num_pages; page_idx++)
+ wqs_free_page(wqs, page_idx);
+
+ free_page_addr(wqs);
+ kfree(wqs->free_blocks);
+}
+
+static void copy_wqe_to_shadow(struct hifc_wq *wq, void *shadow_addr,
+ int num_wqebbs, u16 prod_idx)
+{
+ u8 *shadow_wqebb_addr, *wqe_page_addr, *wqebb_addr;
+ u32 i, offset;
+ u16 idx;
+
+ for (i = 0; i < (u32)num_wqebbs; i++) {
+ offset = i * wq->wqebb_size;
+ shadow_wqebb_addr = (u8 *)shadow_addr + offset;
+
+ idx = MASKED_WQE_IDX(wq, prod_idx + i);
+ wqe_page_addr = WQ_PAGE_ADDR(wq, idx);
+ wqebb_addr = wqe_page_addr +
+ WQE_PAGE_OFF(wq, MASKED_WQE_IDX(wq, idx));
+
+ memcpy(shadow_wqebb_addr, wqebb_addr, wq->wqebb_size);
+ }
+}
+
+void *hifc_get_wqebb_addr(struct hifc_wq *wq, u16 index)
+{
+ return WQ_PAGE_ADDR(wq, index) + WQE_PAGE_OFF(wq, index);
+}
+
+u64 hifc_get_first_wqe_page_addr(struct hifc_wq *wq)
+{
+ return be64_to_cpu(*wq->block_vaddr);
+}
+
+void *hifc_get_wqe(struct hifc_wq *wq, int num_wqebbs, u16 *prod_idx)
+{
+ u32 curr_pg, end_pg;
+ u16 curr_prod_idx, end_prod_idx;
+
+ if (atomic_sub_return(num_wqebbs, &wq->delta) < 0) {
+ atomic_add(num_wqebbs, &wq->delta);
+ return NULL;
+ }
+
+ /* use the original curr_pi and end_pi; no queue-depth mask is needed
+ * because WQE_PAGE_NUM already masks by the number of queue pages
+ */
+ curr_prod_idx = (u16)wq->prod_idx;
+ wq->prod_idx += num_wqebbs;
+
+ /* the end prod index should point to the last wqebb of the wqe,
+ * therefore minus 1
+ */
+ end_prod_idx = (u16)wq->prod_idx - 1;
+
+ curr_pg = WQE_PAGE_NUM(wq, curr_prod_idx);
+ end_pg = WQE_PAGE_NUM(wq, end_prod_idx);
+
+ *prod_idx = MASKED_WQE_IDX(wq, curr_prod_idx);
+
+ /* Even if there is only one page, the shadow wqe is still needed
+ * when the wqe rolls over the page boundary
+ */
+ if (curr_pg != end_pg || MASKED_WQE_IDX(wq, end_prod_idx) < *prod_idx) {
+ u32 offset = curr_pg * wq->max_wqe_size;
+ u8 *shadow_addr = wq->shadow_wqe + offset;
+
+ wq->shadow_idx[curr_pg] = *prod_idx;
+ return shadow_addr;
+ }
+
+ return WQ_PAGE_ADDR(wq, *prod_idx) + WQE_PAGE_OFF(wq, *prod_idx);
+}
+
+void hifc_put_wqe(struct hifc_wq *wq, int num_wqebbs)
+{
+ atomic_add(num_wqebbs, &wq->delta);
+ wq->cons_idx += num_wqebbs;
+}
+
+void *hifc_read_wqe(struct hifc_wq *wq, int num_wqebbs, u16 *cons_idx)
+{
+ u32 curr_pg, end_pg;
+ u16 curr_cons_idx, end_cons_idx;
+
+ if ((atomic_read(&wq->delta) + num_wqebbs) > wq->q_depth)
+ return NULL;
+
+ curr_cons_idx = (u16)wq->cons_idx;
+
+ curr_cons_idx = MASKED_WQE_IDX(wq, curr_cons_idx);
+ end_cons_idx = MASKED_WQE_IDX(wq, curr_cons_idx + num_wqebbs - 1);
+
+ curr_pg = WQE_PAGE_NUM(wq, curr_cons_idx);
+ end_pg = WQE_PAGE_NUM(wq, end_cons_idx);
+
+ *cons_idx = curr_cons_idx;
+
+ if (curr_pg != end_pg) {
+ u32 offset = curr_pg * wq->max_wqe_size;
+ u8 *shadow_addr = wq->shadow_wqe + offset;
+
+ copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *cons_idx);
+
+ return shadow_addr;
+ }
+
+ return WQ_PAGE_ADDR(wq, *cons_idx) + WQE_PAGE_OFF(wq, *cons_idx);
+}
diff --git a/drivers/scsi/huawei/hifc/hifc_wq.h b/drivers/scsi/huawei/hifc/hifc_wq.h
new file mode 100644
index 000000000000..207d54191afa
--- /dev/null
+++ b/drivers/scsi/huawei/hifc/hifc_wq.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Huawei Hifc PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ */
+
+#ifndef HIFC_WQ_H
+#define HIFC_WQ_H
+
+#define WQS_BLOCKS_PER_PAGE 4
+#define WQ_SIZE(wq) (u32)((u64)(wq)->q_depth * (wq)->wqebb_size)
+
+#define WQE_PAGE_NUM(wq, idx) (((idx) >> ((wq)->wqebbs_per_page_shift)) & \
+ ((wq)->num_q_pages - 1))
+
+#define WQE_PAGE_OFF(wq, idx) ((u64)((wq)->wqebb_size) * \
+ ((idx) & ((wq)->num_wqebbs_per_page - 1)))
+
+#define WQ_PAGE_ADDR_SIZE sizeof(u64)
+#define WQ_PAGE_ADDR_SIZE_SHIFT 3
+#define WQ_PAGE_ADDR(wq, idx) \
+ (u8 *)(*(u64 *)((u64)((wq)->shadow_block_vaddr) + \
+ (WQE_PAGE_NUM(wq, idx) << WQ_PAGE_ADDR_SIZE_SHIFT)))
+
+#define WQ_BLOCK_SIZE 4096UL
+#define WQS_PAGE_SIZE (WQS_BLOCKS_PER_PAGE * WQ_BLOCK_SIZE)
+#define WQ_MAX_PAGES (WQ_BLOCK_SIZE >> WQ_PAGE_ADDR_SIZE_SHIFT)
+
+#define CMDQ_BLOCKS_PER_PAGE 8
+#define CMDQ_BLOCK_SIZE 512UL
+#define CMDQ_PAGE_SIZE ALIGN((CMDQ_BLOCKS_PER_PAGE * \
+ CMDQ_BLOCK_SIZE), PAGE_SIZE)
+
+#define ADDR_4K_ALIGNED(addr) (((addr) & 0xfff) == 0)
+
+#define WQ_BASE_VADDR(wqs, wq) \
+ (u64 *)(((u64)((wqs)->page_vaddr[(wq)->page_idx])) \
+ + (wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define WQ_BASE_PADDR(wqs, wq) (((wqs)->page_paddr[(wq)->page_idx]) \
+ + (u64)(wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define WQ_BASE_ADDR(wqs, wq) \
+ (u64 *)(((u64)((wqs)->shadow_page_vaddr[(wq)->page_idx])) \
+ + (wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_VADDR(cmdq_pages, wq) \
+ (u64 *)(((u64)((cmdq_pages)->cmdq_page_vaddr)) \
+ + (wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_PADDR(cmdq_pages, wq) \
+ (((u64)((cmdq_pages)->cmdq_page_paddr)) \
+ + (u64)(wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_ADDR(cmdq_pages, wq) \
+ (u64 *)(((u64)((cmdq_pages)->cmdq_shadow_page_vaddr)) \
+ + (wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define MASKED_WQE_IDX(wq, idx) ((idx) & (wq)->mask)
+
+#define WQ_NUM_PAGES(num_wqs) \
+ (ALIGN((u32)num_wqs, WQS_BLOCKS_PER_PAGE) / WQS_BLOCKS_PER_PAGE)
+
+#define MAX_WQE_SIZE(max_sge, wqebb_size) \
+ ((max_sge <= 2) ? (wqebb_size) : \
+ ((ALIGN(((max_sge) - 2), 4) / 4 + 1) * (wqebb_size)))
+
+struct hifc_free_block {
+ u32 page_idx;
+ u32 block_idx;
+};
+
+struct hifc_wq {
+ /* The addresses are 64 bit in the HW */
+ u64 block_paddr;
+ u64 *shadow_block_vaddr;
+ u64 *block_vaddr;
+
+ u32 wqebb_size;
+ u32 wq_page_size;
+ u16 q_depth;
+ u32 max_wqe_size;
+ u32 num_wqebbs_per_page;
+
+ /* performance: replace mul/div as shift;
+ * num_wqebbs_per_page must be power of 2
+ */
+ u32 wqebbs_per_page_shift;
+ u32 page_idx;
+ u32 block_idx;
+
+ u32 num_q_pages;
+
+ struct hifc_dma_addr_align *mem_align;
+
+ int cons_idx;
+ int prod_idx;
+
+ atomic_t delta;
+ u16 mask;
+
+ u8 *shadow_wqe;
+ u16 *shadow_idx;
+};
+
+struct hifc_cmdq_pages {
+ /* The addresses are 64 bit in the HW */
+ u64 cmdq_page_paddr;
+ u64 *cmdq_page_vaddr;
+ u64 *cmdq_shadow_page_vaddr;
+
+ void *dev_hdl;
+};
+
+struct hifc_wqs {
+ /* The addresses are 64 bit in the HW */
+ u64 *page_paddr;
+ u64 **page_vaddr;
+ u64 **shadow_page_vaddr;
+
+ struct hifc_free_block *free_blocks;
+ u32 alloc_blk_pos;
+ u32 return_blk_pos;
+ int num_free_blks;
+
+ /* for allocate blocks */
+ spinlock_t alloc_blocks_lock;
+
+ u32 num_pages;
+
+ void *dev_hdl;
+};
+
+void hifc_wq_wqe_pg_clear(struct hifc_wq *wq);
+
+int hifc_cmdq_alloc(struct hifc_cmdq_pages *cmdq_pages,
+ struct hifc_wq *wq, void *dev_hdl,
+ int cmdq_blocks, u32 wq_page_size, u32 wqebb_size,
+ u16 q_depth, u32 max_wqe_size);
+
+void hifc_cmdq_free(struct hifc_cmdq_pages *cmdq_pages,
+ struct hifc_wq *wq, int cmdq_blocks);
+
+int hifc_wqs_alloc(struct hifc_wqs *wqs, int num_wqs, void *dev_hdl);
+
+void hifc_wqs_free(struct hifc_wqs *wqs);
+
+int hifc_wq_allocate(struct hifc_wqs *wqs, struct hifc_wq *wq,
+ u32 wqebb_size, u32 wq_page_size, u16 q_depth,
+ u32 max_wqe_size);
+
+void hifc_wq_free(struct hifc_wqs *wqs, struct hifc_wq *wq);
+
+void *hifc_get_wqebb_addr(struct hifc_wq *wq, u16 index);
+
+u64 hifc_get_first_wqe_page_addr(struct hifc_wq *wq);
+
+void *hifc_get_wqe(struct hifc_wq *wq, int num_wqebbs, u16 *prod_idx);
+
+void hifc_put_wqe(struct hifc_wq *wq, int num_wqebbs);
+
+void *hifc_read_wqe(struct hifc_wq *wq, int num_wqebbs, u16 *cons_idx);
+
+#endif
+
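The work-queue interface declared above is consumed by the rest of the driver rather than exercised in this patch, so a minimal usage sketch may help. It only strings together functions declared in hifc_wq.h; the wqebb size, page size, queue depth and max wqe size are illustrative assumptions, and example_wq_usage() itself is hypothetical, not code from this series.

static int example_wq_usage(void *dev_hdl)
{
	struct hifc_wqs wqs = { 0 };
	struct hifc_wq wq = { 0 };
	u16 prod_idx;
	void *wqe;
	int err;

	/* back a single work queue with a shared wqs page pool */
	err = hifc_wqs_alloc(&wqs, 1, dev_hdl);
	if (err)
		return err;

	/* assumed sizes: 64B wqebb, 4K wq page, depth 256, max wqe 128B */
	err = hifc_wq_allocate(&wqs, &wq, 64, 4096, 256, 128);
	if (err)
		goto free_wqs;

	/* reserve two wqebbs for one wqe and fill them */
	wqe = hifc_get_wqe(&wq, 2, &prod_idx);
	if (wqe) {
		memset(wqe, 0, 2 * 64);
		/* once the hardware has consumed the wqe, return the wqebbs */
		hifc_put_wqe(&wq, 2);
	}

	hifc_wq_free(&wqs, &wq);
free_wqs:
	hifc_wqs_free(&wqs);
	return err;
}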
--
2.25.1
1
6
From: Peter Zijlstra <peterz(a)infradead.org>
mainline inclusion
from mainline-v5.2-rc1
commit dea2434c23c102b3e7d320849ec1cfeb432edb60
category: feature
bugzilla: NA
CVE: NA
-------------------
Write a comment explaining some of this..
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Acked-by: Will Deacon <will.deacon(a)arm.com>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar(a)linux.vnet.ibm.com>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Nick Piggin <npiggin(a)gmail.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Rik van Riel <riel(a)surriel.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/asm-generic/tlb.h | 119 +++++++++++++++++++++++++++++++++++++-
1 file changed, 116 insertions(+), 3 deletions(-)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 147381aad7cc..632b1cdce357 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -22,6 +22,118 @@
#ifdef CONFIG_MMU
+/*
+ * Generic MMU-gather implementation.
+ *
+ * The mmu_gather data structure is used by the mm code to implement the
+ * correct and efficient ordering of freeing pages and TLB invalidations.
+ *
+ * This correct ordering is:
+ *
+ * 1) unhook page
+ * 2) TLB invalidate page
+ * 3) free page
+ *
+ * That is, we must never free a page before we have ensured there are no live
+ * translations left to it. Otherwise it might be possible to observe (or
+ * worse, change) the page content after it has been reused.
+ *
+ * The mmu_gather API consists of:
+ *
+ * - tlb_gather_mmu() / tlb_finish_mmu(); start and finish a mmu_gather
+ *
+ * Finish in particular will issue a (final) TLB invalidate and free
+ * all (remaining) queued pages.
+ *
+ * - tlb_start_vma() / tlb_end_vma(); marks the start / end of a VMA
+ *
+ * Defaults to flushing at tlb_end_vma() to reset the range; helps when
+ * there's large holes between the VMAs.
+ *
+ * - tlb_remove_page() / __tlb_remove_page()
+ * - tlb_remove_page_size() / __tlb_remove_page_size()
+ *
+ * __tlb_remove_page_size() is the basic primitive that queues a page for
+ * freeing. __tlb_remove_page() assumes PAGE_SIZE. Both will return a
+ * boolean indicating if the queue is (now) full and a call to
+ * tlb_flush_mmu() is required.
+ *
+ * tlb_remove_page() and tlb_remove_page_size() imply the call to
+ * tlb_flush_mmu() when required and has no return value.
+ *
+ * - tlb_remove_check_page_size_change()
+ *
+ * call before __tlb_remove_page*() to set the current page-size; implies a
+ * possible tlb_flush_mmu() call.
+ *
+ * - tlb_flush_mmu() / tlb_flush_mmu_tlbonly() / tlb_flush_mmu_free()
+ *
+ * tlb_flush_mmu_tlbonly() - does the TLB invalidate (and resets
+ * related state, like the range)
+ *
+ * tlb_flush_mmu_free() - frees the queued pages; make absolutely
+ * sure no additional tlb_remove_page()
+ * calls happen between _tlbonly() and this.
+ *
+ * tlb_flush_mmu() - the above two calls.
+ *
+ * - mmu_gather::fullmm
+ *
+ * A flag set by tlb_gather_mmu() to indicate we're going to free
+ * the entire mm; this allows a number of optimizations.
+ *
+ * - We can ignore tlb_{start,end}_vma(); because we don't
+ * care about ranges. Everything will be shot down.
+ *
+ * - (RISC) architectures that use ASIDs can cycle to a new ASID
+ * and delay the invalidation until ASID space runs out.
+ *
+ * - mmu_gather::need_flush_all
+ *
+ * A flag that can be set by the arch code if it wants to force
+ * flush the entire TLB irrespective of the range. For instance
+ * x86-PAE needs this when changing top-level entries.
+ *
+ * And requires the architecture to provide and implement tlb_flush().
+ *
+ * tlb_flush() may, in addition to the above mentioned mmu_gather fields, make
+ * use of:
+ *
+ * - mmu_gather::start / mmu_gather::end
+ *
+ * which provides the range that needs to be flushed to cover the pages to
+ * be freed.
+ *
+ * - mmu_gather::freed_tables
+ *
+ * set when we freed page table pages
+ *
+ * - tlb_get_unmap_shift() / tlb_get_unmap_size()
+ *
+ * returns the smallest TLB entry size unmapped in this range
+ *
+ * Additionally there are a few opt-in features:
+ *
+ * HAVE_RCU_TABLE_FREE
+ *
+ * This provides tlb_remove_table(), to be used instead of tlb_remove_page()
+ * for page directories (__p*_free_tlb()). This provides separate freeing of
+ * the page-table pages themselves in a semi-RCU fashion (see comment below).
+ * Useful if your architecture doesn't use IPIs for remote TLB invalidates
+ * and therefore doesn't naturally serialize with software page-table walkers.
+ *
+ * When used, an architecture is expected to provide __tlb_remove_table()
+ * which does the actual freeing of these pages.
+ *
+ * HAVE_RCU_TABLE_INVALIDATE
+ *
+ * This makes HAVE_RCU_TABLE_FREE call tlb_flush_mmu_tlbonly() before freeing
+ * the page-table pages. Required if you use HAVE_RCU_TABLE_FREE and your
+ * architecture uses the Linux page-tables natively.
+ *
+ */
+#define HAVE_GENERIC_MMU_GATHER
+
#ifdef CONFIG_HAVE_RCU_TABLE_FREE
/*
* Semi RCU freeing of the page directories.
@@ -89,14 +201,17 @@ struct mmu_gather_batch {
*/
#define MAX_GATHER_BATCH_COUNT (10000UL/MAX_GATHER_BATCH)
-/* struct mmu_gather is an opaque type used by the mm code for passing around
+/*
+ * struct mmu_gather is an opaque type used by the mm code for passing around
* any data needed by arch specific code for tlb_remove_page.
*/
struct mmu_gather {
struct mm_struct *mm;
+
#ifdef CONFIG_HAVE_RCU_TABLE_FREE
struct mmu_table_batch *batch;
#endif
+
unsigned long start;
unsigned long end;
/*
@@ -131,8 +246,6 @@ struct mmu_gather {
int page_size;
};
-#define HAVE_GENERIC_MMU_GATHER
-
void arch_tlb_gather_mmu(struct mmu_gather *tlb,
struct mm_struct *mm, unsigned long start, unsigned long end);
void tlb_flush_mmu(struct mmu_gather *tlb);
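To make the unhook/invalidate/free ordering documented above concrete, a typical caller of the generic mmu_gather API looks roughly like the sketch below. It only illustrates the tlb_gather_mmu()/tlb_remove_page()/tlb_finish_mmu() sequence described in the new comment; example_clear_next_pte() is a hypothetical stand-in for the real page-table walk, and all locking is elided.

#include <asm/tlb.h>	/* generic mmu_gather declarations */

static void example_unmap_range(struct mm_struct *mm,
				unsigned long start, unsigned long end)
{
	struct mmu_gather tlb;
	struct page *page;

	tlb_gather_mmu(&tlb, mm, start, end);

	/* 1) unhook pages and queue them for freeing; tlb_remove_page()
	 *    performs the flush itself whenever its batch fills up.
	 *    example_clear_next_pte() (hypothetical) clears one PTE in the
	 *    range and returns the page it mapped, or NULL when done.
	 */
	while ((page = example_clear_next_pte(mm, &start, end)) != NULL)
		tlb_remove_page(&tlb, page);

	/* 2) + 3) final TLB invalidate for the range, then free the
	 *    remaining queued pages
	 */
	tlb_finish_mmu(&tlb, start, end);
}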
--
2.25.1
1
7
From: Yang Shi <shy828301(a)gmail.com>
mainline inclusion
from mainline-v5.9-rc4
commit 7867fd7cc44e63c6673cd0f8fea155456d34d0de
category: bugfix
bugzilla: 42216
CVE: NA
-------------------------------------------------
The syzbot reported the below use-after-free:
BUG: KASAN: use-after-free in madvise_willneed mm/madvise.c:293 [inline]
BUG: KASAN: use-after-free in madvise_vma mm/madvise.c:942 [inline]
BUG: KASAN: use-after-free in do_madvise.part.0+0x1c8b/0x1cf0 mm/madvise.c:1145
Read of size 8 at addr ffff8880a6163eb0 by task syz-executor.0/9996
CPU: 0 PID: 9996 Comm: syz-executor.0 Not tainted 5.9.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x18f/0x20d lib/dump_stack.c:118
print_address_description.constprop.0.cold+0xae/0x497 mm/kasan/report.c:383
__kasan_report mm/kasan/report.c:513 [inline]
kasan_report.cold+0x1f/0x37 mm/kasan/report.c:530
madvise_willneed mm/madvise.c:293 [inline]
madvise_vma mm/madvise.c:942 [inline]
do_madvise.part.0+0x1c8b/0x1cf0 mm/madvise.c:1145
do_madvise mm/madvise.c:1169 [inline]
__do_sys_madvise mm/madvise.c:1171 [inline]
__se_sys_madvise mm/madvise.c:1169 [inline]
__x64_sys_madvise+0xd9/0x110 mm/madvise.c:1169
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Allocated by task 9992:
kmem_cache_alloc+0x138/0x3a0 mm/slab.c:3482
vm_area_alloc+0x1c/0x110 kernel/fork.c:347
mmap_region+0x8e5/0x1780 mm/mmap.c:1743
do_mmap+0xcf9/0x11d0 mm/mmap.c:1545
vm_mmap_pgoff+0x195/0x200 mm/util.c:506
ksys_mmap_pgoff+0x43a/0x560 mm/mmap.c:1596
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Freed by task 9992:
kmem_cache_free.part.0+0x67/0x1f0 mm/slab.c:3693
remove_vma+0x132/0x170 mm/mmap.c:184
remove_vma_list mm/mmap.c:2613 [inline]
__do_munmap+0x743/0x1170 mm/mmap.c:2869
do_munmap mm/mmap.c:2877 [inline]
mmap_region+0x257/0x1780 mm/mmap.c:1716
do_mmap+0xcf9/0x11d0 mm/mmap.c:1545
vm_mmap_pgoff+0x195/0x200 mm/util.c:506
ksys_mmap_pgoff+0x43a/0x560 mm/mmap.c:1596
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9
It is because vma is accessed after releasing mmap_lock, but someone
else acquired the mmap_lock and the vma is gone.
Releasing mmap_lock after accessing vma should fix the problem.
Fixes: 692fe62433d4c ("mm: Handle MADV_WILLNEED through vfs_fadvise()")
Reported-by: syzbot+b90df26038d1d5d85c97(a)syzkaller.appspotmail.com
Signed-off-by: Yang Shi <shy828301(a)gmail.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Reviewed-by: Andrew Morton <akpm(a)linux-foundation.org>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Cc: <stable(a)vger.kernel.org> [5.4+]
Link: https://lkml.kernel.org/r/20200816141204.162624-1-shy828301@gmail.com
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/madvise.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 464282e24a30..1369e6d062bc 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -308,9 +308,9 @@ static long madvise_willneed(struct vm_area_struct *vma,
*/
*prev = NULL; /* tell sys_madvise we drop mmap_sem */
get_file(file);
- up_read(&current->mm->mmap_sem);
offset = (loff_t)(start - vma->vm_start)
+ ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
+ up_read(&current->mm->mmap_sem);
vfs_fadvise(file, offset, end - start, POSIX_FADV_WILLNEED);
fput(file);
down_read(&current->mm->mmap_sem);
--
2.25.1
1
2
Alain Volmat (1):
cpufreq: sti-cpufreq: add stih418 support
Aleksandr Nogikh (1):
netem: fix zero division in tabledist
Alex Hung (1):
ACPI: video: use ACPI backlight for HP 635 Notebook
Alexander Sverdlin (2):
staging: octeon: repair "fixed-link" support
staging: octeon: Drop on uncorrectable alignment or FCS error
Alok Prasad (1):
RDMA/qedr: Fix memory leak in iWARP CM
Amit Cohen (1):
mlxsw: core: Fix use-after-free in mlxsw_emad_trans_finish()
Anand Jain (2):
btrfs: fix replace of seed device
btrfs: improve device scanning messages
Anant Thazhemadam (2):
net: 9p: initialize sun_server.sun_path to have addr's value only when
addr is valid
gfs2: add validation checks for size of superblock
Andrew Donnellan (1):
powerpc/rtas: Restrict RTAS requests from userspace
Andrew Gabbasov (1):
ravb: Fix bit fields checking in ravb_hwtstamp_get()
Andy Shevchenko (2):
device property: Keep secondary firmware node secondary by type
device property: Don't clear secondary pointer for shared primary
firmware node
Aneesh Kumar K.V (1):
powerpc/drmem: Make lmb_size 64 bit
Antonio Borneo (1):
drm/bridge/synopsys: dsi: add support for non-continuous HS clock
Arjun Roy (1):
tcp: Prevent low rmem stalls with SO_RCVLOWAT.
Ashish Sangwan (1):
NFS: fix nfs_path in case of a rename retry
Badhri Jagan Sridharan (1):
usb: typec: tcpm: During PR_SWAP, source caps should be sent only
after tSwapSourceStart
Bartosz Golaszewski (1):
rtc: rx8010: don't modify the global rtc ops
Ben Hutchings (1):
ACPI / extlog: Check for RDMSR failure
Chao Leng (1):
nvme-rdma: fix crash when connect rejected
Chao Yu (2):
f2fs: fix uninit-value in f2fs_lookup
f2fs: fix to check segment boundary during SIT page readahead
Chris Lew (1):
rpmsg: glink: Use complete_all for open states
Chris Wilson (1):
drm/i915: Force VT'd workarounds when running as a guest OS
Chuck Lever (1):
NFSD: Add missing NFSv2 .pc_func methods
Dan Carpenter (1):
memory: emif: Remove bogus debugfs error handling
Daniel W. S. Almeida (1):
media: uvcvideo: Fix dereference of out-of-bound list iterator
Darrick J. Wong (2):
xfs: fix realtime bitmap/summary file truncation when growing rt
volume
xfs: don't free rt blocks when we're doing a REMAP bunmapi call
Dave Airlie (1):
drm/ttm: fix eviction valuable range check.
Denis Efremov (1):
btrfs: use kvzalloc() to allocate clone_roots in btrfs_ioctl_send()
Diana Craciun (1):
bus/fsl_mc: Do not rely on caller to provide non NULL mc_io
Dinghao Liu (1):
ext4: fix error handling code in add_new_gdb
Douglas Anderson (2):
ARM: 8997/2: hw_breakpoint: Handle inexact watchpoint addresses
kgdb: Make "kgdbcon" work properly with "kgdb_earlycon"
Douglas Gilbert (1):
sgl_alloc_order: fix memory leak
Eric Biggers (7):
fscrypt: return -EXDEV for incompatible rename or link into encrypted
dir
fscrypt: clean up and improve dentry revalidation
fscrypt: fix race allowing rename() and link() of ciphertext dentries
fs, fscrypt: clear DCACHE_ENCRYPTED_NAME when unaliasing directory
fscrypt: only set dentry_operations on ciphertext dentries
fscrypt: fix race where ->lookup() marks plaintext dentry as
ciphertext
ext4: fix leaking sysfs kobject after failed mount
Fangzhi Zuo (1):
drm/amd/display: HDMI remote sink need mode validation for Linux
Filipe Manana (3):
btrfs: reschedule if necessary when logging directory items
btrfs: send, recompute reference path after orphanization of a
directory
btrfs: fix use-after-free on readahead extent after failure to create
it
Frank Wunderlich (1):
arm: dts: mt7623: add missing pause for switchport
Frederic Barrat (1):
cxl: Rework error message for incompatible slots
Geert Uytterhoeven (1):
ata: sata_rcar: Fix DMA boundary mask
Greg Kroah-Hartman (2):
Revert "block: ratelimit handle_bad_sector() message"
Linux 4.19.155
Gustavo A. R. Silva (1):
mtd: lpddr: Fix bad logic in print_drs_error
Hans Verkuil (2):
media: videodev2.h: RGB BT2020 and HSV are always full range
media: imx274: fix frame interval handling
Hans de Goede (1):
media: uvcvideo: Fix uvc_ctrl_fixup_xu_info() not having any effect
Heiner Kallweit (1):
r8169: fix issue with forced threading in combination with shared
interrupts
Helge Deller (2):
scsi: mptfusion: Fix null pointer dereferences in mptscsih_remove()
hil/parisc: Disable HIL driver when it gets stuck
Ian Abbott (1):
staging: comedi: cb_pcidas: Allow 2-channel commands for AO subdevice
Ido Schimmel (1):
mlxsw: core: Fix memory leak on module removal
Ilya Dryomov (1):
libceph: clear con->out_msg on Policy::stateful_server faults
Jamie Iles (1):
ACPI: debug: don't allow debugging when ACPI is disabled
Jan Kara (2):
ext4: Detect already used quota file early
udf: Fix memory leak when mounting
Jason Gerecke (1):
HID: wacom: Avoid entering wacom_wac_pen_report for pad / battery
Jason Gunthorpe (1):
RDMA/addr: Fix race with netevent_callback()/rdma_addr_cancel()
Jerome Brunet (1):
usb: cdc-acm: fix cooldown mechanism
Jia-Ju Bai (1):
p54: avoid accessing the data mapped to streaming DMA
Jiri Olsa (1):
perf python scripting: Fix printable strings in python3 scripts
Jiri Slaby (1):
x86/unwind/orc: Fix inactive tasks with stack pointer in %sp on GCC 10
compiled kernels
Jisheng Zhang (1):
arm64: berlin: Select DW_APB_TIMER_OF
Joel Stanley (1):
powerpc: Warn about use of smt_snooze_delay
Johannes Berg (1):
um: change sigio_spinlock to a mutex
John Ogness (1):
printk: reduce LOG_BUF_SHIFT range for H8300
Jonathan Cameron (5):
ACPI: Add out of bounds and numa_off protections to pxm_to_node()
iio:light:si1145: Fix timestamp alignment and prevent data leak.
iio:adc:ti-adc0832 Fix alignment issue with timestamp
iio:adc:ti-adc12138 Fix alignment issue with timestamp
iio:gyro:itg3200: Fix timestamp alignment and prevent data leak.
Josef Bacik (1):
btrfs: cleanup cow block on error
Josh Poimboeuf (1):
objtool: Support Clang non-section symbols in ORC generation
Juergen Gross (2):
x86/xen: disable Firmware First mode for correctable memory errors
xen/events: block rogue events for some time
Kim Phillips (3):
arch/x86/amd/ibs: Fix re-arming IBS Fetch
perf/x86/amd/ibs: Don't include randomized bits in get_ibs_op_count()
perf/x86/amd/ibs: Fix raw sample data accumulation
Krzysztof Kozlowski (8):
power: supply: bq27xxx: report "not charging" on all types
ARM: dts: s5pv210: remove DMA controller bus node name to fix dtschema
warnings
ARM: dts: s5pv210: move PMU node out of clock controller
ARM: dts: s5pv210: remove dedicated 'audio-subsystem' node
ia64: fix build error with !COREDUMP
i2c: imx: Fix external abort on interrupt in exit paths
ARM: samsung: fix PM debug build with DEBUG_LL but !MMU
ARM: s3c24xx: fix missing system reset
Lang Dai (1):
uio: free uio id after uio file node is freed
Li Jun (3):
usb: dwc3: core: add phy cleanup for probe error handling
usb: dwc3: core: don't trigger runtime pm when remove driver
usb: typec: tcpm: reset hard_reset_count for any disconnect
Luo Meng (1):
ext4: fix invalid inode checksum
Madhav Chauhan (1):
drm/amdgpu: don't map BO in reserved region
Madhuparna Bhowmik (2):
mmc: via-sdmmc: Fix data race bug
drivers: watchdog: rdc321x_wdt: Fix race condition bugs
Mahesh Salgaonkar (1):
powerpc/powernv/elog: Fix race while processing OPAL error log event.
Marc Zyngier (2):
arm64: Run ARCH_WORKAROUND_1 enabling code on all CPUs
KVM: arm64: Fix AArch32 handling of DBGD{CCINT, SCRext} and DBGVCR
Marek Behún (1):
leds: bcm6328, bcm6358: use devres LED registering function
Martin Fuzzey (1):
w1: mxc_w1: Fix timeout resolution problem leading to bus error
Masahiro Fujiwara (1):
gtp: fix an use-before-init in gtp_newlink()
Masami Hiramatsu (1):
ia64: kprobes: Use generic kretprobe trampoline handler
Mateusz Nosek (1):
futex: Fix incorrect should_fail_futex() handling
Matthew Wilcox (Oracle) (3):
ceph: promote to unsigned long long before shifting
9P: Cast to loff_t before multiplying
cachefiles: Handle readpage error correctly
Michael Chan (1):
bnxt_en: Log unknown link speed appropriately.
Michael Neuling (1):
powerpc: Fix undetected data corruption with P9N DD2.1 VSX CI load
emulation
Michael Schaller (1):
efivarfs: Replace invalid slashes with exclamation marks in dentries.
Miklos Szeredi (1):
fuse: fix page dereference after free
Nadezda Lutovinova (1):
drm/brige/megachips: Add checking if ge_b850v3_lvds_init() is working
correctly
Nicholas Piggin (3):
mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race
powerpc: select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
sparc64: remove mm_cpumask clearing to fix kthread_use_mm race
Nick Desaulniers (1):
arm64: link with -z norelro regardless of CONFIG_RELOCATABLE
Olga Kornievskaia (1):
NFSv4.2: support EXCHGID4_FLAG_SUPP_FENCE_OPS 4.2 EXCHANGE_ID flag
Oliver Neukum (1):
USB: adutux: fix debugging
Oliver O'Halloran (1):
powerpc/powernv/smp: Fix spurious DBG() warning
Paul Cercueil (1):
dmaengine: dma-jz4780: Fix race in jz4780_dma_tx_status
Peter Chen (1):
usb: xhci: omit duplicate actions when suspending a runtime suspended
host.
Peter Zijlstra (1):
serial: pl011: Fix lockdep splat when handling magic-sysrq interrupt
Qiujun Huang (1):
ring-buffer: Return 0 on success from ring_buffer_resize()
Qu Wenruo (1):
btrfs: qgroup: fix wrong qgroup metadata reserve for delayed inode
Quinn Tran (1):
scsi: qla2xxx: Fix crash on session cleanup with unload
Raju Rangoju (1):
cxgb4: set up filter action after rewrites
Ran Wang (1):
usb: host: fsl-mph-dr-of: check return of dma_set_mask()
Randy Dunlap (1):
x86/PCI: Fix intel_mid_pci.c build error when ACPI is not enabled
Rasmus Villemoes (1):
scripts/setlocalversion: make git describe output more reliable
Raul E Rangel (1):
mmc: sdhci-acpi: AMDI0040: Set SDHCI_QUIRK2_PRESET_VALUE_BROKEN
Ronnie Sahlberg (1):
cifs: handle -EINTR in cifs_setattr
Sandeep Singh (1):
usb: xhci: Workaround for S3 issue on AMD SNPS 3.0 xHC
Sascha Hauer (1):
ata: sata_nv: Fix retrieving of active qcs
Sathishkumar Muruganandam (1):
ath10k: fix VHT NSS calculation when STBC is enabled
Song Liu (2):
bpf: Fix comment for helper bpf_current_task_under_cgroup()
md/raid5: fix oops during stripe resizing
Stefano Garzarella (1):
vringh: fix __vringh_iov() when riov and wiov are different
Sven Schnelle (1):
s390/stp: add locking to sysfs functions
Takashi Iwai (1):
drm/amd/display: Don't invoke kgdb_breakpoint() unconditionally
Tero Kristo (1):
clk: ti: clockdomain: fix static checker warning
Thinh Nguyen (2):
usb: dwc3: ep0: Fix ZLP for OUT ep0 requests
usb: dwc3: gadget: Check MPS of the request length
Tom Rix (2):
video: fbdev: pvr2fb: initialize variables
media: tw5864: check status of tw5864_frameinterval_get
Tony Lindgren (1):
ARM: dts: omap4: Fix sgx clock rate for 4430
Tung Nguyen (1):
tipc: fix memory leak caused by tipc_buf_append()
Valentin Schneider (1):
arm64: topology: Stop using MPIDR for topology information
Vinay Kumar Yadav (3):
chelsio/chtls: fix deadlock issue
chelsio/chtls: fix memory leaks in CPL handlers
chelsio/chtls: fix tls record info to user
Wei Huang (1):
acpi-cpufreq: Honor _PSD table setting on new AMD CPUs
Wen Gong (1):
ath10k: start recovery process when payload length exceeds max htc
length for sdio
Xia Jiang (1):
media: platform: Improve queue set up flow for bug fixing
Xie He (1):
drivers/net/wan/hdlc_fr: Correctly handle special skb->protocol values
Xiongfeng Wang (1):
power: supply: test_power: add missing newlines when printing
parameters by sysfs
Xiubo Li (1):
nbd: make the config put is called before the notifying the waiter
Yoshihiro Shimoda (1):
arm64: dts: renesas: ulcb: add full-pwr-cycle-in-suspend into eMMC
nodes
Zhang Qilong (1):
f2fs: add trace exit in exception path
Zhao Heming (1):
md/bitmap: md_bitmap_get_counter returns wrong blocks
Zhengyuan Liu (1):
arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
Zhihao Cheng (2):
ubifs: dent: Fix some potential memory leaks while iterating entries
ubi: check kthread_should_stop() after the setting of task state
Zong Li (1):
riscv: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO
dmitry.torokhov(a)gmail.com (1):
ACPI: button: fix handling lid state changes when input device closed
Documentation/filesystems/fscrypt.rst | 12 +-
.../media/uapi/v4l/colorspaces-defs.rst | 9 +-
.../media/uapi/v4l/colorspaces-details.rst | 5 +-
Makefile | 2 +-
arch/Kconfig | 7 +
arch/arm/Kconfig | 2 +
arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts | 1 +
arch/arm/boot/dts/omap4.dtsi | 2 +-
arch/arm/boot/dts/omap443x.dtsi | 10 ++
arch/arm/boot/dts/s5pv210.dtsi | 127 +++++++--------
arch/arm/kernel/hw_breakpoint.c | 100 ++++++++----
arch/arm/plat-samsung/Kconfig | 1 +
arch/arm64/Kconfig.platforms | 1 +
arch/arm64/Makefile | 4 +-
arch/arm64/boot/dts/renesas/ulcb.dtsi | 1 +
arch/arm64/include/asm/kvm_host.h | 1 +
arch/arm64/include/asm/numa.h | 3 +
arch/arm64/kernel/cpu_errata.c | 8 +
arch/arm64/kernel/topology.c | 43 ++---
arch/arm64/kvm/sys_regs.c | 6 +-
arch/arm64/mm/numa.c | 6 +-
arch/ia64/kernel/Makefile | 2 +-
arch/ia64/kernel/kprobes.c | 77 +--------
arch/powerpc/Kconfig | 14 ++
arch/powerpc/include/asm/drmem.h | 4 +-
arch/powerpc/include/asm/mmu_context.h | 2 +-
arch/powerpc/kernel/rtas.c | 153 ++++++++++++++++++
arch/powerpc/kernel/sysfs.c | 42 ++---
arch/powerpc/kernel/traps.c | 2 +-
arch/powerpc/platforms/powernv/opal-elog.c | 33 +++-
arch/powerpc/platforms/powernv/smp.c | 2 +-
arch/riscv/include/uapi/asm/auxvec.h | 3 +
arch/s390/kernel/time.c | 118 ++++++++++----
arch/sparc/kernel/smp_64.c | 65 ++------
arch/um/kernel/sigio.c | 6 +-
arch/x86/events/amd/ibs.c | 53 ++++--
arch/x86/include/asm/msr-index.h | 1 +
arch/x86/kernel/unwind_orc.c | 9 +-
arch/x86/pci/intel_mid_pci.c | 1 +
arch/x86/xen/enlighten_pv.c | 9 ++
block/blk-core.c | 9 +-
drivers/acpi/acpi_dbg.c | 3 +
drivers/acpi/acpi_extlog.c | 6 +-
drivers/acpi/button.c | 13 +-
drivers/acpi/numa.c | 2 +-
drivers/acpi/video_detect.c | 9 ++
drivers/ata/sata_nv.c | 2 +-
drivers/ata/sata_rcar.c | 2 +-
drivers/base/core.c | 4 +-
drivers/block/nbd.c | 2 +-
drivers/bus/fsl-mc/mc-io.c | 7 +-
drivers/clk/ti/clockdomain.c | 2 +
drivers/cpufreq/acpi-cpufreq.c | 3 +-
drivers/cpufreq/sti-cpufreq.c | 6 +-
drivers/crypto/chelsio/chtls/chtls_cm.c | 29 ++--
drivers/crypto/chelsio/chtls/chtls_io.c | 7 +-
drivers/dma/dma-jz4780.c | 7 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 10 ++
drivers/gpu/drm/amd/display/dc/core/dc_link.c | 2 +-
drivers/gpu/drm/amd/display/dc/os_types.h | 2 +-
.../bridge/megachips-stdpxxxx-ge-b850v3-fw.c | 12 +-
drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c | 9 +-
drivers/gpu/drm/i915/i915_drv.h | 6 +-
drivers/gpu/drm/ttm/ttm_bo.c | 2 +-
drivers/hid/wacom_wac.c | 4 +-
drivers/i2c/busses/i2c-imx.c | 24 +--
drivers/iio/adc/ti-adc0832.c | 11 +-
drivers/iio/adc/ti-adc12138.c | 13 +-
drivers/iio/gyro/itg3200_buffer.c | 15 +-
drivers/iio/light/si1145.c | 19 ++-
drivers/infiniband/core/addr.c | 11 +-
drivers/infiniband/hw/qedr/qedr_iw_cm.c | 1 +
drivers/input/serio/hil_mlc.c | 21 ++-
drivers/input/serio/hp_sdc_mlc.c | 8 +-
drivers/leds/leds-bcm6328.c | 2 +-
drivers/leds/leds-bcm6358.c | 2 +-
drivers/md/md-bitmap.c | 2 +-
drivers/md/raid5.c | 4 +-
drivers/media/i2c/imx274.c | 8 +-
drivers/media/pci/tw5864/tw5864-video.c | 6 +
.../media/platform/mtk-jpeg/mtk_jpeg_core.c | 7 +
drivers/media/usb/uvc/uvc_ctrl.c | 27 ++--
drivers/memory/emif.c | 33 +---
drivers/message/fusion/mptscsih.c | 13 +-
drivers/misc/cxl/pci.c | 4 +-
drivers/mmc/host/sdhci-acpi.c | 37 +++++
drivers/mmc/host/via-sdmmc.c | 3 +
drivers/mtd/ubi/wl.c | 13 ++
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 6 +-
.../net/ethernet/chelsio/cxgb4/cxgb4_filter.c | 56 ++++---
drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h | 4 +
drivers/net/ethernet/mellanox/mlxsw/core.c | 5 +
drivers/net/ethernet/realtek/r8169.c | 4 +-
drivers/net/ethernet/renesas/ravb_main.c | 10 +-
drivers/net/gtp.c | 16 +-
drivers/net/wan/hdlc_fr.c | 98 +++++------
drivers/net/wireless/ath/ath10k/htt_rx.c | 8 +-
drivers/net/wireless/ath/ath10k/sdio.c | 4 +
drivers/net/wireless/intersil/p54/p54pci.c | 4 +-
drivers/nvme/host/rdma.c | 1 -
drivers/power/supply/bq27xxx_battery.c | 6 +-
drivers/power/supply/test_power.c | 6 +
drivers/rpmsg/qcom_glink_native.c | 6 +-
drivers/rtc/rtc-rx8010.c | 24 ++-
drivers/scsi/qla2xxx/qla_target.c | 13 +-
drivers/staging/comedi/drivers/cb_pcidas.c | 1 +
drivers/staging/octeon/ethernet-mdio.c | 6 -
drivers/staging/octeon/ethernet-rx.c | 34 ++--
drivers/staging/octeon/ethernet.c | 9 ++
drivers/tty/serial/amba-pl011.c | 11 +-
drivers/uio/uio.c | 4 +-
drivers/usb/class/cdc-acm.c | 12 +-
drivers/usb/class/cdc-acm.h | 3 +-
drivers/usb/dwc3/core.c | 15 +-
drivers/usb/dwc3/ep0.c | 11 +-
drivers/usb/dwc3/gadget.c | 4 +-
drivers/usb/host/fsl-mph-dr-of.c | 9 +-
drivers/usb/host/xhci-pci.c | 17 ++
drivers/usb/host/xhci.c | 7 +-
drivers/usb/host/xhci.h | 1 +
drivers/usb/misc/adutux.c | 1 +
drivers/usb/typec/tcpm.c | 8 +-
drivers/vhost/vringh.c | 9 +-
drivers/video/fbdev/pvr2fb.c | 2 +
drivers/w1/masters/mxc_w1.c | 14 +-
drivers/watchdog/rdc321x_wdt.c | 5 +-
drivers/xen/events/events_base.c | 27 +++-
drivers/xen/events/events_internal.h | 3 +-
fs/9p/vfs_file.c | 4 +-
fs/btrfs/ctree.c | 6 +
fs/btrfs/delayed-inode.c | 3 +-
fs/btrfs/dev-replace.c | 2 +-
fs/btrfs/reada.c | 2 +
fs/btrfs/send.c | 74 ++++++++-
fs/btrfs/tree-log.c | 8 +
fs/btrfs/volumes.c | 14 +-
fs/cachefiles/rdwr.c | 3 +-
fs/ceph/addr.c | 2 +-
fs/cifs/inode.c | 13 +-
fs/crypto/crypto.c | 58 +++----
fs/crypto/fname.c | 1 +
fs/crypto/hooks.c | 34 ++--
fs/crypto/policy.c | 3 +-
fs/dcache.c | 15 ++
fs/efivarfs/super.c | 3 +
fs/exec.c | 15 +-
fs/ext4/ext4.h | 62 +++++--
fs/ext4/inode.c | 11 +-
fs/ext4/namei.c | 76 ++++++---
fs/ext4/resize.c | 4 +-
fs/ext4/super.c | 6 +
fs/f2fs/checkpoint.c | 8 +-
fs/f2fs/dir.c | 8 +-
fs/f2fs/namei.c | 17 +-
fs/fuse/dev.c | 28 ++--
fs/gfs2/ops_fstype.c | 18 ++-
fs/nfs/namespace.c | 12 +-
fs/nfs/nfs4proc.c | 9 +-
fs/nfsd/nfsproc.c | 16 ++
fs/ubifs/debug.c | 1 +
fs/ubifs/dir.c | 8 +-
fs/udf/super.c | 21 ++-
fs/xfs/libxfs/xfs_bmap.c | 19 ++-
fs/xfs/xfs_rtalloc.c | 10 +-
include/linux/dcache.h | 2 +-
include/linux/fscrypt.h | 34 ++--
include/linux/fscrypt_notsupp.h | 9 +-
include/linux/fscrypt_supp.h | 6 +-
include/linux/hil_mlc.h | 2 +-
include/linux/mtd/pfow.h | 2 +-
include/linux/usb/pd.h | 1 +
include/uapi/linux/bpf.h | 4 +-
include/uapi/linux/nfs4.h | 3 +
include/uapi/linux/videodev2.h | 17 +-
init/Kconfig | 3 +-
kernel/debug/debug_core.c | 22 ++-
kernel/futex.c | 4 +-
kernel/trace/ring_buffer.c | 8 +-
lib/scatterlist.c | 2 +-
net/9p/trans_fd.c | 2 +-
net/ceph/messenger.c | 5 +
net/ipv4/tcp.c | 2 +
net/ipv4/tcp_input.c | 3 +-
net/sched/sch_netem.c | 9 +-
net/tipc/msg.c | 5 +-
scripts/setlocalversion | 21 ++-
tools/include/uapi/linux/bpf.h | 4 +-
tools/objtool/orc_gen.c | 33 +++-
tools/perf/util/print_binary.c | 2 +-
189 files changed, 1746 insertions(+), 928 deletions(-)
--
2.25.1
1
174

[PATCH 1/5] partitions/efi: Fix partition name parsing in GUID partition entry
by Yang Yingliang 16 Nov '20
by Yang Yingliang 16 Nov '20
16 Nov '20
From: Nikolai Merinov <n.merinov(a)inango-systems.com>
mainline inclusion
from mainline-5.7-rc1
commit d5528d5e91041e68e8eab9792ce627705a0ed273
category: bugfix
bugzilla: 32454
CVE: NA
---------------------------
A GUID partition entry is defined to hold the partition name as 36 UTF-16LE
code units. This means that on big-endian platforms ASCII symbols
would be read as 0xXX00 efi_char16_t character codes. In order to
correctly extract ASCII characters from the partition name field, it
has to be converted from UTF-16LE to the CPU byte order.
The problem exists on all big endian platforms.
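To make the byte-order issue concrete, here is a small stand-alone sketch;
load_le16() is an illustrative stand-in for the kernel's le16_to_cpu():
#include <stdint.h>
#include <stdio.h>
/* Byte-order independent UTF-16LE load, equivalent in effect to le16_to_cpu(). */
static uint16_t load_le16(const uint8_t *p)
{
	return (uint16_t)p[0] | ((uint16_t)p[1] << 8);
}
int main(void)
{
	const uint8_t on_disk[2] = { 0x41, 0x00 };	/* 'A' as a UTF-16LE code unit */
	/*
	 * The old code read the unit through a native 16-bit type and masked
	 * with 0xff.  On a big-endian CPU that native read yields 0x4100, so
	 * the mask produced '\0' instead of 'A'.  Converting from LE first
	 * works on either byte order:
	 */
	printf("%c\n", load_le16(on_disk) & 0xff);	/* prints 'A' everywhere */
	return 0;
}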
[ mingo: Minor edits. ]
Fixes: eec7ecfede74 ("genhd, efi: add efi partition metadata to hd_structs")
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Nikolai Merinov <n.merinov(a)inango-systems.com>
Signed-off-by: Ard Biesheuvel <ardb(a)kernel.org>
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Link: https://lore.kernel.org/r/20200308080859.21568-29-ardb@kernel.org
Link: https://lore.kernel.org/r/797777312.1324734.1582544319435.JavaMail.zimbra@i…
Signed-off-by: yangerkun <yangerkun(a)huawei.com>
Reviewed-by: Yufen Yu <yuyufen(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/partitions/efi.c | 35 ++++++++++++++++++++++++++---------
block/partitions/efi.h | 2 +-
2 files changed, 27 insertions(+), 10 deletions(-)
diff --git a/block/partitions/efi.c b/block/partitions/efi.c
index 39f70d968754..b9beaa0a9b36 100644
--- a/block/partitions/efi.c
+++ b/block/partitions/efi.c
@@ -670,6 +670,31 @@ static int find_valid_gpt(struct parsed_partitions *state, gpt_header **gpt,
return 0;
}
+/**
+ * utf16_le_to_7bit(): Naively converts a UTF-16LE string to 7-bit ASCII characters
+ * @in: input UTF-16LE string
+ * @size: size of the input string
+ * @out: output string ptr, should be capable to store @size+1 characters
+ *
+ * Description: Converts @size UTF16-LE symbols from @in string to 7-bit
+ * ASCII characters and stores them to @out. Adds trailing zero to @out array.
+ */
+static void utf16_le_to_7bit(const __le16 *in, unsigned int size, u8 *out)
+{
+ unsigned int i = 0;
+
+ out[size] = 0;
+
+ while (i < size) {
+ u8 c = le16_to_cpu(in[i]) & 0xff;
+
+ if (c && !isprint(c))
+ c = '!';
+ out[i] = c;
+ i++;
+ }
+}
+
/**
* efi_partition(struct parsed_partitions *state)
* @state: disk parsed partitions
@@ -706,7 +731,6 @@ int efi_partition(struct parsed_partitions *state)
for (i = 0; i < le32_to_cpu(gpt->num_partition_entries) && i < state->limit-1; i++) {
struct partition_meta_info *info;
- unsigned label_count = 0;
unsigned label_max;
u64 start = le64_to_cpu(ptes[i].starting_lba);
u64 size = le64_to_cpu(ptes[i].ending_lba) -
@@ -727,14 +751,7 @@ int efi_partition(struct parsed_partitions *state)
/* Naively convert UTF16-LE to 7 bits. */
label_max = min(ARRAY_SIZE(info->volname) - 1,
ARRAY_SIZE(ptes[i].partition_name));
- info->volname[label_max] = 0;
- while (label_count < label_max) {
- u8 c = ptes[i].partition_name[label_count] & 0xff;
- if (c && !isprint(c))
- c = '!';
- info->volname[label_count] = c;
- label_count++;
- }
+ utf16_le_to_7bit(ptes[i].partition_name, label_max, info->volname);
state->parts[i + 1].has_info = true;
}
kfree(ptes);
diff --git a/block/partitions/efi.h b/block/partitions/efi.h
index abd0b19288a6..42db2513ecfa 100644
--- a/block/partitions/efi.h
+++ b/block/partitions/efi.h
@@ -102,7 +102,7 @@ typedef struct _gpt_entry {
__le64 starting_lba;
__le64 ending_lba;
gpt_entry_attributes attributes;
- efi_char16_t partition_name[72 / sizeof (efi_char16_t)];
+ __le16 partition_name[72/sizeof(__le16)];
} __packed gpt_entry;
typedef struct _gpt_mbr_record {
--
2.25.1
1
4

14 Nov '20
hulk inclusion
category: feature
feature: digest-lists
---------------------------
The EVM ignore mode works similarly to the metadata modification mode. They
both allow an operation to be performed even if the operation causes
metadata to become invalid.
Currently, evm_reset_status() notifies IMA that an operation modified
metadata only when the metadata modification mode was chosen. With this
patch, evm_reset_status() does the same also when the EVM ignore mode is
selected.
Signed-off-by: Roberto Sassu <roberto.sassu(a)huawei.com>
---
security/integrity/evm/evm_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
index 5155ff4c4ef2..2d3c1670d8d3 100644
--- a/security/integrity/evm/evm_main.c
+++ b/security/integrity/evm/evm_main.c
@@ -570,7 +570,8 @@ static void evm_reset_status(struct inode *inode, int bit)
iint = integrity_iint_find(inode);
if (iint) {
- if (evm_initialized & EVM_ALLOW_METADATA_WRITES)
+ if ((evm_initialized & EVM_ALLOW_METADATA_WRITES) ||
+ evm_ignoremode)
set_bit(bit, &iint->atomic_flags);
iint->evm_status = INTEGRITY_UNKNOWN;
--
2.27.GIT
1
1

[PATCH] tty: make FONTX ioctl use the tty pointer they were actually passed
by Yang Yingliang 05 Nov '20
by Yang Yingliang 05 Nov '20
05 Nov '20
From: Linus Torvalds <torvalds(a)linux-foundation.org>
mainline inclusion
from mainline-v5.10-rc3
commit 90bfdeef83f1d6c696039b6a917190dcbbad3220
category: bugfix
bugzilla: NA
CVE: CVE-2020-25668
--------------------------------
Some of the font tty ioctl's always used the current foreground VC for
their operations. Don't do that then.
This fixes a data race on fg_console.
Side note: both Michael Ellerman and Jiri Slaby point out that all these
ioctls are deprecated, and should probably have been removed long ago,
and everything seems to be using the KDFONTOP ioctl instead.
In fact, Michael points out that it looks like busybox's loadfont
program seems to have switched over to using KDFONTOP exactly _because_
of this bug (ahem.. 12 years ago ;-).
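For reference, a minimal user-space sketch of the deprecated GIO_FONTX path
that this patch reroutes; the device path and buffer sizing below are
illustrative only:
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kd.h>		/* GIO_FONTX, struct consolefontdesc */
int main(void)
{
	struct consolefontdesc cfd = { .charcount = 512, .charheight = 0 };
	char *glyphs = calloc(512, 32);		/* "expanded form": 32 bytes per glyph */
	int fd = open("/dev/tty1", O_RDONLY);	/* any VT device; path is illustrative */
	if (fd < 0 || !glyphs)
		return 1;
	cfd.chardata = glyphs;
	/* After this patch the font read refers to the VT behind fd, not to
	 * whichever console happens to be in the foreground at that moment. */
	if (ioctl(fd, GIO_FONTX, &cfd) == 0)
		printf("%hu glyphs, %hu scan lines each\n",
		       cfd.charcount, cfd.charheight);
	free(glyphs);
	return 0;
}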
Reported-by: Minh Yuan <yuanmingbuaa(a)gmail.com>
Acked-by: Michael Ellerman <mpe(a)ellerman.id.au>
Acked-by: Jiri Slaby <jirislaby(a)kernel.org>
Cc: Greg KH <greg(a)kroah.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Conflicts:
drivers/tty/vt/vt_ioctl.c
[yyl: There is no vt_io_fontreset(), change the vc_cons to vc in do_fontx_ioctl()]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/tty/vt/vt_ioctl.c | 32 +++++++++++++++++---------------
1 file changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
index 6a82030cf1ef..2e959563af53 100644
--- a/drivers/tty/vt/vt_ioctl.c
+++ b/drivers/tty/vt/vt_ioctl.c
@@ -244,7 +244,7 @@ int vt_waitactive(int n)
static inline int
-do_fontx_ioctl(int cmd, struct consolefontdesc __user *user_cfd, int perm, struct console_font_op *op)
+do_fontx_ioctl(struct vc_data *vc, int cmd, struct consolefontdesc __user *user_cfd, int perm, struct console_font_op *op)
{
struct consolefontdesc cfdarg;
int i;
@@ -262,15 +262,16 @@ do_fontx_ioctl(int cmd, struct consolefontdesc __user *user_cfd, int perm, struc
op->height = cfdarg.charheight;
op->charcount = cfdarg.charcount;
op->data = cfdarg.chardata;
- return con_font_op(vc_cons[fg_console].d, op);
- case GIO_FONTX: {
+ return con_font_op(vc, op);
+
+ case GIO_FONTX:
op->op = KD_FONT_OP_GET;
op->flags = KD_FONT_FLAG_OLD;
op->width = 8;
op->height = cfdarg.charheight;
op->charcount = cfdarg.charcount;
op->data = cfdarg.chardata;
- i = con_font_op(vc_cons[fg_console].d, op);
+ i = con_font_op(vc, op);
if (i)
return i;
cfdarg.charheight = op->height;
@@ -278,7 +279,6 @@ do_fontx_ioctl(int cmd, struct consolefontdesc __user *user_cfd, int perm, struc
if (copy_to_user(user_cfd, &cfdarg, sizeof(struct consolefontdesc)))
return -EFAULT;
return 0;
- }
}
return -EINVAL;
}
@@ -924,7 +924,7 @@ int vt_ioctl(struct tty_struct *tty,
op.height = 0;
op.charcount = 256;
op.data = up;
- ret = con_font_op(vc_cons[fg_console].d, &op);
+ ret = con_font_op(vc, &op);
break;
}
@@ -935,7 +935,7 @@ int vt_ioctl(struct tty_struct *tty,
op.height = 32;
op.charcount = 256;
op.data = up;
- ret = con_font_op(vc_cons[fg_console].d, &op);
+ ret = con_font_op(vc, &op);
break;
}
@@ -952,7 +952,7 @@ int vt_ioctl(struct tty_struct *tty,
case PIO_FONTX:
case GIO_FONTX:
- ret = do_fontx_ioctl(cmd, up, perm, &op);
+ ret = do_fontx_ioctl(vc, cmd, up, perm, &op);
break;
case PIO_FONTRESET:
@@ -969,11 +969,11 @@ int vt_ioctl(struct tty_struct *tty,
{
op.op = KD_FONT_OP_SET_DEFAULT;
op.data = NULL;
- ret = con_font_op(vc_cons[fg_console].d, &op);
+ ret = con_font_op(vc, &op);
if (ret)
break;
console_lock();
- con_set_default_unimap(vc_cons[fg_console].d);
+ con_set_default_unimap(vc);
console_unlock();
break;
}
@@ -1100,8 +1100,9 @@ struct compat_consolefontdesc {
};
static inline int
-compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
- int perm, struct console_font_op *op)
+compat_fontx_ioctl(struct vc_data *vc, int cmd,
+ struct compat_consolefontdesc __user *user_cfd,
+ int perm, struct console_font_op *op)
{
struct compat_consolefontdesc cfdarg;
int i;
@@ -1119,7 +1120,8 @@ compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
op->height = cfdarg.charheight;
op->charcount = cfdarg.charcount;
op->data = compat_ptr(cfdarg.chardata);
- return con_font_op(vc_cons[fg_console].d, op);
+ return con_font_op(vc, op);
+
case GIO_FONTX:
op->op = KD_FONT_OP_GET;
op->flags = KD_FONT_FLAG_OLD;
@@ -1127,7 +1129,7 @@ compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
op->height = cfdarg.charheight;
op->charcount = cfdarg.charcount;
op->data = compat_ptr(cfdarg.chardata);
- i = con_font_op(vc_cons[fg_console].d, op);
+ i = con_font_op(vc, op);
if (i)
return i;
cfdarg.charheight = op->height;
@@ -1218,7 +1220,7 @@ long vt_compat_ioctl(struct tty_struct *tty,
*/
case PIO_FONTX:
case GIO_FONTX:
- ret = compat_fontx_ioctl(cmd, up, perm, &op);
+ ret = compat_fontx_ioctl(vc, cmd, up, perm, &op);
break;
case KDFONTOP:
--
2.25.1
1
0
openEuler Summit is a developer gathering organized by the openEuler community; the first in-person openEuler Summit will be held in Beijing on December 24-25.
openEuler Summit broadly invites developers, users, community contributors and software enthusiasts from the operating-system ecosystem to walk through the latest openEuler release, discuss the future technical roadmap, and let technology, ecosystem and business spark off each other.
Open source is an attitude and sharing is a spirit. The Call for Speaker, Call for Sponsor, Call for SIG and Call for Demo are now all open for registration.
We sincerely invite you to sign up to give a talk, present a demo, publish a case study, and take part in building the community.
We look forward to seeing you there.
Fred 李永乐
1
0
From: Jiri Slaby <jslaby(a)suse.cz>
mainline inclusion
from mainline-v5.10-rc2
commit 6ca03f90527e499dd5e32d6522909e2ad390896b
category: bugfix
bugzilla: NA
CVE: CVE-2020-25656
--------------------------------
Use 'strlen' of the string, add one for NUL terminator and simply do
'copy_to_user' instead of the explicit 'for' loop. This makes the
KDGKBSENT case more compact.
The only thing we need to take care of is a NULL 'func_table[i]'. Use
an empty string in that case.
The original check for overflow could never trigger, as the func_buf
strings are always shorter than or equal in size to 'struct kbsentry's.
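As a reminder of what the ioctl hands back, a minimal user-space sketch of
KDGKBSENT; the device path is illustrative and the returned string is whatever
the keymap binds to the requested function key:
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kd.h>		/* KDGKBSENT, struct kbsentry */
int main(void)
{
	struct kbsentry kbs = { .kb_func = 0 };	/* index 0 is F1 in the default keymap */
	int fd = open("/dev/tty1", O_RDONLY);	/* any VT device; path is illustrative */
	if (fd < 0 || ioctl(fd, KDGKBSENT, &kbs) < 0)
		return 1;
	printf("function key string: %s\n", (char *)kbs.kb_string);
	return 0;
}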
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Jiri Slaby <jslaby(a)suse.cz>
Link: https://lore.kernel.org/r/20201019085517.10176-1-jslaby@suse.cz
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/tty/vt/keyboard.c | 28 +++++++++-------------------
1 file changed, 9 insertions(+), 19 deletions(-)
diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
index a7455f8a4235..56315d23f34e 100644
--- a/drivers/tty/vt/keyboard.c
+++ b/drivers/tty/vt/keyboard.c
@@ -1994,9 +1994,7 @@ int vt_do_kdsk_ioctl(int cmd, struct kbentry __user *user_kbe, int perm,
int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
{
struct kbsentry *kbs;
- char *p;
u_char *q;
- u_char __user *up;
int sz, fnw_sz;
int delta;
char *first_free, *fj, *fnw;
@@ -2022,23 +2020,15 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
i = kbs->kb_func;
switch (cmd) {
- case KDGKBSENT:
- sz = sizeof(kbs->kb_string) - 1; /* sz should have been
- a struct member */
- up = user_kdgkb->kb_string;
- p = func_table[i];
- if(p)
- for ( ; *p && sz; p++, sz--)
- if (put_user(*p, up++)) {
- ret = -EFAULT;
- goto reterr;
- }
- if (put_user('\0', up)) {
- ret = -EFAULT;
- goto reterr;
- }
- kfree(kbs);
- return ((p && *p) ? -EOVERFLOW : 0);
+ case KDGKBSENT: {
+ /* size should have been a struct member */
+ unsigned char *from = func_table[i] ? : "";
+
+ ret = copy_to_user(user_kdgkb->kb_string, from,
+ strlen(from) + 1) ? -EFAULT : 0;
+
+ goto reterr;
+ }
case KDSKBSENT:
if (!perm) {
ret = -EPERM;
--
2.25.1
1
1
From: Todd Kjos <tkjos(a)google.com>
stable inclusion
from linux-4.19.153
commit 35cc2facc2a5ff52b9aa03f2dc81dcb000d97da3
CVE: CVE-2020-0423
--------------------------------
commit f3277cbfba763cd2826396521b9296de67cf1bbc upstream.
When releasing a thread's todo list while tearing down
a binder_proc, the following race was possible, which
could result in a use-after-free:
1. Thread 1: enter binder_release_work from binder_thread_release
2. Thread 2: binder_update_ref_for_handle() -> binder_dec_node_ilocked()
3. Thread 2: dec nodeA --> 0 (will free node)
4. Thread 1: ACQ inner_proc_lock
5. Thread 2: block on inner_proc_lock
6. Thread 1: dequeue work (BINDER_WORK_NODE, part of nodeA)
7. Thread 1: REL inner_proc_lock
8. Thread 2: ACQ inner_proc_lock
9. Thread 2: todo list cleanup, but work was already dequeued
10. Thread 2: free node
11. Thread 2: REL inner_proc_lock
12. Thread 1: deref w->type (UAF)
The problem was that for a BINDER_WORK_NODE, the binder_work element
must not be accessed after releasing the inner_proc_lock while
processing the todo list elements since another thread might be
handling a deref on the node containing the binder_work element
leading to the node being freed.
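In condensed form, the reworked loop in the diff below boils down to this
shape (a sketch, not the complete driver code): snapshot everything that will
be needed after the lock is dropped while the lock still guarantees the work
item is alive.
	binder_inner_proc_lock(proc);
	w = binder_dequeue_work_head_ilocked(list);
	wtype = w ? w->type : 0;	/* type copied while *w is guaranteed alive */
	binder_inner_proc_unlock(proc);
	if (!w)
		return;
	switch (wtype) {
	/* BINDER_WORK_NODE entries are simply skipped here, so a node freed by
	 * a concurrent deref is never dereferenced after the unlock. */
	...
	}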
Signed-off-by: Todd Kjos <tkjos(a)google.com>
Link: https://lore.kernel.org/r/20201009232455.4054810-1-tkjos@google.com
Cc: <stable(a)vger.kernel.org> # 4.14, 4.19, 5.4, 5.8
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/android/binder.c | 35 ++++++++++-------------------------
1 file changed, 10 insertions(+), 25 deletions(-)
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index cf4367135a00..18c32637047f 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -285,7 +285,7 @@ struct binder_device {
struct binder_work {
struct list_head entry;
- enum {
+ enum binder_work_type {
BINDER_WORK_TRANSACTION = 1,
BINDER_WORK_TRANSACTION_COMPLETE,
BINDER_WORK_RETURN_ERROR,
@@ -895,27 +895,6 @@ static struct binder_work *binder_dequeue_work_head_ilocked(
return w;
}
-/**
- * binder_dequeue_work_head() - Dequeues the item at head of list
- * @proc: binder_proc associated with list
- * @list: list to dequeue head
- *
- * Removes the head of the list if there are items on the list
- *
- * Return: pointer dequeued binder_work, NULL if list was empty
- */
-static struct binder_work *binder_dequeue_work_head(
- struct binder_proc *proc,
- struct list_head *list)
-{
- struct binder_work *w;
-
- binder_inner_proc_lock(proc);
- w = binder_dequeue_work_head_ilocked(list);
- binder_inner_proc_unlock(proc);
- return w;
-}
-
static void
binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
static void binder_free_thread(struct binder_thread *thread);
@@ -4229,13 +4208,17 @@ static void binder_release_work(struct binder_proc *proc,
struct list_head *list)
{
struct binder_work *w;
+ enum binder_work_type wtype;
while (1) {
- w = binder_dequeue_work_head(proc, list);
+ binder_inner_proc_lock(proc);
+ w = binder_dequeue_work_head_ilocked(list);
+ wtype = w ? w->type : 0;
+ binder_inner_proc_unlock(proc);
if (!w)
return;
- switch (w->type) {
+ switch (wtype) {
case BINDER_WORK_TRANSACTION: {
struct binder_transaction *t;
@@ -4269,9 +4252,11 @@ static void binder_release_work(struct binder_proc *proc,
kfree(death);
binder_stats_deleted(BINDER_STAT_DEATH);
} break;
+ case BINDER_WORK_NODE:
+ break;
default:
pr_err("unexpected work type, %d, not freed\n",
- w->type);
+ wtype);
break;
}
}
--
2.25.1
1
0

[PATCH 001/110] mm: fix double page fault on arm64 if PTE_AF is cleared
by Yang Yingliang 04 Nov '20
by Yang Yingliang 04 Nov '20
04 Nov '20
From: Jia He <justin.he(a)arm.com>
stable inclusion
from linux-4.19.149
commit 8579a0440381353e0a71dd6a4d4371be8457eac4
--------------------------------
[ Upstream commit 83d116c53058d505ddef051e90ab27f57015b025 ]
When we tested the pmdk unit test [1] vmmalloc_fork TEST3 on an arm64 guest,
there was a double page fault in __copy_from_user_inatomic() of cow_user_page().
To reproduce the bug, run the following command after everything is deployed:
make -C src/test/vmmalloc_fork/ TEST_TIME=60m check
The call trace below is from arm64 do_page_fault(), for debugging purposes:
[ 110.016195] Call trace:
[ 110.016826] do_page_fault+0x5a4/0x690
[ 110.017812] do_mem_abort+0x50/0xb0
[ 110.018726] el1_da+0x20/0xc4
[ 110.019492] __arch_copy_from_user+0x180/0x280
[ 110.020646] do_wp_page+0xb0/0x860
[ 110.021517] __handle_mm_fault+0x994/0x1338
[ 110.022606] handle_mm_fault+0xe8/0x180
[ 110.023584] do_page_fault+0x240/0x690
[ 110.024535] do_mem_abort+0x50/0xb0
[ 110.025423] el0_da+0x20/0x24
The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
[ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003,
pmd=000000023d4b3003, pte=360000298607bd3
As told by Catalin: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with zeroed page after fork() + CoW for pfn mappings. we
don't always have a hardware-managed access flag on arm64."
This patch fixes it by calling pte_mkyoung(). Also, the parameter list is
changed because vmf should be passed to cow_user_page().
Add a WARN_ON_ONCE when __copy_from_user_inatomic() returns error
in case there can be some obscure use-case (by Kirill).
[1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
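For context, an architecture whose CPUs may lack a hardware-managed Access
Flag can override the default roughly like this (a sketch loosely modelled on
the arm64 override; cpu_has_hw_af() is arm64's spelling, other architectures
would use their own helper):
/* sketch: arch/<arch>/include/asm/pgtable.h */
#define arch_faults_on_old_pte arch_faults_on_old_pte
static inline bool arch_faults_on_old_pte(void)
{
	/* Only CPUs that cannot set the Access Flag in hardware take a fault
	 * on an old pte, so only they need the pte_mkyoung() step above. */
	return !cpu_has_hw_af();
}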
Signed-off-by: Jia He <justin.he(a)arm.com>
Reported-by: Yibo Cai <Yibo.Cai(a)arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas(a)arm.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/memory.c | 104 ++++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 89 insertions(+), 15 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index c65d3993ac63..c4c667c0b89d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -117,6 +117,18 @@ int randomize_va_space __read_mostly =
2;
#endif
+#ifndef arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+ /*
+ * Those arches which don't have hw access flag feature need to
+ * implement their own helper. By default, "true" means pagefault
+ * will be hit on old pte.
+ */
+ return true;
+}
+#endif
+
static int __init disable_randmaps(char *s)
{
randomize_va_space = 0;
@@ -2089,32 +2101,82 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
return same;
}
-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline bool cow_user_page(struct page *dst, struct page *src,
+ struct vm_fault *vmf)
{
+ bool ret;
+ void *kaddr;
+ void __user *uaddr;
+ bool force_mkyoung;
+ struct vm_area_struct *vma = vmf->vma;
+ struct mm_struct *mm = vma->vm_mm;
+ unsigned long addr = vmf->address;
+
debug_dma_assert_idle(src);
+ if (likely(src)) {
+ copy_user_highpage(dst, src, addr, vma);
+ return true;
+ }
+
/*
* If the source page was a PFN mapping, we don't have
* a "struct page" for it. We do a best-effort copy by
* just copying from the original user address. If that
* fails, we just zero-fill it. Live with it.
*/
- if (unlikely(!src)) {
- void *kaddr = kmap_atomic(dst);
- void __user *uaddr = (void __user *)(va & PAGE_MASK);
+ kaddr = kmap_atomic(dst);
+ uaddr = (void __user *)(addr & PAGE_MASK);
+
+ /*
+ * On architectures with software "accessed" bits, we would
+ * take a double page fault, so mark it accessed here.
+ */
+ force_mkyoung = arch_faults_on_old_pte() && !pte_young(vmf->orig_pte);
+ if (force_mkyoung) {
+ pte_t entry;
+
+ vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
+ if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+ /*
+ * Other thread has already handled the fault
+ * and we don't need to do anything. If it's
+ * not the case, the fault will be triggered
+ * again on the same address.
+ */
+ ret = false;
+ goto pte_unlock;
+ }
+ entry = pte_mkyoung(vmf->orig_pte);
+ if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
+ update_mmu_cache(vma, addr, vmf->pte);
+ }
+
+ /*
+ * This really shouldn't fail, because the page is there
+ * in the page tables. But it might just be unreadable,
+ * in which case we just give up and fill the result with
+ * zeroes.
+ */
+ if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
/*
- * This really shouldn't fail, because the page is there
- * in the page tables. But it might just be unreadable,
- * in which case we just give up and fill the result with
- * zeroes.
+ * Give a warn in case there can be some obscure
+ * use-case
*/
- if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
- clear_page(kaddr);
- kunmap_atomic(kaddr);
- flush_dcache_page(dst);
- } else
- copy_user_highpage(dst, src, va, vma);
+ WARN_ON_ONCE(1);
+ clear_page(kaddr);
+ }
+
+ ret = true;
+
+pte_unlock:
+ if (force_mkyoung)
+ pte_unmap_unlock(vmf->pte, vmf->ptl);
+ kunmap_atomic(kaddr);
+ flush_dcache_page(dst);
+
+ return ret;
}
static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
@@ -2268,7 +2330,19 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
vmf->address);
if (!new_page)
goto oom;
- cow_user_page(new_page, old_page, vmf->address, vma);
+
+ if (!cow_user_page(new_page, old_page, vmf)) {
+ /*
+ * COW failed, if the fault was solved by other,
+ * it's fine. If not, userspace would re-fault on
+ * the same address and we will handle the fault
+ * from the second attempt.
+ */
+ put_page(new_page);
+ if (old_page)
+ put_page(old_page);
+ return 0;
+ }
}
if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
--
2.25.1
1
109
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
------------------------------
remove unnecessary __GENKSYMS__ define
Fixes: fc7b0914a6fa ("iscsi: use dynamic single thread workqueue...")
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/scsi/libiscsi.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
index a5f8094e3334..e03b9244499e 100644
--- a/include/scsi/libiscsi.h
+++ b/include/scsi/libiscsi.h
@@ -253,9 +253,7 @@ struct iscsi_conn {
/* custom statistics */
uint32_t eh_abort_cnt;
uint32_t fmr_unalign_cnt;
-#ifndef __GENKSYMS__
- int intimate_cpu; /* offset:588, KABI is ok */
-#endif
+ int intimate_cpu;
};
struct iscsi_pool {
--
2.25.1
1
0
From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
----------------------------
One of the ILP32 patches renamed 'compat_user_mode' and
'compat_thumb_mode' to 'a32_user_mode' and 'a32_thumb_mode'. But these
two macros are used by some open-source userspace applications. To keep
compatibility, we redefine these two macros.
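A typical consumer that this keeps compiling looks roughly like the fragment
below (handle_a32_regs()/handle_a64_regs() are hypothetical callers):
	if (compat_user_mode(regs))	/* task trapped from AArch32 EL0 */
		handle_a32_regs(regs);
	else
		handle_a64_regs(regs);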
Fixes: 23b2f00 ("arm64: rename functions that reference compat term")
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/include/asm/ptrace.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 3fafb1af3d66..f1662df255ca 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -213,6 +213,8 @@ static inline void forget_syscall(struct pt_regs *regs)
#define a32_thumb_mode(regs) (0)
#endif
+#define compat_thumb_mode(regs) a32_thumb_mode(regs)
+
#define user_mode(regs) \
(((regs)->pstate & PSR_MODE_MASK) == PSR_MODE_EL0t)
@@ -220,6 +222,8 @@ static inline void forget_syscall(struct pt_regs *regs)
(((regs)->pstate & (PSR_MODE32_BIT | PSR_MODE_MASK)) == \
(PSR_MODE32_BIT | PSR_MODE_EL0t))
+#define compat_user_mode(regs) a32_user_mode(regs)
+
#define processor_mode(regs) \
((regs)->pstate & PSR_MODE_MASK)
--
2.25.1
1
0

[PATCH 1/7] xen/events: add a proper barrier to 2-level uevent unmasking
by Yang Yingliang 02 Nov '20
by Yang Yingliang 02 Nov '20
02 Nov '20
From: Juergen Gross <jgross(a)suse.com>
mainline inclusion
from mainline-v5.10
commit 4d3fe31bd993ef504350989786858aefdb877daa
category: bugfix
bugzilla: NA
CVE: CVE-2020-27673
--------------------------------
A follow-up patch will require a certain write to happen before an event
channel is unmasked.
While the memory barrier is not strictly necessary for all the callers,
the main one will need it. In order to avoid an extra memory barrier
when using fifo event channels, mandate evtchn_unmask() to provide
write ordering.
The 2-level event handling unmask operation is missing an appropriate
barrier, so add it. Fifo event channels are fine in this regard due to
using sync_cmpxchg().
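The requirement being mandated is the usual publish-then-unmask ordering;
schematically (a sketch with illustrative names, not the Xen code itself):
	chn->state = READY;	/* write the handler will depend on          */
	smp_wmb();		/* must be globally visible before unmasking */
	evtchn_unmask(port);	/* may allow the event to fire immediately   */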
This is part of XSA-332.
Cc: stable(a)vger.kernel.org
Suggested-by: Julien Grall <julien(a)xen.org>
Signed-off-by: Juergen Gross <jgross(a)suse.com>
Reviewed-by: Julien Grall <jgrall(a)amazon.com>
Reviewed-by: Wei Liu <wl(a)xen.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/xen/events/events_2l.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index 5478c761cd41..f026624898e7 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -91,6 +91,8 @@ static void evtchn_2l_unmask(unsigned port)
BUG_ON(!irqs_disabled());
+ smp_wmb(); /* All writes before unmask must be visible. */
+
if (unlikely((cpu != cpu_from_evtchn(port))))
do_hypercall = 1;
else {
--
2.25.1
1
6
From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
------------------------------
Firmware may not trigger the SDEI event at the required frequency. The SDEI
event may be triggered too soon, which causes a false hardlockup report in the
kernel. Check the time stamp in sdei_watchdog_callback() and skip the
hardlockup check if it is invoked too soon.
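As a rough worked example of the gating arithmetic added below (assuming the
default watchdog_thresh of 10 seconds):
	/* sketch of the check in the hunk below */
	bool too_soon = delta < watchdog_thresh * (u64)NSEC_PER_SEC * 4 / 5;
	/* watchdog_thresh = 10 s  ->  minimum accepted gap = 8 s; an event
	 * arriving sooner is reported as a firmware bug and skipped rather
	 * than being treated as a hardlockup. */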
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/kernel/watchdog_sdei.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/arch/arm64/kernel/watchdog_sdei.c b/arch/arm64/kernel/watchdog_sdei.c
index 05b9a9a223a7..aa980b090598 100644
--- a/arch/arm64/kernel/watchdog_sdei.c
+++ b/arch/arm64/kernel/watchdog_sdei.c
@@ -23,6 +23,7 @@
static int sdei_watchdog_event_num;
static bool disable_sdei_nmi_watchdog;
static bool sdei_watchdog_registered;
+static DEFINE_PER_CPU(ktime_t, last_check_time);
int watchdog_sdei_enable(unsigned int cpu)
{
@@ -35,6 +36,7 @@ int watchdog_sdei_enable(unsigned int cpu)
refresh_hld_last_timestamp();
#endif
+ __this_cpu_write(last_check_time, ktime_get_mono_fast_ns());
sdei_api_set_secure_timer_period(watchdog_thresh);
ret = sdei_api_event_enable(sdei_watchdog_event_num);
@@ -63,6 +65,23 @@ void watchdog_sdei_disable(unsigned int cpu)
static int sdei_watchdog_callback(u32 event,
struct pt_regs *regs, void *arg)
{
+ ktime_t delta, now = ktime_get_mono_fast_ns();
+
+ delta = now - __this_cpu_read(last_check_time);
+ __this_cpu_write(last_check_time, now);
+
+ /*
+ * Set delta to 4/5 of the actual watchdog threshold period so the
+ * hrtimer is guaranteed to fire at least once within the real
+ * watchdog threshold.
+ */
+ if (delta < watchdog_thresh * (u64)NSEC_PER_SEC * 4 / 5) {
+ pr_err(FW_BUG "SDEI Watchdog event triggered too soon, "
+ "time to last check:%lld ns\n", delta);
+ WARN_ON(1);
+ return 0;
+ }
+
watchdog_hardlockup_check(regs);
return 0;
--
2.25.1
1
0
Abhishek Pandit-Subedi (1):
Bluetooth: Only mark socket zapped after unlocking
Adam Goode (1):
media: uvcvideo: Ensure all probed info is returned to v4l2
Aditya Pakki (1):
media: st-delta: Fix reference count leak in delta_run_work
Adrian Hunter (1):
perf intel-pt: Fix "context_switch event has no tid" error
Al Grant (1):
perf: correct SNOOPX field offset
Alex Williamson (1):
vfio/pci: Clear token on bypass registration failure
Alexander Aring (1):
fs: dlm: fix configfs memory leak
Athira Rajeev (1):
powerpc/perf: Exclude pmc5/6 from the irrelevant PMU group constraints
Brooke Basile (1):
ath9k: hif_usb: fix race condition between usb_get_urb() and
usb_kill_anchored_urbs()
Can Guo (1):
scsi: ufs: ufs-qcom: Fix race conditions caused by
ufs_qcom_testbus_config()
Chris Chiu (1):
rtl8xxxu: prevent potential memory leak
Christian Eggers (1):
eeprom: at25: set minimum read/write access stride to 1
Christoph Hellwig (1):
PM: hibernate: remove the bogus call to get_gendisk() in
software_resume()
Claudiu Beznea (1):
clk: at91: clk-main: update key before writing AT91_CKGR_MOR
Colin Ian King (1):
IB/rdmavt: Fix sizeof mismatch
Cong Wang (1):
ip_gre: set dev->hard_header_len and dev->needed_headroom properly
Cristian Ciocaltea (1):
ARM: dts: owl-s500: Fix incorrect PPI interrupt specifiers
Dan Aloni (1):
svcrdma: fix bounce buffers for unaligned offsets and multiple pages
Dan Carpenter (3):
rpmsg: smd: Fix a kobj leak in in qcom_smd_parse_edge()
Input: imx6ul_tsc - clean up some errors in imx6ul_tsc_resume()
memory: omap-gpmc: Fix a couple off by ones
Daniel Thompson (1):
kdb: Fix pager search for multi-line strings
Darrick J. Wong (2):
ext4: limit entries returned when counting fsmap records
xfs: make sure the rt allocator doesn't run off the end
Dinghao Liu (7):
watchdog: Fix memleak in watchdog_cdev_register
watchdog: Use put_device on error
media: vsp1: Fix runtime PM imbalance on error
media: platform: s3c-camif: Fix runtime PM imbalance on error
media: platform: sti: hva: Fix runtime PM imbalance on error
media: bdisp: Fix runtime PM imbalance on error
media: venus: core: Fix runtime PM imbalance in venus_probe
Dirk Behme (1):
i2c: rcar: Auto select RESET_CONTROLLER
Doug Horn (1):
Fix use after free in get_capset_info callback.
Eli Billauer (1):
usb: core: Solve race condition in anchor cleanup functions
Eric Biggers (1):
reiserfs: only call unlock_new_inode() if I_NEW
Finn Thain (2):
powerpc/tau: Check processor type before enabling TAU interrupt
powerpc/tau: Disable TAU between measurements
Francesco Ruggeri (1):
netfilter: conntrack: connection timeout after re-register
Greg Kroah-Hartman (1):
Linux 4.19.154
Guenter Roeck (1):
watchdog: sp5100: Fix definition of EFCH_PM_DECODEEN3
Hamish Martin (1):
usb: ohci: Default to per-port over-current protection
Hans de Goede (1):
i2c: core: Restore acpi_walk_dep_device_list() getting called after
registering the ACPI i2c devs
Hauke Mehrtens (1):
pwm: img: Fix null pointer access in probe
Horia Geantă (1):
ARM: dts: imx6sl: fix rng node
Jamie Iles (1):
f2fs: wait for sysfs kobject removal before freeing f2fs_sb_info
Jan Kara (3):
udf: Limit sparing table size
udf: Avoid accessing uninitialized data on failed inode read
reiserfs: Fix memory leak in reiserfs_parse_options()
Jason Gunthorpe (2):
RDMA/cma: Remove dead code for kernel rdmacm multicast
RDMA/cma: Consolidate the destruction of a cma_multicast in one place
Jassi Brar (1):
mailbox: avoid timer start from callback
Jernej Skrabec (1):
ARM: dts: sun8i: r40: bananapi-m2-ultra: Fix dcdc1 regulator
Jing Xiangfeng (3):
rapidio: fix the missed put_device() for rio_mport_add_riodev
scsi: mvumi: Fix error return in mvumi_io_attach()
scsi: ibmvfc: Fix error return in ibmvfc_probe()
Joakim Zhang (1):
can: flexcan: flexcan_chip_stop(): add error handling and propagate
error value
Johan Hovold (1):
USB: cdc-acm: handle broken union descriptors
Juri Lelli (1):
sched/features: Fix !CONFIG_JUMP_LABEL case
Kaige Li (1):
NTB: hw: amd: fix an issue about leak system resources
Kajol Jain (1):
powerpc/perf/hv-gpci: Fix starting index value
Keita Suzuki (2):
misc: rtsx: Fix memory leak in rtsx_pci_probe
brcmsmac: fix memory leak in wlc_phy_attach_lcnphy
Krzysztof Kozlowski (5):
Input: ep93xx_keypad - fix handling of platform_get_irq() error
Input: omap4-keypad - fix handling of platform_get_irq() error
Input: twl4030_keypad - fix handling of platform_get_irq() error
Input: sun4i-ps2 - fix handling of platform_get_irq() error
memory: fsl-corenet-cf: Fix handling of platform_get_irq() error
Leon Romanovsky (1):
overflow: Include header file with SIZE_MAX declaration
Lijun Ou (1):
RDMA/hns: Set the unsupported wr opcode
Lorenzo Colitti (1):
usb: gadget: f_ncm: allow using NCM in SuperSpeed Plus gadgets.
Mark Tomlinson (1):
PCI: iproc: Set affinity mask on MSI interrupts
Martijn de Gouw (1):
SUNRPC: fix copying of multiple pages in gss_read_proxy_verf()
Matthew Wilcox (Oracle) (1):
ramfs: fix nommu mmap with gaps in the page cache
Mauro Carvalho Chehab (2):
media: saa7134: avoid a shift overflow
usb: dwc3: simple: add support for Hikey 970
Michal Simek (1):
arm64: dts: zynqmp: Remove additional compatible string for i2c IPs
Navid Emamdoost (1):
clk: bcm2835: add missing release if devm_clk_hw_register fails
Nicholas Piggin (1):
powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm
Nilesh Javali (2):
scsi: qedi: Protect active command list to avoid list corruption
scsi: qedi: Fix list_del corruption while removing active I/O
Oliver Neukum (2):
media: ati_remote: sanity check for both endpoints
USB: cdc-wdm: Make wdm_flush() interruptible and add wdm_fsync().
Pablo Neira Ayuso (1):
netfilter: nf_fwd_netdev: clear timestamp in forwarding path
Pali Rohár (1):
mmc: sdio: Check for CISTPL_VERS_1 buffer size
Pavel Machek (2):
crypto: ccp - fix error handling
media: firewire: fix memory leak
Peilin Ye (1):
ipvs: Fix uninit-value in do_ip_vs_set_ctl()
Peng Fan (1):
tty: serial: fsl_lpuart: fix lpuart32_poll_get_char
Qiushi Wu (4):
media: sti: Fix reference count leaks
media: exynos4-is: Fix several reference count leaks due to
pm_runtime_get_sync
media: exynos4-is: Fix a reference count leak due to
pm_runtime_get_sync
media: exynos4-is: Fix a reference count leak
Robert Hoo (1):
KVM: x86: emulating RDPID failure shall return #UD rather than #GP
Roman Bolshakov (1):
scsi: target: core: Add CONTROL field for trace events
Rustam Kovhaev (1):
ntfs: add check for mft record size in superblock
Sherry Sun (2):
mic: vop: copy data to kernel space then write to io memory
misc: vop: add round_up(x,4) for vring_size to avoid kernel panic
Souptick Joarder (1):
rapidio: fix error handling path
Srikar Dronamraju (1):
cpufreq: powernv: Fix frame-size-overflow in
powernv_cpufreq_reboot_notifier
Stephan Gerhold (2):
arm64: dts: qcom: pm8916: Remove invalid reg size from wcd_codec
arm64: dts: qcom: msm8916: Fix MDP/DSI interrupts
Stephen Boyd (1):
clk: rockchip: Initialize hw to error to avoid undefined behavior
Tetsuo Handa (2):
block: ratelimit handle_bad_sector() message
mwifiex: don't call del_timer_sync() on uninitialized timer
Thomas Pedersen (1):
mac80211: handle lack of sband->bitrates in rates
Tobias Jordan (1):
lib/crc32.c: fix trivial typo in preprocessor condition
Tong Zhang (1):
tty: ipwireless: fix error handling
Valentin Vidic (1):
net: korina: cast KSEG0 address to pointer in kfree
Vasant Hegde (1):
powerpc/powernv/dump: Fix race while processing OPAL dump
Vincent Mailhol (1):
usb: cdc-acm: add quirk to blacklist ETAS ES58X devices
Wang Yufen (1):
brcm80211: fix possible memleak in brcmf_proto_msgbuf_attach
Weihang Li (1):
RDMA/hns: Fix missing sq_sig_type when querying QP
Xiaolong Huang (1):
media: media/pci: prevent memory leak in bttv_probe
Xiaoyang Xu (1):
vfio iommu type1: Fix memory leak in vfio_iommu_type1_pin_pages
YueHaibing (2):
Input: stmfts - fix a & vs && typo
memory: omap-gpmc: Fix build error without CONFIG_OF
Zekun Shen (1):
ath10k: check idx validity in __ath10k_htt_rx_ring_fill_n()
Zqiang (1):
usb: gadget: function: printer: fix use-after-free in __lock_acquire
zhenwei pi (1):
nvmet: fix uninitialized work for zero kato
Makefile | 2 +-
arch/arm/boot/dts/imx6sl.dtsi | 2 +
arch/arm/boot/dts/owl-s500.dtsi | 6 +-
.../boot/dts/sun8i-r40-bananapi-m2-ultra.dts | 10 +-
arch/arm64/boot/dts/qcom/msm8916.dtsi | 4 +-
arch/arm64/boot/dts/qcom/pm8916.dtsi | 2 +-
arch/arm64/boot/dts/xilinx/zynqmp.dtsi | 4 +-
arch/powerpc/include/asm/tlb.h | 13 ---
arch/powerpc/kernel/tau_6xx.c | 96 +++++++------------
arch/powerpc/mm/tlb-radix.c | 23 +++--
arch/powerpc/perf/hv-gpci-requests.h | 6 +-
arch/powerpc/perf/isa207-common.c | 10 ++
arch/powerpc/platforms/Kconfig | 14 +--
arch/powerpc/platforms/powernv/opal-dump.c | 41 +++++---
arch/x86/kvm/emulate.c | 2 +-
block/blk-core.c | 9 +-
drivers/clk/at91/clk-main.c | 11 ++-
drivers/clk/bcm/clk-bcm2835.c | 4 +-
drivers/clk/rockchip/clk-half-divider.c | 2 +-
drivers/cpufreq/powernv-cpufreq.c | 9 +-
drivers/crypto/ccp/ccp-ops.c | 2 +-
drivers/gpu/drm/virtio/virtgpu_kms.c | 2 +
drivers/gpu/drm/virtio/virtgpu_vq.c | 10 +-
drivers/i2c/busses/Kconfig | 1 +
drivers/i2c/i2c-core-acpi.c | 11 ++-
drivers/infiniband/core/cma.c | 84 +++++++---------
drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 1 -
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 1 +
drivers/infiniband/sw/rdmavt/vt.c | 4 +-
drivers/input/keyboard/ep93xx_keypad.c | 4 +-
drivers/input/keyboard/omap4-keypad.c | 6 +-
drivers/input/keyboard/twl4030_keypad.c | 8 +-
drivers/input/serio/sun4i-ps2.c | 9 +-
drivers/input/touchscreen/imx6ul_tsc.c | 27 +++---
drivers/input/touchscreen/stmfts.c | 2 +-
drivers/mailbox/mailbox.c | 12 ++-
drivers/media/firewire/firedtv-fw.c | 6 +-
drivers/media/pci/bt8xx/bttv-driver.c | 13 ++-
drivers/media/pci/saa7134/saa7134-tvaudio.c | 3 +-
drivers/media/platform/exynos4-is/fimc-isp.c | 4 +-
drivers/media/platform/exynos4-is/fimc-lite.c | 2 +-
drivers/media/platform/exynos4-is/media-dev.c | 4 +-
drivers/media/platform/exynos4-is/mipi-csis.c | 4 +-
drivers/media/platform/qcom/venus/core.c | 5 +-
drivers/media/platform/s3c-camif/camif-core.c | 5 +-
drivers/media/platform/sti/bdisp/bdisp-v4l2.c | 3 +-
drivers/media/platform/sti/delta/delta-v4l2.c | 4 +-
drivers/media/platform/sti/hva/hva-hw.c | 4 +-
drivers/media/platform/vsp1/vsp1_drv.c | 11 ++-
drivers/media/rc/ati_remote.c | 4 +
drivers/media/usb/uvc/uvc_v4l2.c | 30 ++++++
drivers/memory/fsl-corenet-cf.c | 6 +-
drivers/memory/omap-gpmc.c | 8 +-
drivers/misc/cardreader/rtsx_pcr.c | 4 +-
drivers/misc/eeprom/at25.c | 2 +-
drivers/misc/mic/vop/vop_main.c | 2 +-
drivers/misc/mic/vop/vop_vringh.c | 24 +++--
drivers/mmc/core/sdio_cis.c | 3 +
drivers/net/can/flexcan.c | 34 +++++--
drivers/net/ethernet/korina.c | 4 +-
drivers/net/wireless/ath/ath10k/htt_rx.c | 8 ++
drivers/net/wireless/ath/ath9k/hif_usb.c | 19 ++++
.../broadcom/brcm80211/brcmfmac/msgbuf.c | 2 +
.../broadcom/brcm80211/brcmsmac/phy/phy_lcn.c | 4 +-
drivers/net/wireless/marvell/mwifiex/usb.c | 3 +-
.../wireless/realtek/rtl8xxxu/rtl8xxxu_core.c | 10 +-
drivers/ntb/hw/amd/ntb_hw_amd.c | 1 +
drivers/nvme/target/core.c | 3 +-
drivers/pci/controller/pcie-iproc-msi.c | 13 ++-
drivers/pwm/pwm-img.c | 3 +-
drivers/rapidio/devices/rio_mport_cdev.c | 18 ++--
drivers/rpmsg/qcom_smd.c | 32 +++++--
drivers/scsi/ibmvscsi/ibmvfc.c | 1 +
drivers/scsi/mvumi.c | 1 +
drivers/scsi/qedi/qedi_fw.c | 23 ++++-
drivers/scsi/qedi/qedi_iscsi.c | 2 +
drivers/scsi/ufs/ufs-qcom.c | 5 -
drivers/tty/ipwireless/network.c | 4 +-
drivers/tty/ipwireless/tty.c | 2 +-
drivers/tty/serial/fsl_lpuart.c | 2 +-
drivers/usb/class/cdc-acm.c | 23 +++++
drivers/usb/class/cdc-wdm.c | 72 ++++++++++----
drivers/usb/core/urb.c | 89 ++++++++++-------
drivers/usb/dwc3/dwc3-of-simple.c | 1 +
drivers/usb/gadget/function/f_ncm.c | 2 +-
drivers/usb/gadget/function/f_printer.c | 16 +++-
drivers/usb/host/ohci-hcd.c | 16 ++--
drivers/vfio/pci/vfio_pci_intrs.c | 4 +-
drivers/vfio/vfio_iommu_type1.c | 3 +-
drivers/watchdog/sp5100_tco.h | 2 +-
drivers/watchdog/watchdog_dev.c | 6 +-
fs/dlm/config.c | 3 +
fs/ext4/fsmap.c | 3 +
fs/f2fs/sysfs.c | 1 +
fs/ntfs/inode.c | 6 ++
fs/ramfs/file-nommu.c | 2 +-
fs/reiserfs/inode.c | 3 +-
fs/reiserfs/super.c | 8 +-
fs/udf/inode.c | 25 ++---
fs/udf/super.c | 6 ++
fs/xfs/xfs_rtalloc.c | 11 +++
include/linux/overflow.h | 1 +
include/scsi/scsi_common.h | 7 ++
include/trace/events/target.h | 12 +--
include/uapi/linux/perf_event.h | 2 +-
kernel/debug/kdb/kdb_io.c | 8 +-
kernel/power/hibernate.c | 11 ---
kernel/sched/core.c | 2 +-
kernel/sched/sched.h | 13 ++-
lib/crc32.c | 2 +-
net/bluetooth/l2cap_sock.c | 7 +-
net/ipv4/ip_gre.c | 15 ++-
net/mac80211/cfg.c | 3 +-
net/mac80211/sta_info.c | 4 +
net/netfilter/ipvs/ip_vs_ctl.c | 7 +-
net/netfilter/nf_conntrack_proto_tcp.c | 19 ++--
net/netfilter/nf_dup_netdev.c | 1 +
net/netfilter/nft_fwd_netdev.c | 1 +
net/sunrpc/auth_gss/svcauth_gss.c | 27 ++++--
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 3 +-
samples/mic/mpssd/mpssd.c | 4 +-
tools/perf/util/intel-pt.c | 8 +-
122 files changed, 808 insertions(+), 450 deletions(-)
--
2.25.1
1
119
Alex Dewar (1):
VMCI: check return value of get_user_pages_fast() for errors
Arnd Bergmann (1):
mtd: lpddr: fix excessive stack usage with clang
Artem Savkov (1):
pty: do tty_flip_buffer_push without port->lock in pty_write
Arvind Sankar (1):
x86/fpu: Allow multiple bits in clearcpuid= parameter
Bryan O'Donoghue (1):
wcn36xx: Fix reported 802.11n rx_highest rate wcn3660/wcn3680
Christophe JAILLET (5):
crypto: ixp4xx - Fix the size used in a 'dma_free_coherent()' call
ath10k: Fix the size used in a 'dma_free_coherent()' call in an error
handling path
mwifiex: Do not use GFP_KERNEL in atomic context
staging: rtl8192u: Do not use GFP_KERNEL in atomic context
scsi: qla4xxx: Fix an error handling path in
'qla4xxx_get_host_stats()'
Colin Ian King (3):
x86/events/amd/iommu: Fix sizeof mismatch
video: fbdev: vga16fb: fix setting of pixclock because a pass-by-value
error
qtnfmac: fix resource leaks on unsupported iftype error return path
Cong Wang (1):
tipc: fix the skb_unshare() in tipc_buf_append()
Dan Carpenter (8):
ALSA: bebob: potential info leak in hwdep_read()
cifs: remove bogus debug code
ath6kl: prevent potential array overflow in ath6kl_add_new_sta()
ath9k: Fix potential out of bounds in ath9k_htc_txcompletion_cb()
HID: roccat: add bounds checking in kone_sysfs_write_settings()
ath6kl: wmi: prevent a shift wrapping bug in
ath6kl_wmi_delete_pstream_cmd()
mfd: sm501: Fix leaks in probe()
scsi: be2iscsi: Fix a theoretical leak in beiscsi_create_eqs()
Darrick J. Wong (2):
xfs: limit entries returned when counting fsmap records
xfs: fix high key handling in the rt allocator's query_range function
David Ahern (1):
ipv4: Restore flowi4_oif update before call to xfrm_lookup_route
David Wilder (2):
ibmveth: Switch order of ibmveth_helper calls.
ibmveth: Identify ingress large send packets.
Davide Caratti (1):
net/sched: act_tunnel_key: fix OOB write in case of IPv6 ERSPAN
tunnels
Defang Bo (1):
nfc: Ensure presence of NFC_ATTR_FIRMWARE_NAME attribute in
nfc_genl_fw_download()
Dinghao Liu (4):
EDAC/i5100: Fix error handling order in i5100_init_one()
media: omap3isp: Fix memleak in isp_probe
media: mx2_emmaprp: Fix memleak in emmaprp_probe
video: fbdev: radeon: Fix memleak in radeonfb_pci_register
Dmitry Torokhov (1):
HID: hid-input: fix stylus battery reporting
Emmanuel Grumbach (1):
iwlwifi: mvm: split a print to avoid a WARNING in ROC
Eran Ben Elisha (1):
net/mlx5: Don't call timecounter cyc2time directly from 1PPS flow
Eric Dumazet (2):
icmp: randomize the global rate limiter
quota: clear padding in v2r1_mem2diskdqb()
Finn Thain (3):
powerpc/tau: Use appropriate temperature sample interval
powerpc/tau: Convert from timer to workqueue
powerpc/tau: Remove duplicated set_thresholds() call
Greg Kroah-Hartman (1):
Linux 4.19.153
Guenter Roeck (1):
hwmon: (pmbus/max34440) Fix status register reads for MAX344{51,60,61}
Guillaume Tucker (1):
ARM: 9007/1: l2c: fix prefetch bits init in L2X0_AUX_CTRL using DT
values
Hans de Goede (2):
pwm: lpss: Fix off by one error in base_unit math in
pwm_lpss_prepare()
pwm: lpss: Add range limit check for the base_unit register value
Heiner Kallweit (2):
r8169: fix data corruption issue on RTL8402
r8169: fix operation under forced interrupt threading
Herbert Xu (2):
crypto: algif_aead - Do not set MAY_BACKLOG on the async path
crypto: algif_skcipher - EBUSY on aio should be an error
Håkon Bugge (2):
IB/mlx4: Fix starvation in paravirt mux/demux
IB/mlx4: Adjust delayed work when a dup is observed
Jason Gunthorpe (2):
RDMA/ucma: Fix locking for ctx->events_reported
RDMA/ucma: Add missing locking around rdma_leave_multicast()
Jian-Hong Pan (1):
ALSA: hda/realtek: Enable audio jacks of ASUS D700SA with ALC887
Johannes Berg (1):
nl80211: fix non-split wiphy information
John Donnelly (1):
scsi: target: tcmu: Fix warning: 'page' may be used uninitialized
Jonathan Lemon (1):
mlx4: handle non-napi callers to napi_poll
Julian Anastasov (1):
ipvs: clear skb->tstamp in forwarding path
Karsten Graul (1):
net/smc: fix valid DMBE buffer sizes
Krzysztof Kozlowski (1):
EDAC/ti: Fix handling of platform_get_irq() error
Laurent Pinchart (2):
media: uvcvideo: Set media controller entity functions
media: uvcvideo: Silence shift-out-of-bounds warning
Libing Zhou (1):
x86/nmi: Fix nmi_handle() duration miscalculation
Linus Walleij (4):
net: dsa: rtl8366: Check validity of passed VLANs
net: dsa: rtl8366: Refactor VLAN/PVID init
net: dsa: rtl8366: Skip PVID setting if not requested
net: dsa: rtl8366rb: Support all 4096 VLANs
Lorenzo Colitti (2):
usb: gadget: f_ncm: fix ncm_bitrate for SuperSpeed and above.
usb: gadget: u_ether: enable qmult on SuperSpeed Plus as well
Maciej Żenczykowski (1):
net/ipv4: always honour route mtu during forwarding
Madhuparna Bhowmik (1):
crypto: picoxcell - Fix potential race condition bug
Marek Vasut (2):
net: fec: Fix phy_device lookup for phy_reset_after_clk_enable()
net: fec: Fix PHY init after phy_reset_after_clk_enable()
Mark Salter (1):
drivers/perf: xgene_pmu: Fix uninitialized resource struct
Mark Tomlinson (1):
mtd: mtdoops: Don't write panic data twice
Michal Kalderon (2):
RDMA/qedr: Fix use of uninitialized field
RDMA/qedr: Fix inline size returned for iWARP
Michał Mirosław (1):
regulator: resolve supply after creating regulator
Minas Harutyunyan (1):
usb: dwc2: Fix INTR OUT transfers in DDMA mode.
Nathan Chancellor (1):
usb: dwc2: Fix parameter type in function pointer prototype
Nathan Lynch (1):
powerpc/pseries: explicitly reschedule during drmem_lmb list traversal
Neal Cardwell (1):
tcp: fix to update snd_wl1 in bulk receiver fast path
Necip Fazil Yildiran (2):
pinctrl: bcm: fix kconfig dependency warning when !GPIOLIB
arc: plat-hsdk: fix kconfig dependency warning when !RESET_CONTROLLER
Nicholas Mc Guire (2):
powerpc/pseries: Fix missing of_node_put() in rng_init()
powerpc/icp-hv: Fix missing of_node_put() in success path
Ong Boon Leong (1):
net: stmmac: use netif_tx_start|stop_all_queues() function
Pablo Neira Ayuso (1):
netfilter: nf_log: missing vlan offload tag and proto
Pali Rohár (1):
cpufreq: armada-37xx: Add missing MODULE_DEVICE_TABLE
Qiushi Wu (7):
media: rcar-vin: Fix a reference count leak.
media: rockchip/rga: Fix a reference count leak.
media: platform: fcp: Fix a reference count leak.
media: camss: Fix a reference count leak.
media: s5p-mfc: Fix a reference count leak
media: stm32-dcmi: Fix a reference count leak
media: ti-vpe: Fix a missing check and reference count leak
Ralph Campbell (1):
mm/memcg: fix device private memcg accounting
Rohit Maheshwari (1):
net/tls: sendfile fails with ktls offload
Rohit kumar (2):
ASoC: qcom: lpass-platform: fix memory leak
ASoC: qcom: lpass-cpu: fix concurrency issue
Samuel Holland (1):
Bluetooth: hci_uart: Cancel init work before unregistering
Sean Christopherson (1):
KVM: x86/mmu: Commit zap of remaining invalid pages when recovering
lpages
Shyam Prasad N (1):
cifs: Return the error from crypt_message when enc/dec key not found.
Souptick Joarder (2):
drivers/virt/fsl_hypervisor: Fix error handling path
misc: mic: scif: Fix error handling path
Srinivas Kandagatla (3):
slimbus: core: check get_addr before removing laddr ida
slimbus: core: do not enter to clock pause mode in core
slimbus: qcom-ngd-ctrl: disable ngd in qmi server down callback
Suravee Suthikulpanit (1):
KVM: SVM: Initialize prev_ga_tag before use
Suren Baghdasaryan (1):
mm, oom_adj: don't loop through tasks in __set_oom_adj when not
necessary
Sylwester Nawrocki (1):
media: Revert "media: exynos4-is: Add missed check for
pinctrl_lookup_state()"
Takashi Iwai (1):
ALSA: seq: oss: Avoid mutex lock for a long-time ioctl
Tero Kristo (1):
crypto: omap-sham - fix digcnt register handling with export/import
Thomas Gleixner (1):
net: enic: Cure the enic api locking trainwreck
Thomas Preston (2):
pinctrl: mcp23s08: Fix mcp23x17_regmap initialiser
pinctrl: mcp23s08: Fix mcp23x17 precious range
Tianjia Zhang (3):
crypto: mediatek - Fix wrong return value in mtk_desc_ring_alloc()
scsi: qla2xxx: Fix wrong return value in qla_nvme_register_hba()
scsi: csiostor: Fix wrong return value in csio_hw_prep_fw()
Todd Kjos (1):
binder: fix UAF when releasing todo list
Tom Rix (8):
media: tuner-simple: fix regression in simple_set_radio_freq
media: m5mols: Check function pointer in m5mols_sensor_power
media: tc358743: initialize variable
media: tc358743: cleanup tc358743_cec_isr
brcmfmac: check ndev pointer
drm/gma500: fix error check
video: fbdev: sis: fix null ptr dereference
mwifiex: fix double free
Tong Zhang (1):
tty: serial: earlycon dependency
Tyrel Datwyler (1):
tty: hvcs: Don't NULL tty->driver_data until hvcs_cleanup()
Vadim Pasternak (1):
platform/x86: mlx-platform: Remove PSU EEPROM configuration
Valentin Vidic (1):
net: korina: fix kfree of rx/tx descriptor array
Venkateswara Naralasetty (1):
ath10k: provide survey info as accumulated data
Vinay Kumar Yadav (3):
chelsio/chtls: fix socket lock
chelsio/chtls: correct netdevice for vlan interface
chelsio/chtls: correct function return and return type
Wilken Gottwalt (1):
net: usb: qmi_wwan: add Cellient MPL200 card
Xiaoliang Pang (1):
cypto: mediatek - fix leaks in mtk_desc_ring_alloc
Xie He (2):
net: hdlc: In hdlc_rcv, check to make sure dev is an HDLC device
net: hdlc_raw_eth: Clear the IFF_TX_SKB_SHARING flag after calling
ether_setup
Yonghong Song (1):
net: fix pos incrementment in ipv6_route_seq_next
dinghao.liu(a)zju.edu.cn (1):
backlight: sky81452-backlight: Fix refcount imbalance on error
Łukasz Stelmach (2):
spi: spi-s3c64xx: swap s3c64xx_spi_set_cs() and
s3c64xx_enable_datapath()
spi: spi-s3c64xx: Check return values
.../admin-guide/kernel-parameters.txt | 2 +-
Documentation/networking/ip-sysctl.txt | 4 +-
Makefile | 2 +-
arch/arc/plat-hsdk/Kconfig | 1 +
arch/arm/mm/cache-l2x0.c | 16 +-
arch/powerpc/include/asm/drmem.h | 18 +-
arch/powerpc/include/asm/reg.h | 2 +-
arch/powerpc/kernel/tau_6xx.c | 55 ++--
arch/powerpc/platforms/pseries/rng.c | 1 +
arch/powerpc/sysdev/xics/icp-hv.c | 1 +
arch/x86/events/amd/iommu.c | 2 +-
arch/x86/kernel/fpu/init.c | 30 +-
arch/x86/kernel/nmi.c | 5 +-
arch/x86/kvm/mmu.c | 1 +
arch/x86/kvm/svm.c | 1 +
crypto/algif_aead.c | 7 +-
crypto/algif_skcipher.c | 2 +-
drivers/android/binder.c | 35 +--
drivers/bluetooth/hci_ldisc.c | 1 +
drivers/bluetooth/hci_serdev.c | 2 +
drivers/cpufreq/armada-37xx-cpufreq.c | 6 +
drivers/crypto/chelsio/chtls/chtls_cm.c | 3 +
drivers/crypto/chelsio/chtls/chtls_io.c | 5 +-
drivers/crypto/ixp4xx_crypto.c | 2 +-
drivers/crypto/mediatek/mtk-platform.c | 8 +-
drivers/crypto/omap-sham.c | 3 +
drivers/crypto/picoxcell_crypto.c | 9 +-
drivers/edac/i5100_edac.c | 11 +-
drivers/edac/ti_edac.c | 3 +-
drivers/gpu/drm/gma500/cdv_intel_dp.c | 2 +-
drivers/hid/hid-input.c | 4 +-
drivers/hid/hid-roccat-kone.c | 23 +-
drivers/hwmon/pmbus/max34440.c | 3 -
drivers/infiniband/core/ucma.c | 6 +-
drivers/infiniband/hw/mlx4/cm.c | 3 +
drivers/infiniband/hw/mlx4/mad.c | 34 ++-
drivers/infiniband/hw/mlx4/mlx4_ib.h | 2 +
drivers/infiniband/hw/qedr/main.c | 2 +-
drivers/infiniband/hw/qedr/verbs.c | 2 +-
drivers/media/i2c/m5mols/m5mols_core.c | 3 +-
drivers/media/i2c/tc358743.c | 14 +-
drivers/media/platform/exynos4-is/media-dev.c | 4 +-
drivers/media/platform/mx2_emmaprp.c | 7 +-
drivers/media/platform/omap3isp/isp.c | 6 +-
.../media/platform/qcom/camss/camss-csiphy.c | 4 +-
drivers/media/platform/rcar-fcp.c | 4 +-
drivers/media/platform/rcar-vin/rcar-dma.c | 4 +-
drivers/media/platform/rockchip/rga/rga-buf.c | 1 +
drivers/media/platform/s5p-mfc/s5p_mfc_pm.c | 4 +-
drivers/media/platform/stm32/stm32-dcmi.c | 4 +-
drivers/media/platform/ti-vpe/vpe.c | 2 +
drivers/media/tuners/tuner-simple.c | 5 +-
drivers/media/usb/uvc/uvc_ctrl.c | 6 +-
drivers/media/usb/uvc/uvc_entity.c | 35 +++
drivers/mfd/sm501.c | 8 +-
drivers/misc/mic/scif/scif_rma.c | 4 +-
drivers/misc/vmw_vmci/vmci_queue_pair.c | 10 +-
drivers/mtd/lpddr/lpddr2_nvm.c | 35 ++-
drivers/mtd/mtdoops.c | 11 +-
drivers/net/dsa/realtek-smi.h | 4 +-
drivers/net/dsa/rtl8366.c | 280 ++++++++++--------
drivers/net/dsa/rtl8366rb.c | 2 +-
drivers/net/ethernet/cisco/enic/enic.h | 1 +
drivers/net/ethernet/cisco/enic/enic_api.c | 6 +
drivers/net/ethernet/cisco/enic/enic_main.c | 27 +-
drivers/net/ethernet/freescale/fec_main.c | 35 ++-
drivers/net/ethernet/ibm/ibmveth.c | 19 +-
drivers/net/ethernet/korina.c | 3 +-
drivers/net/ethernet/mellanox/mlx4/en_rx.c | 3 +
drivers/net/ethernet/mellanox/mlx4/en_tx.c | 2 +-
.../ethernet/mellanox/mlx5/core/lib/clock.c | 5 +-
drivers/net/ethernet/realtek/r8169.c | 54 ++--
.../net/ethernet/stmicro/stmmac/stmmac_main.c | 33 +--
drivers/net/usb/qmi_wwan.c | 1 +
drivers/net/wan/hdlc.c | 10 +-
drivers/net/wan/hdlc_raw_eth.c | 1 +
drivers/net/wireless/ath/ath10k/ce.c | 2 +-
drivers/net/wireless/ath/ath10k/mac.c | 2 +-
drivers/net/wireless/ath/ath6kl/main.c | 3 +
drivers/net/wireless/ath/ath6kl/wmi.c | 5 +
drivers/net/wireless/ath/ath9k/htc_hst.c | 2 +
drivers/net/wireless/ath/wcn36xx/main.c | 2 +-
.../broadcom/brcm80211/brcmfmac/core.c | 2 +-
.../net/wireless/intel/iwlwifi/mvm/mac80211.c | 9 +-
drivers/net/wireless/marvell/mwifiex/scan.c | 2 +-
drivers/net/wireless/marvell/mwifiex/sdio.c | 2 +
.../net/wireless/quantenna/qtnfmac/commands.c | 2 +
drivers/perf/xgene_pmu.c | 32 +-
drivers/pinctrl/bcm/Kconfig | 1 +
drivers/pinctrl/pinctrl-mcp23s08.c | 24 +-
drivers/platform/x86/mlx-platform.c | 15 +-
drivers/pwm/pwm-lpss.c | 7 +-
drivers/regulator/core.c | 21 +-
drivers/scsi/be2iscsi/be_main.c | 4 +-
drivers/scsi/csiostor/csio_hw.c | 2 +-
drivers/scsi/qla2xxx/qla_nvme.c | 2 +-
drivers/scsi/qla4xxx/ql4_os.c | 2 +-
drivers/slimbus/core.c | 6 +-
drivers/slimbus/qcom-ngd-ctrl.c | 4 +
drivers/spi/spi-s3c64xx.c | 52 +++-
.../staging/rtl8192u/ieee80211/ieee80211_rx.c | 2 +-
drivers/target/target_core_user.c | 2 +-
drivers/tty/hvc/hvcs.c | 14 +-
drivers/tty/pty.c | 2 +-
drivers/tty/serial/Kconfig | 1 +
drivers/usb/dwc2/gadget.c | 40 ++-
drivers/usb/dwc2/params.c | 2 +-
drivers/usb/gadget/function/f_ncm.c | 6 +-
drivers/usb/gadget/function/u_ether.c | 2 +-
drivers/video/backlight/sky81452-backlight.c | 1 +
drivers/video/fbdev/aty/radeon_base.c | 2 +-
drivers/video/fbdev/sis/init.c | 11 +-
drivers/video/fbdev/vga16fb.c | 14 +-
drivers/virt/fsl_hypervisor.c | 17 +-
fs/cifs/asn1.c | 16 +-
fs/cifs/smb2ops.c | 2 +-
fs/proc/base.c | 3 +-
fs/quota/quota_v2.c | 1 +
fs/xfs/libxfs/xfs_rtbitmap.c | 11 +-
fs/xfs/xfs_fsmap.c | 3 +
include/linux/oom.h | 1 +
include/linux/sched/coredump.h | 1 +
include/net/ip.h | 6 +
include/net/netfilter/nf_log.h | 1 +
kernel/fork.c | 21 ++
mm/memcontrol.c | 5 +-
mm/oom_kill.c | 2 +
net/ipv4/icmp.c | 7 +-
net/ipv4/netfilter/nf_log_arp.c | 19 +-
net/ipv4/netfilter/nf_log_ipv4.c | 6 +-
net/ipv4/route.c | 4 +-
net/ipv4/tcp_input.c | 2 +
net/ipv6/ip6_fib.c | 4 +-
net/ipv6/netfilter/nf_log_ipv6.c | 8 +-
net/netfilter/ipvs/ip_vs_xmit.c | 6 +
net/netfilter/nf_log_common.c | 12 +
net/nfc/netlink.c | 2 +-
net/sched/act_tunnel_key.c | 2 +-
net/smc/smc_core.c | 2 +-
net/tipc/msg.c | 3 +-
net/tls/tls_device.c | 11 +-
net/wireless/nl80211.c | 5 +-
sound/core/seq/oss/seq_oss.c | 7 +-
sound/firewire/bebob/bebob_hwdep.c | 3 +-
sound/pci/hda/patch_realtek.c | 42 +++
sound/soc/qcom/lpass-cpu.c | 16 -
sound/soc/qcom/lpass-platform.c | 3 +-
147 files changed, 968 insertions(+), 566 deletions(-)
--
2.25.1
1
145
The openEuler 21.03 5.10 kernel defconfig is adapted from the openEuler 20.09 one; please review it.
Any problems can be discussed in the issue below:
https://gitee.com/openeuler/kernel/issues/I22VGE
1
1
There are no topics for this kernel sig regular meeting, so the meeting is cancelled; topic proposals are welcome.
1
0

29 Oct '20
In AArch64, the guest reads the same ID register values as the
host; both read the values from arm64_ftr_regs. This patch
series adds support to emulate and configure the ID registers so that we
can control the values of the ID registers that the guest reads.
Peng Liang (5):
arm64: add a helper function to traverse arm64_ftr_regs
kvm: arm64: emulate the ID registers
kvm: arm64: make ID registers configurable
kvm: arm64: add KVM_CAP_ARM_CPU_FEATURE extension
kvm: fix compile error when including linux/kvm.h
arch/arm64/include/asm/cpufeature.h | 2 +
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/kernel/cpufeature.c | 13 ++++++
arch/arm64/kvm/sys_regs.c | 71 ++++++++++++++++++++++-------
include/uapi/linux/kvm.h | 13 ++++++
virt/kvm/arm/arm.c | 23 ++++++++++
6 files changed, 107 insertions(+), 17 deletions(-)
--
2.25.1
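For context, a rough sketch of how a userspace VMM might read an ID register through the existing ONE_REG interface on an arm64 host (illustrative only; the vcpu_fd handling and any write support enabled by this series are assumptions, not shown by the cover letter):

/* Hypothetical VMM-side sketch: read ID_AA64PFR0_EL1 of a vCPU. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* op0=3, op1=0, CRn=0, CRm=4, op2=0 encodes ID_AA64PFR0_EL1 */
#define REG_ID_AA64PFR0_EL1	ARM64_SYS_REG(3, 0, 0, 4, 0)

static int get_id_aa64pfr0(int vcpu_fd, uint64_t *val)
{
	struct kvm_one_reg reg = {
		.id   = REG_ID_AA64PFR0_EL1,
		.addr = (uint64_t)(uintptr_t)val,
	};

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}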
1
5

[PATCH 1/5] khugepaged: do not stop collapse if less than half PTEs are referenced
by Yang Yingliang 29 Oct '20
29 Oct '20
From: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
mainline inclusion
from mainline-v5.8-rc1
commit ffe945e633b527d5a4577b42cbadec3c7cbcf096
category: bugfix
bugzilla: 36230
CVE: NA
-------------------------------------------------
__collapse_huge_page_swapin() checks the number of referenced PTEs to
decide if the memory range is hot enough to justify swapin.
We have a few problems with the approach:
- It is way too late: we can do the check much earlier and save time.
khugepaged_scan_pmd() already knows if we have any pages to swap in
and the number of referenced pages.
- It stops the collapse altogether if there are not enough referenced pages,
not only the swap-in.
Fix it by making the right check early. We can also avoid the additional
page table scan if khugepaged_scan_pmd() hasn't found any swap
entries.
Fixes: 0db501f7a34c ("mm, thp: convert from optimistic swapin collapsing to conservative")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Tested-by: Zi Yan <ziy(a)nvidia.com>
Reviewed-by: William Kucharski <william.kucharski(a)oracle.com>
Reviewed-by: Zi Yan <ziy(a)nvidia.com>
Acked-by: Yang Shi <yang.shi(a)linux.alibaba.com>
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: John Hubbard <jhubbard(a)nvidia.com>
Cc: Mike Kravetz <mike.kravetz(a)oracle.com>
Cc: Ralph Campbell <rcampbell(a)nvidia.com>
Link: http://lkml.kernel.org/r/20200416160026.16538-3-kirill.shutemov@linux.intel…
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/khugepaged.c | 27 +++++++++++----------------
1 file changed, 11 insertions(+), 16 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5883fd75d6fc..0ad9f2b2b33e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -905,11 +905,6 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
.pgoff = linear_page_index(vma, address),
};
- /* we only decide to swapin, if there is enough young ptes */
- if (referenced < HPAGE_PMD_NR/2) {
- trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
- return false;
- }
vmf.pte = pte_offset_map(pmd, address);
for (; vmf.address < address + HPAGE_PMD_NR*PAGE_SIZE;
vmf.pte++, vmf.address += PAGE_SIZE) {
@@ -949,7 +944,7 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
static void collapse_huge_page(struct mm_struct *mm,
unsigned long address,
struct page **hpage,
- int node, int referenced)
+ int node, int referenced, int unmapped)
{
pmd_t *pmd, _pmd;
pte_t *pte;
@@ -1007,7 +1002,8 @@ static void collapse_huge_page(struct mm_struct *mm,
* If it fails, we release mmap_sem and jump out_nolock.
* Continuing to collapse causes inconsistency.
*/
- if (!__collapse_huge_page_swapin(mm, vma, address, pmd, referenced)) {
+ if (unmapped && !__collapse_huge_page_swapin(mm, vma, address,
+ pmd, referenced)) {
mem_cgroup_cancel_charge(new_page, memcg, true);
up_read(&mm->mmap_sem);
goto out_nolock;
@@ -1214,22 +1210,21 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
mmu_notifier_test_young(vma->vm_mm, address))
referenced++;
}
- if (writable) {
- if (referenced) {
- result = SCAN_SUCCEED;
- ret = 1;
- } else {
- result = SCAN_LACK_REFERENCED_PAGE;
- }
- } else {
+ if (!writable) {
result = SCAN_PAGE_RO;
+ } else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) {
+ result = SCAN_LACK_REFERENCED_PAGE;
+ } else {
+ result = SCAN_SUCCEED;
+ ret = 1;
}
out_unmap:
pte_unmap_unlock(pte, ptl);
if (ret) {
node = khugepaged_find_target_node();
/* collapse_huge_page will return with the mmap_sem released */
- collapse_huge_page(mm, address, hpage, node, referenced);
+ collapse_huge_page(mm, address, hpage, node,
+ referenced, unmapped);
}
out:
trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
--
2.25.1
1
4
Juergen Gross (4):
xen/events: add a new "late EOI" evtchn framework
xen/events: switch user event channels to lateeoi model
xen/events: use a common cpu hotplug hook for event channels
xen/events: defer eoi in case of excessive number of events
.../admin-guide/kernel-parameters.txt | 8 +
drivers/xen/events/events_2l.c | 7 +-
drivers/xen/events/events_base.c | 363 +++++++++++++++++-
drivers/xen/events/events_fifo.c | 70 ++--
drivers/xen/events/events_internal.h | 17 +-
drivers/xen/evtchn.c | 7 +-
include/xen/events.h | 21 +
7 files changed, 420 insertions(+), 73 deletions(-)
--
2.25.1
1
4
openEuler selects 5.10 as the next long-term maintained kernel version (public notice) - link to this post: <https://gitee.com/openeuler/community/blob/master/sig/Kernel/kernel-upgrade…>
Linux 5.10 is expected to be this year's LTS (long-term support) release of the Linux community. After several rounds of discussion, the openEuler community has chosen 5.10 as the next long-term maintained version of the openEuler kernel (openEuler Long-term support Kernel, OLK). The openEuler community will provide no less than 4 years of maintenance plus no less than 2 years of extended maintenance. According to the plan, openEuler 21.03, 21.09 and openEuler 22.03 LTS will all use this kernel version. (The exact schedule is subject to the announcements on the openEuler community website.)
Upgrade plan:
* After v5.10-rc1 is released, the openEuler master branch switches to 5.10.
* Since the mainline rc versions are still being updated, openeuler_defconfig is provided in the src-openeuler/kernel repository for now and will be committed to the source repository after the official 5.10 release.
* openeuler_defconfig is adapted from the openEuler 20.09 config and will be adjusted later according to requirements.
* The version number format switches to 5.10.0-<devel_release>.<maintainence_release>.
* arm64 and x86_64 are supported first; risc-v support requires adaptation by the risc-v sig.
* After upstream 5.10 is officially released (expected late December), openeuler/kernel will formally create the OLK-5.10 branch (OLK: openEuler Long-term support Kernel) as the long-term maintenance branch for 5.10 and start accepting patches.
* Before upstream 5.10 is officially released, patches targeting 5.10 can be sent to kernel(a)openeuler.org <mailto:kernel@openeuler.org> as RFC patches for early review and discussion.
* We may also create the OLK-5.10 branch earlier to merge some patches for verification in advance, but before upstream 5.10 is officially released this branch may be rebased and force-pushed frequently.
Requirements for OLK-5.10 or the openEuler 21.03 kernel
* Requirements can be raised at https://gitee.com/openeuler/kernel/issues.
* If you have requests regarding the defconfig, please also file an issue at the link above.
* openEuler 21.03 plans to use this kernel version; after January 2021 the openEuler-21.03 maintenance branch is expected to be forked from OLK-5.10, and patch acceptance into openEuler 21.03 will then be restricted.
Notes and clarifications
* OLK stands for openEuler Long-term support Kernel. The kernels of openEuler LTS releases and some innovation releases are maintained on branches forked from OLK.
* The openEuler 5.10 kernel is not an evolution of the openEuler 4.19 kernel but a re-selection based on the upstream kernel. Therefore, if you previously had patches merged into openEuler 4.19 that did not make it into upstream 5.10, you need to re-adapt them and submit them to the openEuler 5.10 kernel.
* The kabi of openEuler 5.10 and openEuler 4.19 is not compatible; a ko built on 4.19 cannot be installed and used directly on 5.10 and must be re-adapted and rebuilt.
* The openEuler 4.19 kernel is still within its maintenance period. If you are using the 4.19 kernel, there is no need to worry; you will continue to receive 4.19 updates and enhancements.
Feedback
* During the kernel upgrade, if you run into compatibility or functionality problems, please file an issue at https://gitee.com/openeuler/kernel/issues. Questions can also be raised in PR comments or discussed via issues.
* You can also email kernel(a)openeuler.org <mailto:kernel@openeuler.org> to propose a topic for discussion at the kernel sig regular meeting.
* For issues that cannot be coordinated or resolved within the kernel sig, you can also bring a topic to the tc for discussion.
Related meeting minutes
Minutes of the kernel-sig meeting on switching to 5.10:
https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/thread/6…
TC topic and meeting minutes on the openEuler kernel version number:
* Topic and discussion records:
https://mailweb.openeuler.org/hyperkitty/list/tc@openeuler.org/thread/KOHJN… https://mailweb.openeuler.org/hyperkitty/list/tc@openeuler.org/thread/VNJQ6…
* Minutes: https://mailweb.openeuler.org/hyperkitty/list/tc@openeuler.org/thread/ZLAWQ…
1
0

27 Oct '20
From: Juergen Gross <jgross(a)suse.com>
mainline inclusion
from mainline-v5.10
commit 073d0552ead5bfc7a3a9c01de590e924f11b5dd2
category: bugfix
bugzilla: NA
CVE: CVE-2020-27675
--------------------------------
Today it can happen that an event channel is being removed from the
system while the event handling loop is active. This can lead to a
race resulting in crashes or WARN() splats when trying to access the
irq_info structure related to the event channel.
Fix this problem by using a rwlock taken as reader in the event
handling loop and as writer when deallocating the irq_info structure.
As the observed problem was a NULL dereference in evtchn_from_irq()
make this function more robust against races by testing the irq_info
pointer to be not NULL before dereferencing it.
And finally make all accesses to evtchn_to_irq[row][col] atomic ones
in order to avoid seeing partial updates of an array element in irq
handling. Note that irq handling can be entered only for event channels
which have been valid before, so any not populated row isn't a problem
in this regard, as rows are only ever added and never removed.
This is XSA-331.
Cc: stable(a)vger.kernel.org
Reported-by: Marek Marczykowski-Górecki <marmarek(a)invisiblethingslab.com>
Reported-by: Jinoh Kang <luke1337(a)theori.io>
Signed-off-by: Juergen Gross <jgross(a)suse.com>
Reviewed-by: Stefano Stabellini <sstabellini(a)kernel.org>
Reviewed-by: Wei Liu <wl(a)xen.org>
Conflicts:
drivers/xen/events/events_base.c
[yyl: adjust context]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/xen/events/events_base.c | 40 ++++++++++++++++++++++++++++----
1 file changed, 35 insertions(+), 5 deletions(-)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 95e5a9300ff0..75eade7d2017 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -32,6 +32,7 @@
#include <linux/slab.h>
#include <linux/irqnr.h>
#include <linux/pci.h>
+#include <linux/spinlock.h>
#ifdef CONFIG_X86
#include <asm/desc.h>
@@ -69,6 +70,23 @@ const struct evtchn_ops *evtchn_ops;
*/
static DEFINE_MUTEX(irq_mapping_update_lock);
+/*
+ * Lock protecting event handling loop against removing event channels.
+ * Adding of event channels is no issue as the associated IRQ becomes active
+ * only after everything is setup (before request_[threaded_]irq() the handler
+ * can't be entered for an event, as the event channel will be unmasked only
+ * then).
+ */
+static DEFINE_RWLOCK(evtchn_rwlock);
+
+/*
+ * Lock hierarchy:
+ *
+ * irq_mapping_update_lock
+ * evtchn_rwlock
+ * IRQ-desc lock
+ */
+
static LIST_HEAD(xen_irq_list_head);
/* IRQ <-> VIRQ mapping. */
@@ -101,7 +119,7 @@ static void clear_evtchn_to_irq_row(unsigned row)
unsigned col;
for (col = 0; col < EVTCHN_PER_ROW; col++)
- evtchn_to_irq[row][col] = -1;
+ WRITE_ONCE(evtchn_to_irq[row][col], -1);
}
static void clear_evtchn_to_irq_all(void)
@@ -138,7 +156,7 @@ static int set_evtchn_to_irq(unsigned evtchn, unsigned irq)
clear_evtchn_to_irq_row(row);
}
- evtchn_to_irq[row][col] = irq;
+ WRITE_ONCE(evtchn_to_irq[row][col], irq);
return 0;
}
@@ -148,7 +166,7 @@ int get_evtchn_to_irq(unsigned evtchn)
return -1;
if (evtchn_to_irq[EVTCHN_ROW(evtchn)] == NULL)
return -1;
- return evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)];
+ return READ_ONCE(evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)]);
}
/* Get info for IRQ */
@@ -246,10 +264,14 @@ static void xen_irq_info_cleanup(struct irq_info *info)
*/
unsigned int evtchn_from_irq(unsigned irq)
{
- if (unlikely(WARN(irq >= nr_irqs, "Invalid irq %d!\n", irq)))
+ const struct irq_info *info = NULL;
+
+ if (likely(irq < nr_irqs))
+ info = info_for_irq(irq);
+ if (!info)
return 0;
- return info_for_irq(irq)->evtchn;
+ return info->evtchn;
}
unsigned irq_from_evtchn(unsigned int evtchn)
@@ -425,16 +447,21 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
static void xen_free_irq(unsigned irq)
{
struct irq_info *info = irq_get_chip_data(irq);
+ unsigned long flags;
if (WARN_ON(!info))
return;
+ write_lock_irqsave(&evtchn_rwlock, flags);
+
list_del(&info->list);
irq_set_chip_data(irq, NULL);
WARN_ON(info->refcnt > 0);
+ write_unlock_irqrestore(&evtchn_rwlock, flags);
+
kfree(info);
/* Legacy IRQ descriptors are managed by the arch. */
@@ -1220,6 +1247,8 @@ static void __xen_evtchn_do_upcall(void)
int cpu = get_cpu();
unsigned count;
+ read_lock(&evtchn_rwlock);
+
do {
vcpu_info->evtchn_upcall_pending = 0;
@@ -1236,6 +1265,7 @@ static void __xen_evtchn_do_upcall(void)
out:
+ read_unlock(&evtchn_rwlock);
put_cpu();
}
--
2.25.1
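As a rough userspace analogy for the locking scheme used here (read lock on the hot lookup path, write lock around teardown so a lookup can never race with the free; purely illustrative, not the Xen code):

#include <pthread.h>
#include <stdlib.h>

static pthread_rwlock_t info_rwlock = PTHREAD_RWLOCK_INITIALIZER;
static int *info;	/* stands in for the per-channel irq_info */

/* Hot path: runs concurrently with other readers, never with the writer. */
static int handle_event(void)
{
	int val = 0;

	pthread_rwlock_rdlock(&info_rwlock);
	if (info)		/* robust against a channel already removed */
		val = *info;
	pthread_rwlock_unlock(&info_rwlock);
	return val;
}

/* Teardown: excludes all readers before the data is freed. */
static void remove_channel(void)
{
	int *old;

	pthread_rwlock_wrlock(&info_rwlock);
	old = info;
	info = NULL;
	pthread_rwlock_unlock(&info_rwlock);
	free(old);
}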
1
0
From: Joerg Roedel <jroedel(a)suse.de>
mainline inclusion
from mainline-v4.20-rc1
commit 6954cf9bfda153f9544c63761aabf0199710aec3
category: bugfix
bugzilla: NA
CVE: NA
-------------------
Remove the iommu_ prefix from the function and a few other
static data structures so that the iommu_release_device name
can be re-used in iommu core code.
Signed-off-by: Joerg Roedel <jroedel(a)suse.de>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/iommu/iommu-sysfs.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/iommu/iommu-sysfs.c b/drivers/iommu/iommu-sysfs.c
index 36d1a7ce7fc4..71c2249d3260 100644
--- a/drivers/iommu/iommu-sysfs.c
+++ b/drivers/iommu/iommu-sysfs.c
@@ -22,25 +22,25 @@ static struct attribute *devices_attr[] = {
NULL,
};
-static const struct attribute_group iommu_devices_attr_group = {
+static const struct attribute_group devices_attr_group = {
.name = "devices",
.attrs = devices_attr,
};
-static const struct attribute_group *iommu_dev_groups[] = {
- &iommu_devices_attr_group,
+static const struct attribute_group *dev_groups[] = {
+ &devices_attr_group,
NULL,
};
-static void iommu_release_device(struct device *dev)
+static void release_device(struct device *dev)
{
kfree(dev);
}
static struct class iommu_class = {
.name = "iommu",
- .dev_release = iommu_release_device,
- .dev_groups = iommu_dev_groups,
+ .dev_release = release_device,
+ .dev_groups = dev_groups,
};
static int __init iommu_dev_init(void)
--
2.25.1
1
6

[PATCH 1/8] bcache: use a separate data structure for the on-disk super block
by Yang Yingliang 26 Oct '20
26 Oct '20
From: Christoph Hellwig <hch(a)lst.de>
mainline inclusion
from mainline-5.6-rc1
commit a702a692cd7559053ea573f4e2c84828f0e62824
category: feature
bugzilla: 43003
CVE: NA
---------------------------
Split out an on-disk version struct cache_sb with the proper endianness
annotations. This fixes a fair chunk of sparse warnings, but there are
some left due to the way the checksum is defined.
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Coly Li <colyli(a)suse.de>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Acked-by: Hanjun Guo <guohanjun(a)huawei.com>
Reviewed-by: Yufen Yu <yuyufen(a)huawei.com>
Signed-off-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/md/bcache/super.c | 6 ++---
include/uapi/linux/bcache.h | 51 +++++++++++++++++++++++++++++++++++++
2 files changed, 54 insertions(+), 3 deletions(-)
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 78854837f398..8f8a0a69cc0e 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -63,14 +63,14 @@ static const char *read_super(struct cache_sb *sb, struct block_device *bdev,
struct page **res)
{
const char *err;
- struct cache_sb *s;
+ struct cache_sb_disk *s;
struct buffer_head *bh = __bread(bdev, 1, SB_SIZE);
unsigned int i;
if (!bh)
return "IO error";
- s = (struct cache_sb *) bh->b_data;
+ s = (struct cache_sb_disk *)bh->b_data;
sb->offset = le64_to_cpu(s->offset);
sb->version = le64_to_cpu(s->version);
@@ -209,7 +209,7 @@ static void write_bdev_super_endio(struct bio *bio)
static void __write_super(struct cache_sb *sb, struct bio *bio)
{
- struct cache_sb *out = page_address(bio_first_page_all(bio));
+ struct cache_sb_disk *out = page_address(bio_first_page_all(bio));
unsigned int i;
bio->bi_iter.bi_sector = SB_SECTOR;
diff --git a/include/uapi/linux/bcache.h b/include/uapi/linux/bcache.h
index 5d4f58e059fd..1d8b3a9fc080 100644
--- a/include/uapi/linux/bcache.h
+++ b/include/uapi/linux/bcache.h
@@ -156,6 +156,57 @@ static inline struct bkey *bkey_idx(const struct bkey *k, unsigned int nr_keys)
#define BDEV_DATA_START_DEFAULT 16 /* sectors */
+struct cache_sb_disk {
+ __le64 csum;
+ __le64 offset; /* sector where this sb was written */
+ __le64 version;
+
+ __u8 magic[16];
+
+ __u8 uuid[16];
+ union {
+ __u8 set_uuid[16];
+ __le64 set_magic;
+ };
+ __u8 label[SB_LABEL_SIZE];
+
+ __le64 flags;
+ __le64 seq;
+ __le64 pad[8];
+
+ union {
+ struct {
+ /* Cache devices */
+ __le64 nbuckets; /* device size */
+
+ __le16 block_size; /* sectors */
+ __le16 bucket_size; /* sectors */
+
+ __le16 nr_in_set;
+ __le16 nr_this_dev;
+ };
+ struct {
+ /* Backing devices */
+ __le64 data_offset;
+
+ /*
+ * block_size from the cache device section is still used by
+ * backing devices, so don't add anything here until we fix
+ * things to not need it for backing devices anymore
+ */
+ };
+ };
+
+ __le32 last_mount; /* time overflow in y2106 */
+
+ __le16 first_bucket;
+ union {
+ __le16 njournal_buckets;
+ __le16 keys;
+ };
+ __le64 d[SB_JOURNAL_BUCKETS]; /* journal buckets */
+};
+
struct cache_sb {
__u64 csum;
__u64 offset; /* sector where this sb was written */
--
2.25.1
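A small sketch of the benefit of the separate on-disk struct (illustrative kernel-context fragment, not from the patch): with the __le64/__le16 annotations, sparse flags any use of an on-disk field that skips the endianness conversion, while the in-core struct cache_sb keeps native-endian values:

/* Illustrative only: converting on-disk fields into the in-core superblock,
 * in the same way read_super() does for each member.
 */
static void sb_from_disk(struct cache_sb *sb, const struct cache_sb_disk *s)
{
	sb->offset	= le64_to_cpu(s->offset);	/* __le64 -> native */
	sb->version	= le64_to_cpu(s->version);
	sb->block_size	= le16_to_cpu(s->block_size);	/* __le16 -> native */

	/*
	 * Using s->offset directly in arithmetic here would now trigger a
	 * sparse "restricted __le64" warning instead of silently working
	 * only on little-endian hosts.
	 */
}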
1
7
From: Pavel Begunkov <asml.silence(a)gmail.com>
mainline inclusion
from mainline-v5.6-rc1
commit 28ca0d6d39ab1d01c86762c82a585b7cedd2920c
category: bugfix
bugzilla: 35619
CVE: NA
--------------------------------
As other *continue() helpers, this continues iteration from a given
position.
Signed-off-by: Pavel Begunkov <asml.silence(a)gmail.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/list.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/include/linux/list.h b/include/linux/list.h
index de04cc5ed536..0e540581d52c 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -455,6 +455,16 @@ static inline void list_splice_tail_init(struct list_head *list,
#define list_for_each(pos, head) \
for (pos = (head)->next; pos != (head); pos = pos->next)
+/**
+ * list_for_each_continue - continue iteration over a list
+ * @pos: the &struct list_head to use as a loop cursor.
+ * @head: the head for your list.
+ *
+ * Continue to iterate over a list, continuing after the current position.
+ */
+#define list_for_each_continue(pos, head) \
+ for (pos = pos->next; pos != (head); pos = pos->next)
+
/**
* list_for_each_prev - iterate over a list backwards
* @pos: the &struct list_head to use as a loop cursor.
--
2.25.1
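A minimal usage sketch of the new helper (hypothetical kernel-context code, not part of the patch): a first loop finds a cursor position, and list_for_each_continue() resumes the walk from there rather than from the head:

#include <linux/list.h>

struct item {
	int val;
	struct list_head node;
};

/* Return the node of the first negative item, or head if there is none. */
static struct list_head *first_negative(struct list_head *head)
{
	struct list_head *pos;

	list_for_each(pos, head)
		if (list_entry(pos, struct item, node)->val < 0)
			return pos;
	return head;
}

/* Count the negative items that come after the first negative one. */
static int negatives_after_first(struct list_head *head)
{
	struct list_head *pos = first_negative(head);
	int n = 0;

	list_for_each_continue(pos, head)	/* continues after 'pos' */
		if (list_entry(pos, struct item, node)->val < 0)
			n++;
	return n;
}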
1
1

22 Oct '20
From: Joerg Roedel <jroedel(a)suse.de>
mainline inclusion
from mainline-v4.20-rc1
commit dbba197edf32209d110727a02d3a91de4c88520f
category: feature
bugzilla: NA
CVE: NA
-------------------
Some places in the kernel check the iommu_group pointer in
'struct device' in order to find out whether a device is
mapped by an IOMMU.
This is not good way to make this check, as the pointer will
be moved to 'struct dev_iommu_data'. This way to make the
check is also not very readable.
Introduce an explicit function to perform this check.
Acked-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Acked-by: Robin Murphy <robin.murphy(a)arm.com>
Signed-off-by: Joerg Roedel <jroedel(a)suse.de>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/device.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/include/linux/device.h b/include/linux/device.h
index ee4ed3af30d0..cb9df20a9c97 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -1090,6 +1090,16 @@ static inline struct device *kobj_to_dev(struct kobject *kobj)
return container_of(kobj, struct device, kobj);
}
+/**
+ * device_iommu_mapped - Returns true when the device DMA is translated
+ * by an IOMMU
+ * @dev: Device to perform the check on
+ */
+static inline bool device_iommu_mapped(struct device *dev)
+{
+ return (dev->iommu_group != NULL);
+}
+
/* Get the wakeup routines, which depend on struct device */
#include <linux/pm_wakeup.h>
--
2.25.1
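A minimal sketch of the intended call site (hypothetical driver code, not part of the patch), replacing a direct dev->iommu_group check:

#include <linux/device.h>

/* Hypothetical probe fragment: report whether DMA goes through an IOMMU. */
static int example_probe(struct device *dev)
{
	if (device_iommu_mapped(dev))
		dev_info(dev, "DMA is translated by an IOMMU\n");
	else
		dev_info(dev, "DMA is not translated by an IOMMU\n");

	return 0;
}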
1
5

[PATCH 1/6] net/hinic: Fix the driver does not report an error when setting MAC fails
by Yang Yingliang 22 Oct '20
22 Oct '20
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
When the ip link command is used in the PF to configure the MAC for a VF,
status 4 is returned when a MAC is then set on the VF; when
the PF driver receives status 4 from the firmware, the MAC
setting has failed and an error should be reported.
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Wangxiaoyun <cloud.wangxiaoyun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
.../net/ethernet/huawei/hinic/hinic_main.c | 7 +++-
.../net/ethernet/huawei/hinic/hinic_nic_cfg.c | 41 ++++++++++++-------
2 files changed, 31 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
index b5d0bdadf509..cf7b6ceef060 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
@@ -1305,8 +1305,11 @@ static int hinic_set_mac_addr(struct net_device *netdev, void *addr)
err = hinic_update_mac(nic_dev->hwdev, netdev->dev_addr, saddr->sa_data,
0, func_id);
- if (err)
- return err;
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to update mac, err: %d\n",
+ err);
+ return err == HINIC_PF_SET_VF_ALREADY ? -EPERM : err;
+ }
memcpy(netdev->dev_addr, saddr->sa_data, ETH_ALEN);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
index a42dea0ad707..2b349e17260b 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
@@ -231,6 +231,23 @@ int hinic_get_fw_support_func(void *hwdev)
#define HINIC_ADD_VLAN_IN_MAC 0x8000
#define HINIC_VLAN_ID_MASK 0x7FFF
+#define PF_SET_VF_MAC(hwdev, status) \
+ (HINIC_FUNC_TYPE(hwdev) == TYPE_VF && \
+ (status) == HINIC_PF_SET_VF_ALREADY)
+
+static int hinic_check_mac_status(struct hinic_hwdev *hwdev, u8 status,
+ u16 vlan_id)
+{
+ if ((status && status != HINIC_MGMT_STATUS_EXIST) ||
+ (vlan_id & CHECK_IPSU_15BIT && status == HINIC_MGMT_STATUS_EXIST)) {
+ if (PF_SET_VF_MAC(hwdev, status))
+ return 0;
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
int hinic_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id)
{
@@ -255,17 +272,14 @@ int hinic_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id)
err = l2nic_msg_to_mgmt_sync(hwdev, HINIC_PORT_CMD_SET_MAC, &mac_info,
sizeof(mac_info), &mac_info, &out_size);
if (err || !out_size ||
- (mac_info.status && mac_info.status != HINIC_MGMT_STATUS_EXIST &&
- mac_info.status != HINIC_PF_SET_VF_ALREADY) ||
- (mac_info.vlan_id & CHECK_IPSU_15BIT &&
- mac_info.status == HINIC_MGMT_STATUS_EXIST)) {
+ hinic_check_mac_status(hwdev, mac_info.status, mac_info.vlan_id)) {
nic_err(nic_hwdev->dev_hdl,
"Failed to update MAC, err: %d, status: 0x%x, out size: 0x%x\n",
err, mac_info.status, out_size);
- return -EINVAL;
+ return -EIO;
}
- if (mac_info.status == HINIC_PF_SET_VF_ALREADY) {
+ if (PF_SET_VF_MAC(nic_hwdev, mac_info.status)) {
nic_warn(nic_hwdev->dev_hdl, "PF has already set VF mac, Ignore set operation\n");
return HINIC_PF_SET_VF_ALREADY;
}
@@ -302,13 +316,13 @@ int hinic_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id)
err = l2nic_msg_to_mgmt_sync(hwdev, HINIC_PORT_CMD_DEL_MAC, &mac_info,
sizeof(mac_info), &mac_info, &out_size);
if (err || !out_size ||
- (mac_info.status && mac_info.status != HINIC_PF_SET_VF_ALREADY)) {
+ (mac_info.status && !PF_SET_VF_MAC(nic_hwdev, mac_info.status))) {
nic_err(nic_hwdev->dev_hdl,
"Failed to delete MAC, err: %d, status: 0x%x, out size: 0x%x\n",
err, mac_info.status, out_size);
- return -EINVAL;
+ return -EIO;
}
- if (mac_info.status == HINIC_PF_SET_VF_ALREADY) {
+ if (PF_SET_VF_MAC(nic_hwdev, mac_info.status)) {
nic_warn(nic_hwdev->dev_hdl, "PF has already set VF mac, Ignore delete operation\n");
return HINIC_PF_SET_VF_ALREADY;
}
@@ -343,17 +357,14 @@ int hinic_update_mac(void *hwdev, u8 *old_mac, u8 *new_mac, u16 vlan_id,
&mac_info, sizeof(mac_info),
&mac_info, &out_size);
if (err || !out_size ||
- (mac_info.status && mac_info.status != HINIC_MGMT_STATUS_EXIST &&
- mac_info.status != HINIC_PF_SET_VF_ALREADY) ||
- (mac_info.vlan_id & CHECK_IPSU_15BIT &&
- mac_info.status == HINIC_MGMT_STATUS_EXIST)) {
+ hinic_check_mac_status(hwdev, mac_info.status, mac_info.vlan_id)) {
nic_err(nic_hwdev->dev_hdl,
"Failed to update MAC, err: %d, status: 0x%x, out size: 0x%x\n",
err, mac_info.status, out_size);
- return -EINVAL;
+ return -EIO;
}
- if (mac_info.status == HINIC_PF_SET_VF_ALREADY) {
+ if (PF_SET_VF_MAC(nic_hwdev, mac_info.status)) {
nic_warn(nic_hwdev->dev_hdl, "PF has already set VF MAC. Ignore update operation\n");
return HINIC_PF_SET_VF_ALREADY;
}
--
2.25.1
1
5

[PATCH 01/20] nvme-rdma: remove redundant reference between ib_device and tagset
by Yang Yingliang 22 Oct '20
22 Oct '20
From: Max Gurtovoy <maxg(a)mellanox.com>
mainline inclusion
from mainline-v5.2-rc1
commit 87fd125344d68adf7699ec7396aa9f905ce79a80
category: bugfix
bugzilla: NA
CVE: NA
Link: https://gitee.com/openeuler/kernel/issues/I1WGZE
-------------------------------------------------
In the past, before adding f41725bb ("nvme-rdma: Use mr pool") commit,
we needed a reference on the ib_device as long as the tagset
was alive, as the MRs in the request structures needed a valid ib_device.
Now, we allocate/deallocate MR pool per QP and consume on demand.
Also remove the nvme_rdma_free_tagset function and use blk_mq_free_tag_set
instead, as it is no longer needed.
This commit also fixes a memory leak and a possible segmentation fault.
When configuring the system with NIC teaming (aka bonding), we use one
network interface to create an HA connection to the target side. In case
one connection breaks down, the nvme-rdma driver will get a notification from
the rdma-cm layer that the underlying address has changed and will start the
error recovery process. During this process, we'll reconnect to the target
via the second interface in the bond without destroying the tagset.
This will cause a leak of the initial rdma device (ndev) and a miscount
in the reference count of the newly created rdma device (new ndev). In
the final destruction (or in another error flow), we'll get a warning
dump from ib_dealloc_pd that we still have inflight MRs related to
that pd. This happens because of the miscount of the reference count of
the rdma device, causing access violations to its elements (some
queues are not destroyed yet).
Signed-off-by: Max Gurtovoy <maxg(a)mellanox.com>
Signed-off-by: Israel Rukshin <israelr(a)mellanox.com>
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Chao Leng <lengchao(a)huawei.com>
Reviewed-by: Jike Cheng <chengjike.cheng(a)huawei.com>
Conflicts:
drivers/nvme/host/rdma.c
[lrz: adjust context]
Signed-off-by: Ruozhu Li <liruozhu(a)huawei.com>
Signed-off-by: Lijie <lijie34(a)huawei.com>
Reviewed-by: Tao Hou <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/nvme/host/rdma.c | 34 +++++-----------------------------
1 file changed, 5 insertions(+), 29 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 077c67816665..0bfc1875575d 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -666,15 +666,6 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
return ret;
}
-static void nvme_rdma_free_tagset(struct nvme_ctrl *nctrl,
- struct blk_mq_tag_set *set)
-{
- struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
-
- blk_mq_free_tag_set(set);
- nvme_rdma_dev_put(ctrl->device);
-}
-
static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
bool admin)
{
@@ -712,24 +703,9 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
ret = blk_mq_alloc_tag_set(set);
if (ret)
- goto out;
-
- /*
- * We need a reference on the device as long as the tag_set is alive,
- * as the MRs in the request structures need a valid ib_device.
- */
- ret = nvme_rdma_dev_get(ctrl->device);
- if (!ret) {
- ret = -EINVAL;
- goto out_free_tagset;
- }
+ return ERR_PTR(ret);
return set;
-
-out_free_tagset:
- blk_mq_free_tag_set(set);
-out:
- return ERR_PTR(ret);
}
static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl,
@@ -737,7 +713,7 @@ static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl,
{
if (remove) {
blk_cleanup_queue(ctrl->ctrl.admin_q);
- nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
+ blk_mq_free_tag_set(ctrl->ctrl.admin_tagset);
}
if (ctrl->async_event_sqe.data) {
cancel_work_sync(&ctrl->ctrl.async_event_work);
@@ -815,7 +791,7 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
blk_cleanup_queue(ctrl->ctrl.admin_q);
out_free_tagset:
if (new)
- nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
+ blk_mq_free_tag_set(ctrl->ctrl.admin_tagset);
out_free_async_qe:
if (ctrl->async_event_sqe.data) {
nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe,
@@ -832,7 +808,7 @@ static void nvme_rdma_destroy_io_queues(struct nvme_rdma_ctrl *ctrl,
{
if (remove) {
blk_cleanup_queue(ctrl->ctrl.connect_q);
- nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.tagset);
+ blk_mq_free_tag_set(ctrl->ctrl.tagset);
}
nvme_rdma_free_io_queues(ctrl);
}
@@ -873,7 +849,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
blk_cleanup_queue(ctrl->ctrl.connect_q);
out_free_tag_set:
if (new)
- nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.tagset);
+ blk_mq_free_tag_set(ctrl->ctrl.tagset);
out_free_io_queues:
nvme_rdma_free_io_queues(ctrl);
return ret;
--
2.25.1
1
19

[Topic Collection][Meeting Notice] openEuler kernel sig meeting Time: 2020-10-16 10:00-12:00
by Xie XiuQi 20 Oct '20
20 Oct '20
The openEuler kernel sig meeting is scheduled for 2020-10-16 10:00-12:00;
topic proposals by replying to this email are welcome.
--- Meeting information ---
Meeting link: https://zoom.us/j/94156903933
Meeting ID: 94156903933
--- Meeting minutes archive ---
Meeting Record 20200918: https://gitee.com/openeuler/kernel/issues/I1WGN0
1
3
Anant Thazhemadam (1):
staging: comedi: check validity of wMaxPacketSize of usb endpoints
found
Arnaud Patard (1):
drivers/net/ethernet/marvell/mvmdio.c: Fix non OF case
Dmitry Golovin (1):
ARM: 8939/1: kbuild: use correct nm executable
Dominik Przychodni (1):
crypto: qat - check cipher length for aead AES-CBC-HMAC-SHA
Greg Kroah-Hartman (1):
Linux 4.19.152
Herbert Xu (1):
crypto: bcm - Verify GCM/CCM key length in setkey
Jan Kara (2):
reiserfs: Initialize inode keys properly
reiserfs: Fix oops during mount
Jason A. Donenfeld (1):
ARM: 8867/1: vdso: pass --be8 to linker if necessary
Leo Yan (1):
perf cs-etm: Move definition of 'traceid_list' global variable from
header file
Leonid Bloch (1):
USB: serial: option: Add Telit FT980-KS composition
Luiz Augusto von Dentz (2):
Bluetooth: Consolidate encryption handling in hci_encrypt_cfm
Bluetooth: Disconnect if E0 is used for Level 4
Masahiro Yamada (1):
ARM: 8858/1: vdso: use $(LD) instead of $(CC) to link VDSO
Mychaela N. Falconia (1):
USB: serial: ftdi_sio: add support for FreeCalypso JTAG+UART adapters
Oliver Neukum (1):
media: usbtv: Fix refcounting mixup
Patrick Steinhardt (1):
Bluetooth: Fix update of connection state in `hci_encrypt_cfm`
Scott Chen (1):
USB: serial: pl2303: add device-id for HP GC device
Wilken Gottwalt (1):
USB: serial: option: add Cellient MPL200 card
Makefile | 2 +-
arch/arm/boot/compressed/Makefile | 4 +-
arch/arm/vdso/Makefile | 22 +++++------
drivers/crypto/bcm/cipher.c | 15 +++++++-
drivers/crypto/qat/qat_common/qat_algs.c | 10 ++++-
drivers/media/usb/usbtv/usbtv-core.c | 3 +-
drivers/net/ethernet/marvell/mvmdio.c | 22 ++++++++---
drivers/staging/comedi/drivers/vmk80xx.c | 3 ++
drivers/usb/serial/ftdi_sio.c | 5 +++
drivers/usb/serial/ftdi_sio_ids.h | 7 ++++
drivers/usb/serial/option.c | 5 +++
drivers/usb/serial/pl2303.c | 1 +
drivers/usb/serial/pl2303.h | 1 +
fs/reiserfs/inode.c | 6 +--
fs/reiserfs/xattr.c | 7 ++++
include/net/bluetooth/hci_core.h | 30 ++++++++++++---
net/bluetooth/hci_conn.c | 17 +++++++++
net/bluetooth/hci_event.c | 48 ++++++------------------
tools/perf/util/cs-etm.c | 3 ++
tools/perf/util/cs-etm.h | 3 --
20 files changed, 138 insertions(+), 76 deletions(-)
--
2.25.1
1
19
Anshuman Khandual (1):
mm/ioremap: probe platform for p4d huge map support
Christoph Hellwig (8):
mm: remove __get_vm_area
mm: unexport unmap_kernel_range_noflush
mm: only allow page table mappings for built-in zsmalloc
mm: pass addr as unsigned long to vb_free
mm: remove vmap_page_range_noflush and vunmap_page_range
mm: rename vmap_page_range to map_kernel_range
mm: don't return the number of pages from map_kernel_range{, _noflush}
mm: remove map_vm_range
Daniel Axtens (1):
mm/memory.c: add apply_to_existing_page_range() helper
Ingo Molnar (1):
mm/vmalloc: Add empty <asm/vmalloc.h> headers and use them from
<linux/vmalloc.h>
Mike Rapoport (1):
mm: move lib/ioremap.c to mm/
Nicholas Piggin (9):
mm/vmalloc: fix vmalloc_to_page for huge vmap mappings
mm: apply_to_pte_range warn and fail if a large pte is encountered
mm/vmalloc: rename vmap_*_range vmap_pages_*_range
mm/ioremap: rename ioremap_*_range to vmap_*_range
mm: HUGE_VMAP arch support cleanup
arm64: inline huge vmap supported functions
mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c
mm/vmalloc: add vmap_range_noflush variant
mm/vmalloc: Hugepage vmalloc mappings
Steven Price (2):
mm: add generic p?d_leaf() macros
arm64: mm: add p?d_leaf() definitions
Will Deacon (3):
ioremap: rework pXd_free_pYd_page() API
lib/ioremap: ensure phys_addr actually corresponds to a physical
address
lib/ioremap: ensure break-before-make is used for huge p4d mappings
Documentation/core-api/cachetlb.rst | 2 +-
arch/Kconfig | 4 +
arch/alpha/include/asm/vmalloc.h | 4 +
arch/arc/include/asm/vmalloc.h | 4 +
arch/arm/include/asm/vmalloc.h | 4 +
arch/arm64/include/asm/pgtable.h | 2 +
arch/arm64/include/asm/vmalloc.h | 29 ++
arch/arm64/mm/mmu.c | 16 -
arch/c6x/include/asm/vmalloc.h | 4 +
arch/csky/include/asm/vmalloc.h | 4 +
arch/h8300/include/asm/vmalloc.h | 4 +
arch/hexagon/include/asm/vmalloc.h | 4 +
arch/ia64/include/asm/vmalloc.h | 4 +
arch/m68k/include/asm/vmalloc.h | 4 +
arch/microblaze/include/asm/vmalloc.h | 4 +
arch/mips/include/asm/vmalloc.h | 4 +
arch/nds32/include/asm/vmalloc.h | 4 +
arch/nios2/include/asm/vmalloc.h | 4 +
arch/openrisc/include/asm/vmalloc.h | 4 +
arch/parisc/include/asm/vmalloc.h | 4 +
arch/powerpc/include/asm/vmalloc.h | 12 +
arch/powerpc/kernel/pci_64.c | 3 +-
arch/riscv/include/asm/vmalloc.h | 4 +
arch/s390/include/asm/vmalloc.h | 4 +
arch/sh/include/asm/vmalloc.h | 4 +
arch/sh/kernel/cpu/sh4/sq.c | 3 +-
arch/sparc/include/asm/vmalloc.h | 4 +
arch/um/include/asm/vmalloc.h | 4 +
arch/unicore32/include/asm/vmalloc.h | 4 +
arch/x86/include/asm/vmalloc.h | 12 +
arch/x86/mm/ioremap.c | 11 +-
arch/x86/mm/pgtable.c | 8 +
arch/xtensa/include/asm/vmalloc.h | 4 +
include/asm-generic/pgtable.h | 25 ++
include/linux/io.h | 8 -
include/linux/mm.h | 3 +
include/linux/vmalloc.h | 24 +-
init/main.c | 1 -
kernel/dma/mapping.c | 3 +-
lib/Makefile | 1 -
lib/ioremap.c | 183 ---------
mm/Kconfig | 2 +-
mm/Makefile | 2 +-
mm/ioremap.c | 34 ++
mm/memory.c | 152 ++++++--
mm/vmalloc.c | 518 +++++++++++++++++++-------
mm/zsmalloc.c | 4 +-
47 files changed, 752 insertions(+), 398 deletions(-)
create mode 100644 arch/alpha/include/asm/vmalloc.h
create mode 100644 arch/arc/include/asm/vmalloc.h
create mode 100644 arch/arm/include/asm/vmalloc.h
create mode 100644 arch/arm64/include/asm/vmalloc.h
create mode 100644 arch/c6x/include/asm/vmalloc.h
create mode 100644 arch/csky/include/asm/vmalloc.h
create mode 100644 arch/h8300/include/asm/vmalloc.h
create mode 100644 arch/hexagon/include/asm/vmalloc.h
create mode 100644 arch/ia64/include/asm/vmalloc.h
create mode 100644 arch/m68k/include/asm/vmalloc.h
create mode 100644 arch/microblaze/include/asm/vmalloc.h
create mode 100644 arch/mips/include/asm/vmalloc.h
create mode 100644 arch/nds32/include/asm/vmalloc.h
create mode 100644 arch/nios2/include/asm/vmalloc.h
create mode 100644 arch/openrisc/include/asm/vmalloc.h
create mode 100644 arch/parisc/include/asm/vmalloc.h
create mode 100644 arch/powerpc/include/asm/vmalloc.h
create mode 100644 arch/riscv/include/asm/vmalloc.h
create mode 100644 arch/s390/include/asm/vmalloc.h
create mode 100644 arch/sh/include/asm/vmalloc.h
create mode 100644 arch/sparc/include/asm/vmalloc.h
create mode 100644 arch/um/include/asm/vmalloc.h
create mode 100644 arch/unicore32/include/asm/vmalloc.h
create mode 100644 arch/x86/include/asm/vmalloc.h
create mode 100644 arch/xtensa/include/asm/vmalloc.h
delete mode 100644 lib/ioremap.c
create mode 100644 mm/ioremap.c
--
2.25.1
1
26
Luiz Augusto von Dentz (4):
Bluetooth: A2MP: Fix not initializing all members
Bluetooth: L2CAP: Fix calling sk_filter on non-socket based channel
Bluetooth: Disable High Speed by default
Bluetooth: MGMT: Fix not checking if BT_HS is enabled
include/net/bluetooth/l2cap.h | 2 ++
net/bluetooth/Kconfig | 1 -
net/bluetooth/a2mp.c | 22 +++++++++++++++++++++-
net/bluetooth/l2cap_core.c | 7 ++++---
net/bluetooth/l2cap_sock.c | 14 ++++++++++++++
net/bluetooth/mgmt.c | 7 ++++++-
6 files changed, 47 insertions(+), 6 deletions(-)
--
2.25.1
1
4
Aaron Ma (1):
platform/x86: thinkpad_acpi: re-initialize ACPI buffer size when reuse
Anant Thazhemadam (3):
net: wireless: nl80211: fix out-of-bounds access in nl80211_del_key()
net: team: fix memory leak in __team_options_register
net: usb: rtl8150: set random MAC address when set_ethernet_addr()
fails
Antony Antony (4):
xfrm: clone XFRMA_SET_MARK in xfrm_do_migrate
xfrm: clone XFRMA_REPLAY_ESN_VAL in xfrm_do_migrate
xfrm: clone XFRMA_SEC_CTX in xfrm_do_migrate
xfrm: clone whole liftime_cur structure in xfrm_do_migrate
Aya Levin (2):
net/mlx5e: Fix VLAN cleanup flow
net/mlx5e: Fix VLAN create flow
Chaitanya Kulkarni (1):
nvme-core: put ctrl ref when module ref get fail
Coly Li (1):
mmc: core: don't set limits.discard_granularity as 0
Cristian Ciocaltea (1):
i2c: owl: Clear NACK and BUS error bits
David Howells (3):
rxrpc: Downgrade the BUG() for unsupported token type in rxrpc_read()
rxrpc: Fix some missing _bh annotations on locking conn->state_lock
rxrpc: Fix server keyring leak
Dinh Nguyen (1):
arm64: dts: stratix10: add status to qspi dts node
Dumitru Ceara (1):
openvswitch: handle DNAT tuple collision
Eric Dumazet (4):
macsec: avoid use-after-free in macsec_handle_frame()
sctp: fix sctp_auth_init_hmacs() error path
team: set dev->needed_headroom in team_setup_by_port()
bonding: set dev->needed_headroom in bond_setup_by_slave()
Geert Uytterhoeven (1):
Revert "ravb: Fixed to be able to unload modules"
Greg Kroah-Hartman (1):
Linux 4.19.151
Hans de Goede (2):
platform/x86: intel-vbtn: Fix SW_TABLET_MODE always reporting 1 on the
HP Pavilion 11 x360
platform/x86: intel-vbtn: Switch to an allow-list for SW_TABLET_MODE
reporting
Herbert Xu (1):
xfrm: Use correct address family in xfrm_state_find
Hugh Dickins (1):
mm/khugepaged: fix filemap page_to_pgoff(page) != offset
Jean Delvare (1):
i2c: i801: Exclude device from suspend direct complete optimization
Jerome Brunet (1):
i2c: meson: fix clock setting overwrite
Kajol Jain (1):
perf: Fix task_function_call() error handling
Karol Herbst (1):
drm/nouveau/mem: guard against NULL pointer access in mem_del
Linus Torvalds (1):
usermodehelper: reset umask to default before executing user process
Marc Dionne (1):
rxrpc: Fix rxkad token xdr encoding
Miquel Raynal (1):
mtd: rawnand: sunxi: Fix the probe error path
Necip Fazil Yildiran (1):
platform/x86: fix kconfig dependency warning for FUJITSU_LAPTOP
Nicolas Belin (1):
i2c: meson: fixup rate calculation with filter delay
Peilin Ye (3):
fbdev, newport_con: Move FONT_EXTRA_WORDS macros into linux/font.h
Fonts: Support FONT_EXTRA_WORDS macros for built-in fonts
fbcon: Fix global-out-of-bounds read in fbcon_get_font()
Philip Yang (1):
drm/amdgpu: prevent double kfree ttm->sg
Randy Dunlap (1):
mdio: fix mdio-thunder.c dependency & build error
Sabrina Dubroca (1):
xfrmi: drop ignore_df check before updating pmtu
Tetsuo Handa (1):
driver core: Fix probe_count imbalance in really_probe()
Tom Rix (1):
platform/x86: thinkpad_acpi: initialize tp_nvram_state variable
Vijay Balakrishna (1):
mm: khugepaged: recalculate min_free_kbytes after memory hotplug as
expected by khugepaged
Vladimir Zapolskiy (1):
cifs: Fix incomplete memory allocation on setxattr path
Voon Weifeng (1):
net: stmmac: removed enabling eee in EEE set callback
Wilken Gottwalt (1):
net: usb: ax88179_178a: fix missing stop entry in driver_info
Makefile | 2 +-
.../dts/altera/socfpga_stratix10_socdk.dts | 1 +
drivers/base/dd.c | 5 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 1 +
drivers/gpu/drm/nouveau/nouveau_mem.c | 2 +
drivers/i2c/busses/i2c-i801.c | 1 +
drivers/i2c/busses/i2c-meson.c | 42 +++++--
drivers/i2c/busses/i2c-owl.c | 6 +
drivers/mmc/core/queue.c | 2 +-
drivers/mtd/nand/raw/sunxi_nand.c | 2 +-
drivers/net/bonding/bond_main.c | 1 +
.../net/ethernet/mellanox/mlx5/core/en_fs.c | 14 ++-
drivers/net/ethernet/renesas/ravb_main.c | 110 +++++++++---------
.../ethernet/stmicro/stmmac/stmmac_ethtool.c | 15 +--
drivers/net/macsec.c | 4 +-
drivers/net/phy/Kconfig | 1 +
drivers/net/team/team.c | 3 +-
drivers/net/usb/ax88179_178a.c | 1 +
drivers/net/usb/rtl8150.c | 16 ++-
drivers/nvme/host/core.c | 4 +-
drivers/platform/x86/Kconfig | 1 +
drivers/platform/x86/intel-vbtn.c | 64 +++++++---
drivers/platform/x86/thinkpad_acpi.c | 6 +-
drivers/video/console/newport_con.c | 7 +-
drivers/video/fbdev/core/fbcon.c | 12 ++
drivers/video/fbdev/core/fbcon.h | 7 --
drivers/video/fbdev/core/fbcon_rotate.c | 1 +
drivers/video/fbdev/core/tileblit.c | 1 +
fs/cifs/smb2ops.c | 2 +-
include/linux/font.h | 13 +++
include/linux/khugepaged.h | 5 +
include/net/xfrm.h | 16 +--
kernel/events/core.c | 5 +-
kernel/umh.c | 9 ++
lib/fonts/font_10x18.c | 9 +-
lib/fonts/font_6x10.c | 9 +-
lib/fonts/font_6x11.c | 9 +-
lib/fonts/font_7x14.c | 9 +-
lib/fonts/font_8x16.c | 9 +-
lib/fonts/font_8x8.c | 9 +-
lib/fonts/font_acorn_8x8.c | 9 +-
lib/fonts/font_mini_4x6.c | 8 +-
lib/fonts/font_pearl_8x8.c | 9 +-
lib/fonts/font_sun12x22.c | 9 +-
lib/fonts/font_sun8x16.c | 7 +-
mm/khugepaged.c | 25 +++-
mm/page_alloc.c | 3 +
net/openvswitch/conntrack.c | 22 ++--
net/rxrpc/conn_event.c | 6 +-
net/rxrpc/key.c | 18 ++-
net/sctp/auth.c | 1 +
net/wireless/nl80211.c | 3 +
net/xfrm/xfrm_interface.c | 2 +-
net/xfrm/xfrm_state.c | 42 ++++++-
54 files changed, 391 insertions(+), 209 deletions(-)
--
2.25.1
1
49

16 Oct '20
From: Jiri Olsa <jolsa(a)redhat.com>
mainline inclusion
from mainline-v5.10
commit f91072ed1b7283b13ca57fcfbece5a3b92726143
category: bugfix
bugzilla: NA
CVE: CVE-2020-14351
--------------------------------
There's a possible race in perf_mmap_close() when checking ring buffer's
mmap_count refcount value. The problem is that the mmap_count check is
not atomic because we call atomic_dec() and atomic_read() separately.
perf_mmap_close:
...
atomic_dec(&rb->mmap_count);
...
if (atomic_read(&rb->mmap_count))
goto out_put;
<ring buffer detach>
free_uid
out_put:
ring_buffer_put(rb); /* could be last */
The race can happen when we have two (or more) events sharing the same ring
buffer and they go through atomic_dec() and then both see 0 as the refcount
value later in atomic_read(). Then both will go on and execute code which
is meant to be run just once.
The code that detaches the ring buffer is probably fine to be executed more
than once, but the problem is in calling free_uid(), which will later
manifest in related crashes and refcount warnings, like:
refcount_t: addition on 0; use-after-free.
...
RIP: 0010:refcount_warn_saturate+0x6d/0xf
...
Call Trace:
prepare_creds+0x190/0x1e0
copy_creds+0x35/0x172
copy_process+0x471/0x1a80
_do_fork+0x83/0x3a0
__do_sys_wait4+0x83/0x90
__do_sys_clone+0x85/0xa0
do_syscall_64+0x5b/0x1e0
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Using atomic decrease and check instead of separated calls.
Tested-by: Michael Petlan <mpetlan(a)redhat.com>
Signed-off-by: Jiri Olsa <jolsa(a)kernel.org>
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra(a)chello.nl>
Acked-by: Namhyung Kim <namhyung(a)kernel.org>
Acked-by: Wade Mealing <wmealing(a)redhat.com>
Fixes: 9bb5d40cd93c ("perf: Fix mmap() accounting hole");
Link: https://lore.kernel.org/r/20200916115311.GE2301783@krava
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Conflicts:
kernel/events/core.c
[yyl: adjust context]
Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/events/core.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 30a81d1b7b5c..452775f68129 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5496,11 +5496,11 @@ static void perf_pmu_output_stop(struct perf_event *event);
static void perf_mmap_close(struct vm_area_struct *vma)
{
struct perf_event *event = vma->vm_file->private_data;
-
struct ring_buffer *rb = ring_buffer_get(event);
struct user_struct *mmap_user = rb->mmap_user;
int mmap_locked = rb->mmap_locked;
unsigned long size = perf_data_size(rb);
+ bool detach_rest = false;
if (event->pmu->event_unmapped)
event->pmu->event_unmapped(event, vma->vm_mm);
@@ -5531,7 +5531,8 @@ static void perf_mmap_close(struct vm_area_struct *vma)
mutex_unlock(&event->mmap_mutex);
}
- atomic_dec(&rb->mmap_count);
+ if (atomic_dec_and_test(&rb->mmap_count))
+ detach_rest = true;
if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
goto out_put;
@@ -5540,7 +5541,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
mutex_unlock(&event->mmap_mutex);
/* If there's still other mmap()s of this buffer, we're done. */
- if (atomic_read(&rb->mmap_count))
+ if (!detach_rest)
goto out_put;
/*
--
2.25.1
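As a rough userspace illustration of the pattern (C11 atomics, not the kernel code): a separate decrement followed by a read lets both threads observe zero, while a combined decrement-and-test hands the "last reference" role to exactly one caller, which is what atomic_dec_and_test() provides:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int mmap_count = 2;	/* two events sharing one ring buffer */

/* Racy shape: both callers may observe 0 and run the teardown code. */
static bool put_racy(void)
{
	atomic_fetch_sub(&mmap_count, 1);
	return atomic_load(&mmap_count) == 0;
}

/* Fixed shape: fetch_sub returns the previous value, so only the caller
 * that moves the count from 1 to 0 sees true - one teardown, ever.
 */
static bool put_fixed(void)
{
	return atomic_fetch_sub(&mmap_count, 1) == 1;
}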
1
0

15 Oct '20
From: Mark Gray <mark.d.gray(a)redhat.com>
stable inclusion
from linux-4.19.148
commit c797110d97c48054d1491251fd713900ff51615c
CVE: CVE-2020-25645
--------------------------------
[ Upstream commit 34beb21594519ce64a55a498c2fe7d567bc1ca20 ]
This patch adds transport ports information for route lookup so that
IPsec can select Geneve tunnel traffic to do encryption. This is
needed for OVS/OVN IPsec with encrypted Geneve tunnels.
This can be tested by configuring a host-host VPN using an IKE
daemon and specifying port numbers. For example, for an
Openswan-type configuration, the following parameters should be
configured on both hosts and IPsec set up as-per normal:
$ cat /etc/ipsec.conf
conn in
...
left=$IP1
right=$IP2
...
leftprotoport=udp/6081
rightprotoport=udp
...
conn out
...
left=$IP1
right=$IP2
...
leftprotoport=udp
rightprotoport=udp/6081
...
The tunnel can then be set up using "ip" on both hosts (but
changing the relevant IP addresses):
$ ip link add tun type geneve id 1000 remote $IP2
$ ip addr add 192.168.0.1/24 dev tun
$ ip link set tun up
This can then be tested by pinging from $IP1:
$ ping 192.168.0.2
Without this patch the traffic is unencrypted on the wire.
Fixes: 2d07dc79fe04 ("geneve: add initial netdev driver for GENEVE tunnels")
Signed-off-by: Qiuyu Xiao <qiuyu.xiao.qyx(a)gmail.com>
Signed-off-by: Mark Gray <mark.d.gray(a)redhat.com>
Reviewed-by: Greg Rose <gvrose8192(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/geneve.c | 37 +++++++++++++++++++++++++++----------
1 file changed, 27 insertions(+), 10 deletions(-)
diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
index 1506e913cebb..fdb251e1f471 100644
--- a/drivers/net/geneve.c
+++ b/drivers/net/geneve.c
@@ -721,7 +721,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
struct net_device *dev,
struct geneve_sock *gs4,
struct flowi4 *fl4,
- const struct ip_tunnel_info *info)
+ const struct ip_tunnel_info *info,
+ __be16 dport, __be16 sport)
{
bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
struct geneve_dev *geneve = netdev_priv(dev);
@@ -737,6 +738,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
fl4->flowi4_proto = IPPROTO_UDP;
fl4->daddr = info->key.u.ipv4.dst;
fl4->saddr = info->key.u.ipv4.src;
+ fl4->fl4_dport = dport;
+ fl4->fl4_sport = sport;
tos = info->key.tos;
if ((tos == 1) && !geneve->collect_md) {
@@ -771,7 +774,8 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
struct net_device *dev,
struct geneve_sock *gs6,
struct flowi6 *fl6,
- const struct ip_tunnel_info *info)
+ const struct ip_tunnel_info *info,
+ __be16 dport, __be16 sport)
{
bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
struct geneve_dev *geneve = netdev_priv(dev);
@@ -787,6 +791,9 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
fl6->flowi6_proto = IPPROTO_UDP;
fl6->daddr = info->key.u.ipv6.dst;
fl6->saddr = info->key.u.ipv6.src;
+ fl6->fl6_dport = dport;
+ fl6->fl6_sport = sport;
+
prio = info->key.tos;
if ((prio == 1) && !geneve->collect_md) {
prio = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
@@ -833,14 +840,15 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
__be16 df;
int err;
- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
+ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+ geneve->info.key.tp_dst, sport);
if (IS_ERR(rt))
return PTR_ERR(rt);
skb_tunnel_check_pmtu(skb, &rt->dst,
GENEVE_IPV4_HLEN + info->options_len);
- sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
if (geneve->collect_md) {
tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
ttl = key->ttl;
@@ -875,13 +883,14 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
__be16 sport;
int err;
- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
+ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+ geneve->info.key.tp_dst, sport);
if (IS_ERR(dst))
return PTR_ERR(dst);
skb_tunnel_check_pmtu(skb, dst, GENEVE_IPV6_HLEN + info->options_len);
- sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
if (geneve->collect_md) {
prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
ttl = key->ttl;
@@ -957,13 +966,18 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
{
struct ip_tunnel_info *info = skb_tunnel_info(skb);
struct geneve_dev *geneve = netdev_priv(dev);
+ __be16 sport;
if (ip_tunnel_info_af(info) == AF_INET) {
struct rtable *rt;
struct flowi4 fl4;
+
struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
+ sport = udp_flow_src_port(geneve->net, skb,
+ 1, USHRT_MAX, true);
- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
+ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+ geneve->info.key.tp_dst, sport);
if (IS_ERR(rt))
return PTR_ERR(rt);
@@ -973,9 +987,13 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
} else if (ip_tunnel_info_af(info) == AF_INET6) {
struct dst_entry *dst;
struct flowi6 fl6;
+
struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
+ sport = udp_flow_src_port(geneve->net, skb,
+ 1, USHRT_MAX, true);
- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
+ dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+ geneve->info.key.tp_dst, sport);
if (IS_ERR(dst))
return PTR_ERR(dst);
@@ -986,8 +1004,7 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
return -EINVAL;
}
- info->key.tp_src = udp_flow_src_port(geneve->net, skb,
- 1, USHRT_MAX, true);
+ info->key.tp_src = sport;
info->key.tp_dst = geneve->info.key.tp_dst;
return 0;
}
--
2.25.1
Al Grant (1):
perf tools: Correct SNOOPX field offset
Al Viro (2):
do_epoll_ctl(): clean the failure exits up a bit
fix regression in "epoll: Keep a reference on files added to the check
list"
Alex Williamson (1):
vfio/type1: Add proper error unwind for vfio_iommu_replay()
Alvin Šipraga (1):
macvlan: validate setting of multiple remote source MAC addresses
Bodo Stroesser (1):
scsi: target: tcmu: Fix crash in tcmu_flush_dcache_range on ARM
Brian Foster (3):
xfs: acquire superblock freeze protection on eofblocks scans
xfs: reset buffer write failure state on successful completion
xfs: fix duplicate verification from xfs_qm_dqflush()
Charan Teja Reddy (1):
mm, page_alloc: fix core hung in free_pcppages_bulk()
Chris Wilson (1):
locking/lockdep: Fix overflow in presentation of average lock-time
Chuck Lever (1):
NFS: Zero-stateid SETATTR should first return delegation
Daniel Borkmann (1):
uaccess: Add non-pagefault user-space write function
Darrick J. Wong (4):
xfs: fix partially uninitialized structure in xfs_reflink_remap_extent
xfs: fix inode quota reservation checks
xfs: fix xfs_bmap_validate_extent_raw when checking attr fork of rt
files
xfs: initialize the shortform attr header padding entry
Dave Chinner (1):
xfs: Don't allow logging of XFS_ISTALE inodes
David Milburn (2):
nvme-fc: cancel async events before freeing event struct
nvme-rdma: cancel async events before freeing event struct
Doug Berger (1):
mm: include CMA pages in lowmem_reserve at boot
Eiichi Tsukata (1):
xfs: Fix UBSAN null-ptr-deref in xfs_sysfs_init
Eric Biggers (1):
xfs: clear PF_MEMALLOC before exiting xfsaild thread
George Kennedy (1):
vt_ioctl: change VT_RESIZEX ioctl to check for error return from
vc_resize()
Gustav Wiklander (1):
spi: Fix memory leak on splited transfers
Heikki Krogerus (1):
device property: Fix the secondary firmware node handling in
set_primary_fwnode()
Helge Deller (1):
fs/signalfd.c: fix inconsistent return codes for signalfd4
Hou Pu (1):
scsi: target: iscsi: Fix hang in iscsit_access_np() when getting
tpg->np_login_sem
Hugh Dickins (2):
khugepaged: khugepaged_test_exit() check mmget_still_valid()
khugepaged: adjust VM_BUG_ON_MM() in __khugepaged_enter()
Jan Kara (5):
ext4: fix checking of directory entry validity for inline directories
ext4: don't BUG on inconsistent journal feature
writeback: Protect inode->i_io_list with inode->i_lock
writeback: Avoid skipping inode writeback
writeback: Fix sync livelock due to b_dirty_time processing
Jarkko Sakkinen (1):
tpm: Unify the mismatching TPM space buffer sizes
Jason Gunthorpe (1):
include/linux/log2.h: add missing () around n in roundup_pow_of_two()
Javed Hasan (1):
scsi: fcoe: Memory leak fix in fcoe_sysfs_fcf_del()
Jens Axboe (1):
block: ensure bdi->io_pages is always initialized
Jing Xiangfeng (1):
scsi: iscsi: Do not put host in iscsi_set_flashnode_param()
Li Heng (1):
efi: add missed destroy_workqueue when efisubsys_init fails
Lukas Czerner (3):
jbd2: make sure jh have b_transaction set in refile/unfile_buffer
ext4: handle read only external journal device
ext4: handle option set by mount flags correctly
Lukas Wunner (4):
spi: Prevent adding devices below an unregistering controller
serial: pl011: Fix oops on -EPROBE_DEFER
serial: pl011: Don't leak amba_ports entry on driver register error
serial: 8250: Avoid error message on reprobe
Luo Meng (1):
ext4: only set last error block when check system zone failed
Mao Wenan (1):
virtio_ring: Avoid loop when vq is broken in virtqueue_poll
Marc Zyngier (1):
epoll: Keep a reference on files added to the check list
Masami Hiramatsu (2):
perf probe: Fix memory leakage when the probe point is not found
uaccess: Add non-pagefault user-space read functions
Max Reitz (1):
xfs: Fix tail rounding in xfs_alloc_file_space()
Mikulas Patocka (2):
xfs: don't update mtime on COW faults
dm writecache: handle DAX to partitions on persistent memory correctly
Ming Lei (1):
blk-mq: order adding requests to hctx->dispatch and checking
SCHED_RESTART
Muchun Song (1):
kprobes: fix kill kprobe which has been marked as gone
Namhyung Kim (1):
perf jevents: Fix suspicious code in fixregex()
Peter Xu (1):
mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible
Peter Zijlstra (1):
cpuidle: Fixup IRQ state
Qiushi Wu (1):
PCI: Fix pci_create_slot() reference count leak
Rafael J. Wysocki (1):
PM: sleep: core: Fix the handling of pending runtime resume requests
Ralph Campbell (1):
mm/thp: fix __split_huge_pmd_locked() for migration PMD
Robin Murphy (1):
iommu/iova: Don't BUG on invalid PFNs
Selvin Xavier (1):
RDMA/bnxt_re: Do not add user qps to flushlist
Sergey Senozhatsky (1):
serial: 8250: change lock order in serial8250_do_startup()
Sunghyun Jin (1):
percpu: fix first chunk size calculation for populated bitmap
Tejun Heo (1):
libata: implement ATA_HORKAGE_MAX_TRIM_128M and apply to Sandisks
Tetsuo Handa (1):
vt: defer kfree() of vc_screenbuf in vc_do_resize()
Valmer Huhn (1):
serial: 8250_exar: Fix number of ports for Commtech PCIe cards
Varun Prakash (1):
scsi: target: iscsi: Fix data digest calculation
Wei Yongjun (1):
kernel/relay.c: fix memleak on destroy relay channel
Xianting Tian (1):
fs: prevent BUG_ON in submit_bh_wbc()
Xunlei Pang (1):
mm: memcg: fix memcg reclaim soft lockup
Yang Xu (1):
KEYS: reaching the keys quotas correctly
kaixuxia (1):
xfs: Fix deadlock between AGI and AGF with RENAME_WHITEOUT
zhangyi (F) (1):
jbd2: add the missing unlock_buffer() in the error path of
jbd2_write_superblock()
block/blk-core.c | 2 +
block/blk-mq-sched.c | 9 ++
block/blk-mq.c | 9 ++
drivers/ata/libata-core.c | 5 +-
drivers/ata/libata-scsi.c | 8 +-
drivers/base/core.c | 12 +-
drivers/base/power/main.c | 16 ++-
drivers/char/tpm/tpm-chip.c | 9 +-
drivers/char/tpm/tpm.h | 6 +-
drivers/char/tpm/tpm2-space.c | 26 ++--
drivers/char/tpm/tpmrm-dev.c | 2 +-
drivers/cpuidle/cpuidle.c | 3 +-
drivers/firmware/efi/efi.c | 2 +
drivers/infiniband/hw/bnxt_re/main.c | 3 +-
drivers/iommu/iova.c | 4 +-
drivers/md/dm-writecache.c | 12 +-
drivers/net/macvlan.c | 21 ++-
drivers/nvme/host/fc.c | 1 +
drivers/nvme/host/rdma.c | 1 +
drivers/pci/slot.c | 6 +-
drivers/scsi/fcoe/fcoe_ctlr.c | 2 +-
drivers/scsi/scsi_transport_iscsi.c | 2 +-
drivers/spi/Kconfig | 3 +
drivers/spi/spi.c | 30 +++-
drivers/target/iscsi/iscsi_target.c | 17 ++-
drivers/target/iscsi/iscsi_target_login.c | 6 +-
drivers/target/iscsi/iscsi_target_login.h | 3 +-
drivers/target/iscsi/iscsi_target_nego.c | 3 +-
drivers/target/target_core_user.c | 2 +-
drivers/tty/serial/8250/8250_core.c | 11 +-
drivers/tty/serial/8250/8250_exar.c | 24 +++-
drivers/tty/serial/8250/8250_port.c | 9 +-
drivers/tty/serial/amba-pl011.c | 16 ++-
drivers/tty/vt/vt.c | 5 +-
drivers/tty/vt/vt_ioctl.c | 12 +-
drivers/vfio/vfio_iommu_type1.c | 71 ++++++++-
drivers/virtio/virtio_ring.c | 3 +
fs/buffer.c | 9 ++
fs/eventpoll.c | 23 +--
fs/ext4/block_validity.c | 3 +-
fs/ext4/namei.c | 6 +-
fs/ext4/super.c | 147 ++++++++++++-------
fs/fs-writeback.c | 83 ++++++-----
fs/jbd2/journal.c | 4 +-
fs/jbd2/transaction.c | 10 ++
fs/nfs/nfs4proc.c | 4 +-
fs/signalfd.c | 10 +-
fs/xfs/libxfs/xfs_attr_leaf.c | 4 +-
fs/xfs/libxfs/xfs_bmap.c | 2 +-
fs/xfs/xfs_bmap_util.c | 4 +-
fs/xfs/xfs_buf.c | 8 +-
fs/xfs/xfs_dquot.c | 9 +-
fs/xfs/xfs_file.c | 12 +-
fs/xfs/xfs_icache.c | 13 +-
fs/xfs/xfs_inode.c | 110 ++++++++------
fs/xfs/xfs_ioctl.c | 5 +-
fs/xfs/xfs_reflink.c | 1 +
fs/xfs/xfs_sysfs.h | 6 +-
fs/xfs/xfs_trans_ail.c | 4 +-
fs/xfs/xfs_trans_dquot.c | 2 +-
fs/xfs/xfs_trans_inode.c | 2 +
include/linux/fs.h | 8 +-
include/linux/libata.h | 1 +
include/linux/log2.h | 2 +-
include/linux/uaccess.h | 26 ++++
include/trace/events/writeback.h | 13 +-
kernel/kprobes.c | 9 +-
kernel/locking/lockdep_proc.c | 2 +-
kernel/relay.c | 1 +
mm/huge_memory.c | 40 +++---
mm/hugetlb.c | 24 ++--
mm/khugepaged.c | 7 +-
mm/maccess.c | 167 ++++++++++++++++++++--
mm/page_alloc.c | 7 +-
mm/percpu.c | 2 +-
mm/vmscan.c | 8 ++
security/keys/key.c | 2 +-
security/keys/keyctl.c | 4 +-
tools/include/uapi/linux/perf_event.h | 2 +-
tools/perf/pmu-events/jevents.c | 2 +-
tools/perf/util/probe-finder.c | 2 +-
81 files changed, 864 insertions(+), 322 deletions(-)
--
2.25.1

15 Oct '20
From: Max Reitz <mreitz(a)redhat.com>
mainline inclusion
from mainline-v5.4-rc3
commit e093c4be760ebf46c131ae0dd6138865a22f46fa
category: bugfix
bugzilla: NA
CVE: NA
---------------------------
To ensure that all blocks touched by the range [offset, offset + count)
are allocated, we need to calculate the block count from the difference
of the range end (rounded up) and the range start (rounded down).
Before this patch, we just round up the byte count, which may lead to
unaligned ranges not being fully allocated:
$ touch test_file
$ block_size=$(stat -fc '%S' test_file)
$ fallocate -o $((block_size / 2)) -l $block_size test_file
$ xfs_bmap test_file
test_file:
0: [0..7]: 1396264..1396271
1: [8..15]: hole
There should not be a hole there. Instead, the first two blocks should
be fully allocated.
With this patch applied, the result is something like this:
$ touch test_file
$ block_size=$(stat -fc '%S' test_file)
$ fallocate -o $((block_size / 2)) -l $block_size test_file
$ xfs_bmap test_file
test_file:
0: [0..15]: 11024..11039
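As a worked example (a 4096-byte block size assumed, matching the bmap output above): offset = 2048 and count = 4096 touch bytes 2048..6143, i.e. filesystem blocks 0 and 1. The old code allocated XFS_B_TO_FSB(mp, 4096) = 1 block starting at startoffset_fsb = XFS_B_TO_FSBT(mp, 2048) = 0, which leaves block 1 as the hole shown in the first listing. The new code computes endoffset_fsb = XFS_B_TO_FSB(mp, 2048 + 4096) = 2, so allocatesize_fsb = 2 - 0 = 2 blocks, covering the whole touched range.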
Signed-off-by: Max Reitz <mreitz(a)redhat.com>
Reviewed-by: Carlos Maiolino <cmaiolino(a)redhat.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong(a)oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong(a)oracle.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/xfs/xfs_bmap_util.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
index 3e1dd66bd676..e54f5aed30a9 100644
--- a/fs/xfs/xfs_bmap_util.c
+++ b/fs/xfs/xfs_bmap_util.c
@@ -869,6 +869,7 @@ xfs_alloc_file_space(
xfs_filblks_t allocatesize_fsb;
xfs_extlen_t extsz, temp;
xfs_fileoff_t startoffset_fsb;
+ xfs_fileoff_t endoffset_fsb;
int nimaps;
int quota_flag;
int rt;
@@ -896,7 +897,8 @@ xfs_alloc_file_space(
imapp = &imaps[0];
nimaps = 1;
startoffset_fsb = XFS_B_TO_FSBT(mp, offset);
- allocatesize_fsb = XFS_B_TO_FSB(mp, count);
+ endoffset_fsb = XFS_B_TO_FSB(mp, offset + count);
+ allocatesize_fsb = endoffset_fsb - startoffset_fsb;
/*
* Allocate file space until done or until there is an error
--
2.25.1

[PATCH kernel-5.5] RISC-V: KVM: use __u64 to define variables instead of u64 in kvm.h
by l00484210 13 Oct '20
13 Oct '20
From: MingWang Li <limingwang(a)huawei.com>
euleros inclusion
category: bugfix
bugzilla: NA
CVE: NA
An error occurred while building QEMU on a RISC-V system:
unknown type name 'u64'.
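The general rule behind the change (standard kernel UAPI convention, not specific to this patch): headers under include/uapi/ are exported to and compiled by userspace, which only sees the double-underscore fixed-width types from <linux/types.h>; the plain u64/u32 typedefs exist only inside the kernel. A minimal userspace snippet showing the difference:
	#include <linux/types.h>

	__u64 timer_freq;	/* fine: provided to userspace by <linux/types.h> */
	/* u64 timer_freq; */	/* would fail to build: kernel-internal typedef  */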
Link: https://gitee.com/openeuler/kernel/issues/I1XUB0
Signed-off-by: MingWang Li <limingwang(a)huawei.com>
---
arch/riscv/include/uapi/asm/kvm.h | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index 65cd00654..c2b4ad6b1 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -75,10 +75,10 @@ struct kvm_riscv_csr {
/* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
struct kvm_riscv_timer {
- u64 frequency;
- u64 time;
- u64 compare;
- u64 state;
+ __u64 frequency;
+ __u64 time;
+ __u64 compare;
+ __u64 state;
};
/* Possible states for kvm_riscv_timer */
@@ -110,17 +110,17 @@ struct kvm_riscv_timer {
/* Timer registers are mapped as type 4 */
#define KVM_REG_RISCV_TIMER (0x04 << KVM_REG_RISCV_TYPE_SHIFT)
#define KVM_REG_RISCV_TIMER_REG(name) \
- (offsetof(struct kvm_riscv_timer, name) / sizeof(u64))
+ (offsetof(struct kvm_riscv_timer, name) / sizeof(__u64))
/* F extension registers are mapped as type 5 */
#define KVM_REG_RISCV_FP_F (0x05 << KVM_REG_RISCV_TYPE_SHIFT)
#define KVM_REG_RISCV_FP_F_REG(name) \
- (offsetof(struct __riscv_f_ext_state, name) / sizeof(u32))
+ (offsetof(struct __riscv_f_ext_state, name) / sizeof(__u32))
/* D extension registers are mapped as type 6 */
#define KVM_REG_RISCV_FP_D (0x06 << KVM_REG_RISCV_TYPE_SHIFT)
#define KVM_REG_RISCV_FP_D_REG(name) \
- (offsetof(struct __riscv_d_ext_state, name) / sizeof(u64))
+ (offsetof(struct __riscv_d_ext_state, name) / sizeof(__u64))
#endif
--
2.19.1

[PATCH 1/2] netfilter: nf_tables: incorrect enum nft_list_attributes definition
by Yang Yingliang 13 Oct '20
13 Oct '20
From: Pablo Neira Ayuso <pablo(a)netfilter.org>
stable inclusion
from linux-4.19.144
commit 3f21d1dd7cafb0230dc141e64ec5da622b3b1c46
--------------------------------
[ Upstream commit da9125df854ea48a6240c66e8a67be06e2c12c03 ]
This should be NFTA_LIST_UNSPEC instead of NFTA_LIST_UNPEC, all other
similar attribute definitions are postfixed with _UNSPEC.
Fixes: 96518518cc41 ("netfilter: add nftables")
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Aichun Li <liaichun(a)huawei.com>
Reviewed-by: wangxiaopeng <wangxiaopeng7(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/uapi/linux/netfilter/nf_tables.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
index 44c8ba1f3652..8586c816bea0 100644
--- a/include/uapi/linux/netfilter/nf_tables.h
+++ b/include/uapi/linux/netfilter/nf_tables.h
@@ -132,7 +132,7 @@ enum nf_tables_msg_types {
* @NFTA_LIST_ELEM: list element (NLA_NESTED)
*/
enum nft_list_attributes {
- NFTA_LIST_UNPEC,
+ NFTA_LIST_UNSPEC,
NFTA_LIST_ELEM,
__NFTA_LIST_MAX
};
--
2.25.1

[PATCH 1/6] blk-mq: insert passthrough request into hctx->dispatch directly
by Yang Yingliang 13 Oct '20
13 Oct '20
From: Ming Lei <ming.lei(a)redhat.com>
mainline inclusion
from mainline-5.6-rc4
commit 01e99aeca3979600302913cef3f89076786f32c8
category: bugfix
bugzilla: 42777
CVE: NA
---------------------------
For some reason, a device may be in a situation where it can't handle
FS requests, so STS_RESOURCE is always returned and the FS request
is added to hctx->dispatch. However, a passthrough request may be
required at that time to fix the problem. If the passthrough request
is added to the scheduler queue, blk-mq never gets a chance to
dispatch it, given that we prioritize requests in hctx->dispatch.
Then the FS IO request may never complete, and an IO hang is caused.
So the passthrough request has to be added to hctx->dispatch directly
to fix the IO hang.
Fix this issue by inserting passthrough requests into hctx->dispatch
directly, together with adding FS requests to the tail of
hctx->dispatch in blk_mq_dispatch_rq_list(). We actually add FS requests
to the tail of hctx->dispatch by default; see blk_mq_request_bypass_insert().
This makes the behaviour consistent with the original legacy IO request
path, in which passthrough requests are always added to q->queue_head.
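As a hedged illustration of where such a passthrough request can come from (an assumed example path with an existing request_queue 'q', not part of this patch): a SCSI ioctl-style submitter allocates a REQ_OP_SCSI_* request, which blk_rq_is_passthrough() recognizes and which, with this change, always lands in hctx->dispatch:
	struct request *rq;

	rq = blk_get_request(q, REQ_OP_SCSI_IN, 0);
	if (IS_ERR(rq))
		return PTR_ERR(rq);
	WARN_ON_ONCE(!blk_rq_is_passthrough(rq));
	blk_execute_rq(q, NULL, rq, 1);		/* insert at the head and wait */
	blk_put_request(rq);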
Cc: Dongli Zhang <dongli.zhang(a)oracle.com>
Cc: Christoph Hellwig <hch(a)infradead.org>
Cc: Ewan D. Milne <emilne(a)redhat.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Conflicts:
block/blk-flush.c
block/blk-mq.c
block/blk-mq-sched.c
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Reviewed-by: Yufen Yu <yuyufen(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-flush.c | 2 +-
block/blk-mq-sched.c | 22 +++++++++++++++-------
block/blk-mq.c | 18 +++++++++++-------
block/blk-mq.h | 3 ++-
4 files changed, 29 insertions(+), 16 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 256fa1ccc2bd..2a8369eb6c1c 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -495,7 +495,7 @@ void blk_insert_flush(struct request *rq)
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
if (q->mq_ops)
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, false, false);
else
list_add_tail(&rq->queuelist, &q->queue_head);
return;
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index e98f51b86342..623c258d0b34 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -357,13 +357,19 @@ static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
bool has_sched,
struct request *rq)
{
- /* dispatch flush rq directly */
- if (rq->rq_flags & RQF_FLUSH_SEQ) {
- spin_lock(&hctx->lock);
- list_add(&rq->queuelist, &hctx->dispatch);
- spin_unlock(&hctx->lock);
+ /*
+ * dispatch flush and passthrough rq directly
+ *
+ * passthrough request has to be added to hctx->dispatch directly.
+ * For some reason, device may be in one situation which can't
+ * handle FS request, so STS_RESOURCE is always returned and the
+ * FS request will be added to hctx->dispatch. However passthrough
+ * request may be required at that time for fixing the problem. If
+ * passthrough request is added to scheduler queue, there isn't any
+ * chance to dispatch it given we prioritize requests in hctx->dispatch.
+ */
+ if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
return true;
- }
if (has_sched)
rq->rq_flags |= RQF_SORTED;
@@ -387,8 +393,10 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head,
WARN_ON(e && (rq->tag != -1));
- if (blk_mq_sched_bypass_insert(hctx, !!e, rq))
+ if (blk_mq_sched_bypass_insert(hctx, !!e, rq)) {
+ blk_mq_request_bypass_insert(rq, at_head, false);
goto run;
+ }
if (e && e->type->ops.mq.insert_requests) {
LIST_HEAD(list);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 59c92961cc20..a9138235b870 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -721,7 +721,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
* merge.
*/
if (rq->rq_flags & RQF_DONTPREP)
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, false, false);
else
blk_mq_sched_insert_request(rq, true, false, false);
}
@@ -1227,7 +1227,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
bool needs_restart;
spin_lock(&hctx->lock);
- list_splice_init(list, &hctx->dispatch);
+ list_splice_tail_init(list, &hctx->dispatch);
spin_unlock(&hctx->lock);
/*
@@ -1590,13 +1590,17 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
* Should only be used carefully, when the caller knows we want to
* bypass a potential IO scheduler on the target device.
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool run_queue)
+void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
+ bool run_queue)
{
struct blk_mq_ctx *ctx = rq->mq_ctx;
struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu);
spin_lock(&hctx->lock);
- list_add_tail(&rq->queuelist, &hctx->dispatch);
+ if (at_head)
+ list_add(&rq->queuelist, &hctx->dispatch);
+ else
+ list_add_tail(&rq->queuelist, &hctx->dispatch);
spin_unlock(&hctx->lock);
if (run_queue)
@@ -1776,7 +1780,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
if (bypass_insert)
return BLK_STS_RESOURCE;
- blk_mq_request_bypass_insert(rq, run_queue);
+ blk_mq_request_bypass_insert(rq, false, run_queue);
return BLK_STS_OK;
}
@@ -1792,7 +1796,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
- blk_mq_request_bypass_insert(rq, true);
+ blk_mq_request_bypass_insert(rq, false, true);
else if (ret != BLK_STS_OK)
blk_mq_end_request(rq, ret);
@@ -1827,7 +1831,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
if (ret != BLK_STS_OK) {
if (ret == BLK_STS_RESOURCE ||
ret == BLK_STS_DEV_RESOURCE) {
- blk_mq_request_bypass_insert(rq,
+ blk_mq_request_bypass_insert(rq, false,
list_empty(list));
break;
}
diff --git a/block/blk-mq.h b/block/blk-mq.h
index a6094c27b827..debc646e1bed 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -64,7 +64,8 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
*/
void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
bool at_head);
-void blk_mq_request_bypass_insert(struct request *rq, bool run_queue);
+void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
+ bool run_queue);
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
struct list_head *list);
--
2.25.1

13 Oct '20
From: Fang Lijun <fanglijun3(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
register_persistent_clock() will be called after kernel init,
so it cannot be defined as __init.
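For context (a general kernel rule, not code from this patch; the function names below are made up for illustration): __init functions are placed in .init.text, which is freed after boot via free_initmem(), so anything that can be called later must not carry the annotation:
	int __init boot_time_only(void)	/* discarded once boot completes        */
	{
		return 0;
	}

	int callable_any_time(void)	/* stays resident after free_initmem() */
	{
		return 0;
	}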
Fixes: 76ab899d73d6 ("arm64/ascend: Implement the read_persistend_clock64 for aarch64")
Signed-off-by: Fang Lijun <fanglijun3(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/kernel/time.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c
index c06c3feb6772..902d0f0b4f7b 100644
--- a/arch/arm64/kernel/time.c
+++ b/arch/arm64/kernel/time.c
@@ -77,7 +77,7 @@ void read_persistent_clock64(struct timespec64 *ts)
__read_persistent_clock(ts);
}
-int __init register_persistent_clock(clock_access_fn read_persistent)
+int register_persistent_clock(clock_access_fn read_persistent)
{
/* Only allow the clockaccess functions to be registered once */
if (__read_persistent_clock == dummy_clock_access) {
--
2.25.1
Al Viro (4):
epoll: do not insert into poll queues until all sanity checks are done
epoll: replace ->visited/visited_list with generation count
epoll: EPOLL_CTL_ADD: close the race in decision to take fast path
ep_create_wakeup_source(): dentry name can change under you...
Bartosz Golaszewski (1):
gpio: mockup: fix resource leak in error path
Bryan O'Donoghue (1):
USB: gadget: f_ncm: Fix NDP16 datagram validation
Chaitanya Kulkarni (1):
nvme-core: get/put ctrl and transport module in
nvme_dev_open/release()
Chris Packham (2):
spi: fsl-espi: Only process interrupts for expected events
pinctrl: mvebu: Fix i2c sda definition for 98DX3236
Dinh Nguyen (1):
clk: socfpga: stratix10: fix the divider for the emac_ptp_free_clk
Felix Fietkau (1):
mac80211: do not allow bigger VHT MPDUs than the hardware supports
Greg Kroah-Hartman (1):
Linux 4.19.150
Hans de Goede (1):
mmc: sdhci: Workaround broken command queuing on Intel GLK based IRBIS
models
James Smart (1):
nvme-fc: fail new connections to a deleted host or remote port
Jean Delvare (1):
drm/amdgpu: restore proper ref count in amdgpu_display_crtc_set_config
Jeffrey Mitchell (1):
nfs: Fix security label length not being reset
Jiri Kosina (1):
Input: i8042 - add nopnp quirk for Acer Aspire 5 A515
Laurent Dufour (2):
mm: replace memmap_context by meminit_context
mm: don't rely on system state to detect hot-plug operations
Lucy Yan (1):
net: dec: de2104x: Increase receive ring size for Tulip
Marek Szyprowski (1):
clk: samsung: exynos4: mark 'chipid' clock as CLK_IGNORE_UNUSED
Martin Cerveny (1):
drm/sun4i: mixer: Extend regmap max_register
Nicolas VINCENT (1):
i2c: cpm: Fix i2c_ram structure
Olympia Giannou (1):
rndis_host: increase sleep time in the query-response loop
Sebastien Boeuf (1):
net: virtio_vsock: Enhance connection semantics
Stefano Garzarella (3):
vsock/virtio: use RCU to avoid use-after-free on the_virtio_vsock
vsock/virtio: stop workers during the .remove()
vsock/virtio: add transport parameter to the
virtio_transport_reset_no_sock()
Steven Rostedt (VMware) (1):
ftrace: Move RCU is watching check after recursion check
Taiping Lai (1):
gpio: sprd: Clear interrupt when setting the type as edge
Thibaut Sautereau (1):
random32: Restore __latent_entropy attribute on net_rand_state
Vincent Huang (1):
Input: trackpoint - enable Synaptics trackpoints
Will McVicker (1):
netfilter: ctnetlink: add a range check for l3/l4 protonum
Xie He (3):
drivers/net/wan/hdlc_fr: Add needed_headroom for PVC devices
drivers/net/wan/lapbether: Make skb->protocol consistent with the
header
drivers/net/wan/hdlc: Set skb->protocol before transmitting
Yu Kuai (1):
iommu/exynos: add missing put_device() call in exynos_iommu_of_xlate()
dillon min (1):
gpio: tc35894: fix up tc35894 interrupt configuration
Makefile | 2 +-
arch/ia64/mm/init.c | 6 +-
drivers/base/node.c | 84 ++++---
drivers/clk/samsung/clk-exynos4.c | 4 +-
drivers/clk/socfpga/clk-s10.c | 2 +-
drivers/gpio/gpio-mockup.c | 2 +
drivers/gpio/gpio-sprd.c | 3 +
drivers/gpio/gpio-tc3589x.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 2 +-
drivers/gpu/drm/sun4i/sun8i_mixer.c | 2 +-
drivers/i2c/busses/i2c-cpm.c | 3 +
drivers/input/mouse/trackpoint.c | 2 +
drivers/input/serio/i8042-x86ia64io.h | 7 +
drivers/iommu/exynos-iommu.c | 8 +-
drivers/mmc/host/sdhci-pci-core.c | 3 +-
drivers/net/ethernet/dec/tulip/de2104x.c | 2 +-
drivers/net/usb/rndis_host.c | 2 +-
drivers/net/wan/hdlc_cisco.c | 1 +
drivers/net/wan/hdlc_fr.c | 6 +-
drivers/net/wan/hdlc_ppp.c | 1 +
drivers/net/wan/lapbether.c | 4 +-
drivers/nvme/host/core.c | 15 ++
drivers/nvme/host/fc.c | 6 +-
drivers/pinctrl/mvebu/pinctrl-armada-xp.c | 2 +-
drivers/spi/spi-fsl-espi.c | 5 +-
drivers/usb/gadget/function/f_ncm.c | 30 +--
drivers/vhost/vsock.c | 94 +++----
fs/eventpoll.c | 71 +++---
fs/nfs/dir.c | 3 +
include/linux/mm.h | 2 +-
include/linux/mmzone.h | 11 +-
include/linux/node.h | 11 +-
include/linux/virtio_vsock.h | 3 +-
kernel/trace/ftrace.c | 6 +-
lib/random32.c | 2 +-
mm/memory_hotplug.c | 5 +-
mm/page_alloc.c | 11 +-
net/mac80211/vht.c | 8 +-
net/netfilter/nf_conntrack_netlink.c | 2 +
net/vmw_vsock/virtio_transport.c | 265 +++++++++++++-------
net/vmw_vsock/virtio_transport_common.c | 13 +-
41 files changed, 416 insertions(+), 297 deletions(-)
--
2.25.1
We are pleased that the openEuler 20.09 kernel has arrived on schedule. This release contains many features; you are welcome to try them out through the openEuler release.
The openEuler-20.09 kernel tree is located at:
git@gitee.com:openeuler/kernel.git openEuler-20.09
The openEuler kernel project is located at:
https://gitee.com/openeuler/kernel
The openEuler 20.09 kernel is still based on the community Linux 4.19 LTS kernel and evolves forward on top of openEuler 1.0 LTS (tracking up to 4.19.140). Over roughly the past six months, many features were merged and enabled. They are introduced below in terms of architecture support, driver support, and kernel feature support.
1 Architecture support
1.1 Support for Hygon Dhyana CPUs
Peng Wu from Hygon contributed the Hygon CPU support patch set to openEuler. It adds ACPI, cpufreq, perf, KVM, NTB and CPU-identification support for Hygon CPUs, so openEuler now fully supports Hygon x86 CPUs.
1.2 Support for Ascend chips
This includes the SVM (shared virtual memory) feature, which lets the CPU and the AI CPU share virtual addresses, as well as hardware support for the chips.
1.3 Support for the Kunpeng PC platform
The Kunpeng PC CPU is based on the 8-core Kunpeng 920. The 20.09 kernel adds hibernation support for the PC (an architecture feature) as well as support for the I2C controller and more.
1.4 Supported ARM v8.x features
- ARMv8.0 CRC32 instructions
- ARMv8.0-SB Control of speculation
- Armv8.2 Cache clean to Point of Deep Persistence
- ARMv8.2-TTCNP, Translation table Common not private translations
- ARMv8.4-TLBI, TLB maintenance and TLB range instructions
- Armv8.5-RNG, Random number generator
- Armv8.5-BTI, Branch target identification
- Armv8.5-E0PD, Preventing EL0 access to halves of address map
- ARMv8.6 Data gathering hint
- ARMv8.6 Matrix multiply extension
1.5 Support for RISC-V
RISC-V KVM is supported. (openEuler 20.09 supports RISC-V via kernel 5.5.)
The kernel tree is located at:
git@gitee.com:openeuler/kernel.git kernel-5.5
2. New media and new driver support
2.1 ARM64 support for persistent memory
ARMv8.2 introduces new instructions supporting persistent memory, and the Kunpeng 920 processor fully supports this feature. By enabling the kernel's ACPI NFIT and libnvdimm config options, NVDIMM memory is fully supported on Kunpeng.
2.2 1822 NIC driver enhancements
- The 1822 NIC supports the x86 architecture
- The 1822 NIC supports 128 queues
2.3 3408/3416 RAID card workaround
The 3408iMR/3416iMR RAID cards have problems on ARM64 platforms (an ARM64 ecosystem issue): they are incompatible once the SMMU is enabled. The SMMU device bypass feature is used to work around the incompatibility.
2.4 Huawei iBMA driver
The iBMA drivers are a set of drivers for communication between the BMC and the in-band system. When the system crashes, they help save the runtime state at the time of the crash to assist in locating the cause of the failure.
2.5 hns3: update hns3 version to 1.9.38.8
2.6 TIPC: enable TIPC module by default
3. Storage/IO feature support
3.1 bcache stability improvements
The bcache maintainer Coly backported a large number of bcache bug fixes and features, greatly improving bcache stability and usability.
4. Performance optimization features
4.1 sched Task Steal
The upstream Task Steal feature was backported; by improving CPU utilization it raises performance of a MySQL mixed-workload scenario by 10%+.
4.2 REFCOUNT_FULL and CMPXCHG_LOOP() optimization
atomic_fetch_* operations are used to reduce the overhead of cmpxchg(), improving the performance of the refcount mechanism. This brings performance gains in scenarios such as file read/write, especially noticeable on ARM64 servers, with a 20%+ gain in an empty-file read/write benchmark (see the sketch below).
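As a hedged sketch of the idea (illustrative, not the exact kernel change; 'r->refs' assumed to be the atomic_t field of a refcount-like object): a read/compute/retry cmpxchg() loop is replaced by a single fetching atomic operation, with the saturation check applied to the returned old value afterwards:
	int old, new;

	/* before: read, compute, and retry until the cmpxchg() succeeds */
	old = atomic_read(&r->refs);
	do {
		new = old + 1;
	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, new));

	/* after: a single atomic RMW; 'old' is checked for saturation afterwards */
	old = atomic_fetch_add_relaxed(1, &r->refs);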
4.3 Linux's vmalloc Seeing "Large Performance Benefits"
vmalloc performance is optimized: the kernel's lookup in the vmalloc area improves from O(N) to roughly O(log(N)).
See: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.2-vmalloc-Perfo…
4.4 MPAM feature enhancements
System MPAM resources can now be discovered via ACPI.
4.5 percpu refcount for pagecache
The page cache lifetime is managed with a refcount. Under multi-core concurrent load, the atomic operations on the file cache read path become a bottleneck, so a percpu refcount mechanism is introduced to improve performance. In an nginx HTTP test, performance more than doubles.
4.6 ARM64 clear_page() performance optimization
Using 'stnp' instead of the 'dc zva' instruction improves clear_page() performance. On Kunpeng 920, a hugetlb test shows a 53% performance gain.
4.7 Scheduling: reduce the preemption latency of critical tasks to improve responsiveness
This optimizes the SCHED_IDLE policy. Previously, task placement only picked idle CPUs; now a CPU that is running a SCHED_IDLE task can also be chosen. Because a CPU running a SCHED_IDLE task is not actually idle, there is no wakeup latency, which improves responsiveness.
5. Security features
5.1 AppArmor support
AppArmor is simpler to use than SELinux, but earlier releases did not enable AppArmor support. The 20.09 release supports AppArmor; the default security policy is still SELinux, and passing security=apparmor on the kernel command line enables AppArmor.
5.2 IMA digest list enhancements
Compared with the upstream kernel's native IMA mechanism, the IMA digest list extension improves security, performance and usability, helping integrity protection land in production environments. It has three major advantages:
Strong security
The native IMA mechanism requires pre-generating and labelling file extended attributes in the live environment and uses those attributes as reference values when files are accessed, so the chain of trust is incomplete. The digest list extension keeps the file reference digests in kernel space; they are generated at build time and shipped in the released RPM packages as digest lists. When an RPM package is installed, the digest list is imported and its signature verified, ensuring the reference values come from the software vendor and establishing a complete chain of trust.
Impressive performance
In IMA measurement scenarios, the digest list extension preserves security while cutting unnecessary PCR extend operations: the performance loss with measurement enabled is below 5%, up to a 50% improvement over native IMA measurement. In IMA appraisal scenarios, the digest list extension moves signature verification to the boot stage, avoiding a verification on every file access, which improves runtime file-access performance by about 20% compared with native IMA appraisal.
Fast deployment, smooth upgrade
With native IMA, on first deployment or on every package update you must switch to fix mode, manually label the file extended attributes, and then reboot into enforce mode before installed programs can be accessed normally. The digest list extension works out of the box after installation and allows RPM packages to be installed or upgraded directly in enforce mode, with no reboot or manual labelling, minimizing user impact and making it suitable for fast, large-scale deployment in live environments.
6. Power management features
6.1 Support for the CPU idle TEO governor
TEO is a newly introduced CPU idle governor. It is better suited to tickless systems and can pick a more appropriate power-saving state for the CPU.
7 Debugging and tuning features
7.1 support livepatch without ftrace
8. Virtualization features
8.1 Two-level scheduling support
Two-level scheduling means the hypervisor's scheduler is aware of what application runs on a VM's vCPU, and the VM's scheduler is aware of physical CPU pressure at the hypervisor level. With awareness at both levels, the whole machine achieves the best workload performance.
8.2 Support for the PMU NMI watchdog
8.3 Support for SmartPolling (guest idle poll)
On ARM platforms, CPU idle supports smart polling to improve performance.
8.4 Support for vmtop, for monitoring VM performance metrics
8.5 Support for PV qspinlock on the ARM architecture, improving vCPU spinlock and other lock performance
---
openEuler Kernel SIG
From: George Wilkie <gwilkie(a)vyatta.att-mail.com>
[ Upstream commit 2f3f7d1fa0d1039b24a55d127ed190f196fc3e79 ]
If you configure a route with multiple labels, e.g.
ip route add 10.10.3.0/24 encap mpls 16/100 via 10.10.2.2 dev ens4
A warning is logged:
kernel: [ 130.561819] netlink: 'ip': attribute type 1 has an invalid
length.
This happens because mpls_iptunnel_policy has set the type of
MPLS_IPTUNNEL_DST to fixed size NLA_U32.
Change it to a minimum size.
nla_get_labels() does the remaining validation.
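For context (illustrative numbers, not from the patch): each MPLS label occupies one 4-byte label stack entry, so the two-label route above carries an 8-byte MPLS_IPTUNNEL_DST payload, which no longer matches the fixed 4-byte length implied by an NLA_U32 policy entry and so triggers the warning:
	/* "16/100" above => two label stack entries */
	struct mpls_shim_hdr dst[2];	/* sizeof(dst) == 8, not sizeof(u32) == 4 */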
Fixes: e3e4712ec096 ("mpls: ip tunnel support")
Signed-off-by: George Wilkie <gwilkie(a)vyatta.att-mail.com>
Reviewed-by: David Ahern <dsahern(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Aichun Li <liaichun(a)huawei.com>
Reviewed-by: wangxiaopeng <wangxiaopeng7(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/mpls/mpls_iptunnel.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/mpls/mpls_iptunnel.c b/net/mpls/mpls_iptunnel.c
index 8141eb10752f..b556acebdc86 100644
--- a/net/mpls/mpls_iptunnel.c
+++ b/net/mpls/mpls_iptunnel.c
@@ -28,7 +28,7 @@
#include "internal.h"
static const struct nla_policy mpls_iptunnel_policy[MPLS_IPTUNNEL_MAX + 1] = {
- [MPLS_IPTUNNEL_DST] = { .type = NLA_U32 },
+ [MPLS_IPTUNNEL_DST] = { .len = sizeof(u32) },
[MPLS_IPTUNNEL_TTL] = { .type = NLA_U8 },
};
--
2.25.1
From: Dan Carpenter <dan.carpenter(a)oracle.com>
stable inclusion
from linux-4.19.148
commit 45676c0bc28eff8f46455b28e2db80a77676488b
CVE: CVE-2020-25643
--------------------------------
[ Upstream commit 66d42ed8b25b64eb63111a2b8582c5afc8bf1105 ]
There are a couple of bugs here:
1) If opt[1] is zero, this results in an infinite loop. Any value
less than 2 is invalid.
2) It assumes that "len" is at least sizeof(valid_accm) or 6, which can
result in memory corruption.
In the case of LCP_OPTION_ACCM, we should check "opt[1]" instead
of "len" because, if "opt[1]" is less than sizeof(valid_accm), then
"nak_len" gets out of sync and can lead to memory corruption in the
next iterations through the loop. In the case of LCP_OPTION_MAGIC, the
only valid value for opt[1] is 6, but the code is trying to log invalid
data, so we should only discard the data when "len" is less than 6,
because that leads to a read overflow.
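A simplified sketch of the parsing pattern involved (illustrative, not the driver code verbatim): each option is a type/length record and the loop strides by the length byte, so a zero or one length makes no forward progress and a length larger than the remaining buffer walks past it unless both are rejected:
	u8 *opt;

	/* opt[0] = option type, opt[1] = total record length (must be >= 2) */
	for (opt = data; len; len -= opt[1], opt += opt[1]) {
		if (len < 2 || opt[1] < 2 || len < opt[1])
			goto err_out;	/* malformed option: drop and count rx_errors */
		/* ... handle opt[0] using opt[2..opt[1]-1] ... */
	}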
Reported-by: ChenNan Of Chaitin Security Research Lab <whutchennan(a)gmail.com>
Fixes: e022c2f07ae5 ("WAN: new synchronous PPP implementation for generic HDLC.")
Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Reviewed-by: Eric Dumazet <edumazet(a)google.com>
Reviewed-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/wan/hdlc_ppp.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/drivers/net/wan/hdlc_ppp.c b/drivers/net/wan/hdlc_ppp.c
index ab8b3cbbb205..85844f26547d 100644
--- a/drivers/net/wan/hdlc_ppp.c
+++ b/drivers/net/wan/hdlc_ppp.c
@@ -386,11 +386,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
}
for (opt = data; len; len -= opt[1], opt += opt[1]) {
- if (len < 2 || len < opt[1]) {
- dev->stats.rx_errors++;
- kfree(out);
- return; /* bad packet, drop silently */
- }
+ if (len < 2 || opt[1] < 2 || len < opt[1])
+ goto err_out;
if (pid == PID_LCP)
switch (opt[0]) {
@@ -398,6 +395,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
continue; /* MRU always OK and > 1500 bytes? */
case LCP_OPTION_ACCM: /* async control character map */
+ if (opt[1] < sizeof(valid_accm))
+ goto err_out;
if (!memcmp(opt, valid_accm,
sizeof(valid_accm)))
continue;
@@ -409,6 +408,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
}
break;
case LCP_OPTION_MAGIC:
+ if (len < 6)
+ goto err_out;
if (opt[1] != 6 || (!opt[2] && !opt[3] &&
!opt[4] && !opt[5]))
break; /* reject invalid magic number */
@@ -427,6 +428,11 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
ppp_cp_event(dev, pid, RCR_GOOD, CP_CONF_ACK, id, req_len, data);
kfree(out);
+ return;
+
+err_out:
+ dev->stats.rx_errors++;
+ kfree(out);
}
static int ppp_rx(struct sk_buff *skb)
--
2.25.1
Adrian Hunter (1):
perf kcore_copy: Fix module map when there are no modules loaded
Al Viro (1):
fix dget_parent() fastpath race
Alain Michaud (1):
Bluetooth: guard against controllers sending zero'd events
Alex Deucher (2):
drm/amdgpu/powerplay: fix AVFS handling with custom powerplay table
drm/amdgpu/powerplay/smu7: fix AVFS handling with custom powerplay
table
Alex Williamson (1):
vfio/pci: Clear error and request eventfd ctx after releasing
Alexander Duyck (1):
e1000: Do not perform reset in reset_task if we are already down
Alexandre Belloni (2):
rtc: sa1100: fix possible race condition
rtc: ds1374: fix possible race condition
Amelie Delaunay (2):
dmaengine: stm32-mdma: use vchan_terminate_vdesc() in .terminate_all
dmaengine: stm32-dma: use vchan_terminate_vdesc() in .terminate_all
Andreas Steinmetz (1):
ALSA: usb-audio: Fix case when USB MIDI interface has more than one
extra endpoint descriptor
Andy Lutomirski (1):
selftests/x86/syscall_nt: Clear weird flags after each test
Anshuman Khandual (1):
arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0
register
Anthony Iliopoulos (1):
nvme: explicitly update mpath disk capacity on revalidation
Aric Cyr (1):
drm/amd/display: dal_ddc_i2c_payloads_create can fail causing panic
Ayush Sawal (1):
crypto: chelsio - This fixes the kernel panic which occurs during a
libkcapi test
Balsundar P (1):
scsi: aacraid: fix illegal IO beyond last LBA
Bart Van Assche (3):
scsi: ufs: Make ufshcd_add_command_trace() easier to read
scsi: ufs: Fix a race condition in the tracing code
RDMA/rxe: Fix configuration of atomic queue pair attributes
Bob Peterson (1):
gfs2: clean up iopen glock mess in gfs2_create_inode
Boris Brezillon (1):
mtd: parser: cmdline: Support MTD names containing one or more colons
Bradley Bolen (1):
mmc: core: Fix size overflow for mmc partitions
Brian Foster (1):
xfs: fix attr leaf header freemap.size underflow
Chris Wilson (1):
dma-fence: Serialise signal enabling (dma_fence_enable_sw_signaling)
Christian Borntraeger (1):
s390/zcrypt: Fix ZCRYPT_PERDEV_REQCNT ioctl
Christophe JAILLET (4):
RDMA/iw_cgxb4: Fix an error handling path in 'c4iw_connect()'
perf cpumap: Fix snprintf overflow check
SUNRPC: Fix a potential buffer overflow in 'svc_print_xprts()'
scsi: aacraid: Fix error handling paths in aac_probe_one()
Chuck Lever (1):
svcrdma: Fix leak of transport addresses
Colin Ian King (2):
media: tda10071: fix unsigned sign extension overflow
USB: EHCI: ehci-mv: fix less than zero comparison of an unsigned int
Cong Wang (1):
atm: fix a memory leak of vcc->user_back
Dan Carpenter (1):
media: staging/imx: Missing assignment in
imx_media_capture_device_register()
Daniel Borkmann (1):
bpf: Fix clobbering of r2 in bpf_gen_ld_abs
Darrick J. Wong (3):
xfs: fix log reservation overflows when allocating large rt extents
xfs: don't ever return a stale pointer from __xfs_dir3_free_read
xfs: mark dir corrupt when lookup-by-hash fails
Dave Hansen (1):
x86/pkeys: Add check for pkey "overflow"
David Sterba (1):
btrfs: don't force read-only after error in drop snapshot
Dennis Li (1):
drm/amdkfd: fix a memory leak issue
Dinghao Liu (8):
drm/nouveau/debugfs: fix runtime pm imbalance on error
drm/nouveau: fix runtime pm imbalance on error
drm/nouveau/dispnv50: fix runtime pm imbalance on error
ASoC: img-i2s-out: Fix runtime PM imbalance on error
wlcore: fix runtime pm imbalance in wl1271_tx_work
wlcore: fix runtime pm imbalance in wlcore_regdomain_config
mtd: rawnand: omap_elm: Fix runtime PM imbalance on error
PCI: tegra: Fix runtime PM imbalance on error
Dinh Nguyen (1):
clk: stratix10: use do_div() for 64-bit calculation
Divya Indi (1):
tracing: Adding NULL checks for trace_array descriptor pointer
Dmitry Baryshkov (1):
regmap: fix page selection for noinc reads
Dmitry Bogdanov (1):
net: qed: RDMA personality shouldn't fail VF load
Dmitry Osipenko (2):
PM / devfreq: tegra30: Fix integer overflow on CPU's freq max out
dmaengine: tegra-apb: Prevent race conditions on channel's freeing
Don Brace (1):
scsi: hpsa: correct race condition in offload enabled
Doug Smythies (1):
tools/power/x86/intel_pstate_tracer: changes for python 3
compatibility
Douglas Anderson (1):
bdev: Reduce time holding bd_mutex in sync in blkdev_close()
Eric Dumazet (2):
net: silence data-races on sk_backlog.tail
mac802154: tx: fix use-after-free
Felix Fietkau (1):
mt76: clear skb pointers from rx aggregation reorder buffer during
cleanup
Fuqian Huang (1):
m68k: q40: Fix info-leak in rtc_ioctl
Gabriel Ravier (1):
tools: gpio-hammer: Avoid potential overflow in main
Gao Xiang (1):
mm, THP, swap: fix allocating cluster for swapfile by mistake
Greg Kroah-Hartman (1):
Linux 4.19.149
Gustavo Romero (1):
KVM: PPC: Book3S HV: Treat TM-related invalid form instructions on P9
like the valid ones
Hans de Goede (2):
ASoC: Intel: bytcr_rt5640: Add quirk for MPMAN Converter9 2-in-1
i2c: core: Call i2c_acpi_install_space_handler() before
i2c_acpi_register_devices()
Hillf Danton (1):
Bluetooth: prefetch channel before killing sock
Hou Tao (2):
mtd: cfi_cmdset_0002: don't free cfi->cfiq in error path of
cfi_amdstd_setup()
ubi: fastmap: Free unused fastmap anchor peb during detach
Howard Chung (1):
Bluetooth: L2CAP: handle l2cap config request during open state
Hui Wang (1):
ALSA: hda/realtek - Couldn't detect Mic if booting with headset
plugged
Ian Rogers (5):
perf parse-events: Fix 3 use after frees found with clang ASAN
perf mem2node: Avoid double free related to realloc
perf evsel: Fix 2 memory leaks
perf trace: Fix the selection for architectures to generate the errno
name tables
perf metricgroup: Free metric_events on error
Ilya Leoshkevich (1):
s390/init: add missing __init annotations
Israel Rukshin (2):
nvme: Fix controller creation races with teardown flow
nvmet-rdma: fix double free of rdma queue
Ivan Lazeev (1):
tpm_crb: fix fTPM on AMD Zen+ CPUs
Ivan Safonov (1):
staging:r8188eu: avoid skb_clone for amsdu to msdu conversion
Jaewon Kim (1):
mm/mmap.c: initialize align_offset explicitly for vm_unmapped_area
James Morse (1):
firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
James Smart (3):
scsi: lpfc: Fix kernel crash at lpfc_nvme_info_show during remote port
bounce
scsi: lpfc: Fix RQ buffer leakage when no IOCBs available
scsi: lpfc: Fix coverity errors in fmdi attribute handling
Jan Höppner (1):
s390/dasd: Fix zero write for FBA devices
Jason Gunthorpe (1):
RDMA/cm: Remove a race freeing timewait_info
Javed Hasan (2):
scsi: libfc: Handling of extra kref
scsi: libfc: Skip additional kref updating work event
Jeff Layton (2):
ceph: ensure we have a new cap before continuing in fill_inode
ceph: fix potential race in ceph_check_caps
Jia He (1):
mm: fix double page fault on arm64 if PTE_AF is cleared
Jin Yao (1):
perf parse-events: Use strcmp() to compare the PMU name
Jing Xiangfeng (1):
atm: eni: fix the missed pci_disable_device() for eni_init_one()
Jiri Olsa (1):
perf stat: Fix duration_time value for higher intervals
Jiri Slaby (3):
ata: define AC_ERR_OK
ata: make qc_prep return ata_completion_errors
ata: sata_mv, avoid trigerrable BUG_ON
Joakim Tjernlund (1):
ALSA: usb-audio: Add delay quirk for H570e USB headsets
Joe Perches (1):
kernel/sys.c: avoid copying possible padding bytes in copy_to_user
John Clements (1):
drm/amdgpu: increase atombios cmd timeout
John Garry (2):
bus: hisi_lpc: Fixup IO ports addresses to avoid use-after-free in
host removal
perf jevents: Fix leak of mapfile memory
John Meneghini (1):
nvme-multipath: do not reset on unknown status
Jonathan Bakker (3):
power: supply: max17040: Correct voltage reading
phy: samsung: s5pv210-usb2: Add delay after reset
tty: serial: samsung: Correct clock selection logic
Jordan Crouse (1):
drm/msm/a5xx: Always set an OPP supported hardware value
Josef Bacik (1):
tracing: Set kernel_stack's caller size properly
Josh Poimboeuf (1):
objtool: Fix noreturn detection for ignored functions
Kai-Heng Feng (1):
ALSA: hda/realtek: Enable front panel headset LED on Lenovo
ThinkStation P520
Kangjie Lu (1):
gma/gma500: fix a memory disclosure bug due to uninitialized bytes
Kevin Kou (1):
sctp: move trace_sctp_probe_path into sctp_outq_sack
Kirill A. Shutemov (1):
mm: avoid data corruption on CoW fault into PFN-mapped VMA
Krzysztof Kozlowski (1):
dt-bindings: sound: wm8994: Correct required supplies based on actual
implementaion
Kusanagi Kouichi (1):
debugfs: Fix !DEBUG_FS debugfs_create_automount
Lee Jones (1):
mfd: mfd-core: Protect against NULL call-back function pointer
Linus Lüssing (4):
batman-adv: bla: fix type misuse for backbone_gw hash indexing
batman-adv: mcast/TT: fix wrongly dropped or rerouted packets
batman-adv: mcast: fix duplicate mcast packets in BLA backbone from
mesh
batman-adv: mcast: fix duplicate mcast packets from BLA backbone to
mesh
Liu Jian (1):
ieee802154: fix one possible memleak in ca8210_dev_com_init
Liu Song (1):
ubifs: Fix out-of-bounds memory access caused by abnormal value of
node_len
Madhuparna Bhowmik (2):
drivers: char: tlclk.c: Avoid data race between init and interrupt
handler
rapidio: avoid data race between file operation callbacks and
mport_cdev_add().
Manish Mandlik (1):
Bluetooth: Fix refcount use-after-free issue
Marc Zyngier (1):
KVM: arm64: Assume write fault on S1PTW permission fault on
instruction fetch
Marco Elver (1):
seqlock: Require WRITE_ONCE surrounding raw_seqcount_barrier
Marek Szyprowski (1):
drm/vc4/vc4_hdmi: fill ASoC card owner
Martin Cerveny (1):
drm/sun4i: sun8i-csc: Secondary CSC register correction
Masami Hiramatsu (1):
kprobes: Fix to check probe enabled before disarm_kprobe_ftrace()
Matthias Fend (1):
dmaengine: zynqmp_dma: fix burst length configuration
Maxim Mikityanskiy (1):
Bluetooth: btrtl: Use kvmalloc for FW allocations
Maximilian Luz (1):
mwifiex: Increase AES key storage size to 256 bits
Mert Dirik (1):
ar5523: Add USB ID of SMCWUSBT-G2 wireless adapter
Miaohe Lin (1):
KVM: arm/arm64: vgic: Fix potential double free dist->spis in
__kvm_vgic_destroy()
Miaoqing Pan (2):
ath10k: fix array out-of-bounds access
ath10k: fix memory leak for tpc_stats_final
Mikel Rychliski (1):
PCI: Use ioremap(), not phys_to_virt() for platform ROM
Miklos Szeredi (1):
fuse: don't check refcount after stealing page
Mikulas Patocka (1):
arch/x86/lib/usercopy_64.c: fix __copy_user_flushcache() cache
writeback
Mohan Kumar (1):
ALSA: hda: Clear RIRB status before reading WP
Nathan Chancellor (2):
tracing: Use address-of operator on section symbols
mm/kmemleak.c: use address-of operator on section symbols
Nicholas Piggin (1):
powerpc/traps: Make unrecoverable NMIs die instead of panic
Nick Desaulniers (1):
lib/string.c: implement stpcpy
Nikhil Devshatwar (1):
media: ti-vpe: cal: Restrict DMA to avoid memory corruption
Niklas Söderlund (1):
thermal: rcar_thermal: Handle probe error gracefully
Nilesh Javali (1):
scsi: qedi: Fix termination timeouts in session logout
Oleh Kravchenko (1):
leds: mlxreg: Fix possible buffer overflow
Oliver O'Halloran (1):
powerpc/eeh: Only dump stack once if an MMIO loop is detected
Palmer Dabbelt (1):
RISC-V: Take text_mutex in ftrace_init_nop()
Pan Bian (3):
scsi: fnic: fix use after free
RDMA/qedr: Fix potential use after free
RDMA/i40iw: Fix potential use after free
Paolo Bonzini (1):
KVM: x86: fix incorrect comparison in trace event
Pavel Machek (1):
drm/msm: fix leaks if initialization fails
Pavel Shilovsky (1):
CIFS: Properly process SMB3 lease breaks
Peter Ujfalusi (1):
serial: 8250_omap: Fix sleeping function called from invalid context
during probe
Pratik Rajesh Sampat (1):
cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_work_fn
Qian Cai (4):
skbuff: fix a data race in skb_queue_len()
random: fix data races at timer_rand_state
mm/vmscan.c: fix data races using kswapd_classzone_idx
vfio/pci: fix memory leaks of eventfd ctx
Qu Wenruo (1):
btrfs: qgroup: fix data leak caused by race between writeback and
truncate
Rafael J. Wysocki (1):
ACPI: EC: Reference count query handlers under lock
Raviteja Narayanam (1):
serial: uartps: Wait for tx_empty in console setup
Rodrigo Siqueira (1):
drm/amd/display: Stop if retimer is not available
Russell King (1):
ASoC: kirkwood: fix IRQ error handling
Sagar Biradar (1):
scsi: aacraid: Disabling TM path and only processing IOP reset
Sagi Grimberg (1):
nvme: fix possible deadlock when I/O is blocked
Sakari Ailus (1):
media: smiapp: Fix error handling at NVM reading
Sascha Hauer (1):
ubi: Fix producing anchor PEBs
Satendra Singh Thakur (1):
dmaengine: mediatek: hsdma_probe: fixed a memory leak when
devm_request_irq fails
Sean Christopherson (1):
KVM: x86: Reset MMU context if guest toggles CR4.SMAP or CR4.PKE
Shreyas Joshi (1):
printk: handle blank console arguments passed in.
Sonny Sasaka (1):
Bluetooth: Handle Inquiry Cancel error after Inquiry Complete
Stefan Berger (1):
tpm: ibmvtpm: Wait for buffer to be set before proceeding
Stephen Kitt (1):
clk/ti/adpll: allocate room for terminating null
Steve Grubb (1):
audit: CONFIG_CHANGE don't log internal bookkeeping as an event
Steve Rutherford (1):
KVM: Remove CREATE_IRQCHIP/SET_PIT2 race
Stuart Hayes (1):
PCI: pciehp: Fix MSI interrupt race
Sven Eckelmann (1):
batman-adv: Add missing include for in_interrupt()
Sven Schnelle (2):
selftests/ftrace: fix glob selftest
lockdep: fix order in trace_hardirqs_off_caller()
Sylwester Nawrocki (2):
ASoC: wm8994: Skip setting of the WM8994_MICBIAS register for WM1811
ASoC: wm8994: Ensure the device is resumed in wm89xx_mic_detect
functions
Takashi Iwai (3):
ALSA: usb-audio: Don't create a mixer element with bogus volume range
media: go7007: Fix URB type for interrupt handling
ALSA: hda: Fix potential race in unsol event handler
Tang Bin (1):
USB: EHCI: ehci-mv: fix error handling in mv_ehci_probe()
Thomas Gleixner (3):
x86/ioapic: Unbreak check_timer()
bpf: Remove recursion prevention from rcu free callback
x86/speculation/mds: Mark mds_user_clear_cpu_buffers() __always_inline
Thomas Richter (2):
s390/cpum_sf: Use kzalloc and minor changes
perf test: Fix test trace+probe_vfs_getname.sh on s390
Tianjia Zhang (1):
clocksource/drivers/h8300_timer8: Fix wrong return value in
h8300_8timer_init()
Tom Lendacky (1):
KVM: SVM: Add a dedicated INVD intercept routine
Tom Rix (3):
ieee802154/adf7242: check status of adf7242_read_reg
ALSA: asihpi: fix iounmap in error handler
tracing: fix double free
Tonghao Zhang (2):
net: openvswitch: use u64 for meter bucket
net: openvswitch: use div_u64() for 64-by-32 divisions
Trond Myklebust (2):
nfsd: Don't add locks to closed or closing open stateids
NFS: Fix races nfs_page_group_destroy() vs
nfs_destroy_unlinked_subrequests()
Tuong Lien (1):
tipc: fix memory leak in service subscripting
Tzung-Bi Shih (1):
ASoC: max98090: remove msleep in PLL unlocked workaround
Vasily Averin (5):
neigh_stat_seq_next() should increase position index
rt_cpu_seq_next should increase position index
ipv6_route_seq_next should increase position index
mm/swapfile.c: swap_next should increase position index
selinux: sel_avc_get_stat_idx should increase position index
Vignesh Raghavendra (2):
serial: 8250_port: Don't service RX FIFO if throttled
serial: 8250: 8250_omap: Terminate DMA before pushing data on RX
timeout
Vincent Whitchurch (1):
ARM: 8948/1: Prevent OOB access in stacktrace
Wei Li (1):
MIPS: Add the missing 'CPU_1074K' into __get_cpu_type()
Wei Yongjun (2):
sparc64: vcc: Fix error return code in vcc_probe()
scsi: cxlflash: Fix error return code in cxlflash_probe()
Wen Gong (1):
ath10k: use kzalloc to read for ath10k_sdio_hif_diag_read
Wen Yang (1):
drm/omap: fix possible object reference leak
Will Deacon (1):
arm64: cpufeature: Relax checks for AArch32 support at EL[0-2]
Xianting Tian (1):
mm/filemap.c: clear page error before actual read
Xie XiuQi (1):
perf util: Fix memory leak of prefix_if_not_in
Yonghong Song (1):
bpf: Fix a rcu warning for bpffs map pretty-print
Yu Chen (1):
usb: dwc3: Increase timeout for CmdAct cleared by device controller
Zeng Tao (1):
vfio/pci: fix racy on error and request eventfd ctx
Zenghui Yu (1):
KVM: arm64: vgic-its: Fix memory leak on the error path of
vgic_add_lpi()
Zhang Xiaoxu (1):
cifs: Fix double add page to memcg when cifs_readpages
Zhu Yanjun (1):
RDMA/rxe: Set sys_image_guid to be aligned with HW IB devices
Zhuang Yanying (1):
KVM: fix overflow of zero page refcount with ksm running
peter chang (1):
scsi: pm80xx: Cleanup command when a reset times out
zhengbin (1):
media: mc-device.c: fix memleak in media_device_register_entity
.../devicetree/bindings/sound/wm8994.txt | 18 ++-
Documentation/driver-api/libata.rst | 2 +-
Makefile | 2 +-
arch/arm/include/asm/kvm_emulate.h | 11 +-
arch/arm/kernel/stacktrace.c | 2 +
arch/arm/kernel/traps.c | 6 +-
arch/arm64/include/asm/kvm_emulate.h | 9 +-
arch/arm64/kernel/cpufeature.c | 12 +-
arch/arm64/kvm/hyp/switch.c | 2 +-
arch/m68k/q40/config.c | 1 +
arch/mips/include/asm/cpu-type.h | 1 +
arch/powerpc/include/asm/kvm_asm.h | 3 +
arch/powerpc/kernel/eeh.c | 2 +-
arch/powerpc/kernel/traps.c | 6 +-
arch/powerpc/kvm/book3s_hv_tm.c | 28 +++-
arch/powerpc/kvm/book3s_hv_tm_builtin.c | 16 +-
arch/riscv/include/asm/ftrace.h | 7 +
arch/riscv/kernel/ftrace.c | 19 +++
arch/s390/kernel/perf_cpum_sf.c | 9 +-
arch/s390/kernel/setup.c | 6 +-
arch/x86/include/asm/nospec-branch.h | 4 +-
arch/x86/include/asm/pkeys.h | 5 +
arch/x86/kernel/apic/io_apic.c | 1 +
arch/x86/kernel/fpu/xstate.c | 9 +-
arch/x86/kvm/mmutrace.h | 2 +-
arch/x86/kvm/svm.c | 8 +-
arch/x86/kvm/x86.c | 13 +-
arch/x86/lib/usercopy_64.c | 2 +-
drivers/acpi/ec.c | 16 +-
drivers/ata/acard-ahci.c | 6 +-
drivers/ata/libahci.c | 6 +-
drivers/ata/libata-core.c | 9 +-
drivers/ata/libata-sff.c | 12 +-
drivers/ata/pata_macio.c | 6 +-
drivers/ata/pata_pxa.c | 8 +-
drivers/ata/pdc_adma.c | 7 +-
drivers/ata/sata_fsl.c | 4 +-
drivers/ata/sata_inic162x.c | 4 +-
drivers/ata/sata_mv.c | 34 ++--
drivers/ata/sata_nv.c | 18 ++-
drivers/ata/sata_promise.c | 6 +-
drivers/ata/sata_qstor.c | 8 +-
drivers/ata/sata_rcar.c | 6 +-
drivers/ata/sata_sil.c | 8 +-
drivers/ata/sata_sil24.c | 6 +-
drivers/ata/sata_sx4.c | 6 +-
drivers/atm/eni.c | 2 +-
drivers/base/regmap/regmap.c | 12 +-
drivers/bluetooth/btrtl.c | 20 +--
drivers/bus/hisi_lpc.c | 27 +++-
drivers/char/random.c | 12 +-
drivers/char/tlclk.c | 17 +-
drivers/char/tpm/tpm_crb.c | 123 +++++++++++----
drivers/char/tpm/tpm_ibmvtpm.c | 9 ++
drivers/char/tpm/tpm_ibmvtpm.h | 1 +
drivers/clk/socfpga/clk-pll-s10.c | 4 +-
drivers/clk/ti/adpll.c | 11 +-
drivers/clocksource/h8300_timer8.c | 2 +-
drivers/cpufreq/powernv-cpufreq.c | 13 +-
drivers/crypto/chelsio/chcr_algo.c | 5 +-
drivers/crypto/chelsio/chtls/chtls_io.c | 10 +-
drivers/devfreq/tegra-devfreq.c | 4 +-
drivers/dma-buf/dma-fence.c | 78 +++++-----
drivers/dma/mediatek/mtk-hsdma.c | 4 +-
drivers/dma/stm32-dma.c | 9 +-
drivers/dma/stm32-mdma.c | 9 +-
drivers/dma/tegra20-apb-dma.c | 3 +-
drivers/dma/xilinx/zynqmp_dma.c | 24 +--
drivers/firmware/arm_sdei.c | 26 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c | 31 ++--
drivers/gpu/drm/amd/amdgpu/atom.c | 4 +-
.../drm/amd/amdkfd/kfd_device_queue_manager.c | 2 +
drivers/gpu/drm/amd/display/dc/core/dc_link.c | 67 ++++----
.../gpu/drm/amd/display/dc/core/dc_link_ddc.c | 52 +++----
.../gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 7 +
.../drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 7 +
drivers/gpu/drm/gma500/cdv_intel_display.c | 2 +
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 27 +++-
drivers/gpu/drm/msm/msm_drv.c | 6 +-
drivers/gpu/drm/nouveau/dispnv50/disp.c | 4 +-
drivers/gpu/drm/nouveau/nouveau_debugfs.c | 5 +-
drivers/gpu/drm/nouveau/nouveau_gem.c | 4 +-
.../drm/nouveau/nvkm/subdev/bios/shadowpci.c | 17 +-
.../gpu/drm/omapdrm/dss/omapdss-boot-init.c | 4 +-
drivers/gpu/drm/radeon/radeon_bios.c | 30 ++--
drivers/gpu/drm/sun4i/sun8i_csc.h | 2 +-
drivers/gpu/drm/vc4/vc4_hdmi.c | 1 +
drivers/i2c/i2c-core-base.c | 2 +-
drivers/infiniband/core/cm.c | 25 +--
drivers/infiniband/hw/cxgb4/cm.c | 4 +-
drivers/infiniband/hw/i40iw/i40iw_cm.c | 2 +-
drivers/infiniband/hw/qedr/qedr_iw_cm.c | 2 +-
drivers/infiniband/sw/rxe/rxe.c | 2 +
drivers/infiniband/sw/rxe/rxe_qp.c | 7 +-
drivers/leds/leds-mlxreg.c | 4 +-
drivers/media/dvb-frontends/tda10071.c | 9 +-
drivers/media/i2c/smiapp/smiapp-core.c | 3 +-
drivers/media/media-device.c | 65 ++++----
drivers/media/platform/ti-vpe/cal.c | 6 +-
drivers/media/usb/go7007/go7007-usb.c | 4 +-
drivers/mfd/mfd-core.c | 10 ++
drivers/mmc/core/mmc.c | 9 +-
drivers/mtd/chips/cfi_cmdset_0002.c | 1 -
drivers/mtd/cmdlinepart.c | 23 ++-
drivers/mtd/nand/raw/omap_elm.c | 1 +
drivers/mtd/ubi/fastmap-wl.c | 46 ++++--
drivers/mtd/ubi/fastmap.c | 14 +-
drivers/mtd/ubi/ubi.h | 6 +-
drivers/mtd/ubi/wl.c | 32 ++--
drivers/mtd/ubi/wl.h | 1 -
drivers/net/ethernet/intel/e1000/e1000_main.c | 18 ++-
drivers/net/ethernet/qlogic/qed/qed_sriov.c | 1 +
drivers/net/ieee802154/adf7242.c | 4 +-
drivers/net/ieee802154/ca8210.c | 1 +
drivers/net/wireless/ath/ar5523/ar5523.c | 2 +
drivers/net/wireless/ath/ath10k/debug.c | 3 +-
drivers/net/wireless/ath/ath10k/sdio.c | 18 ++-
drivers/net/wireless/ath/ath10k/wmi.c | 49 +++---
drivers/net/wireless/marvell/mwifiex/fw.h | 2 +-
.../wireless/marvell/mwifiex/sta_cmdresp.c | 4 +-
drivers/net/wireless/mediatek/mt76/agg-rx.c | 1 +
drivers/net/wireless/ti/wlcore/main.c | 4 +-
drivers/net/wireless/ti/wlcore/tx.c | 1 +
drivers/nvme/host/core.c | 12 +-
drivers/nvme/host/multipath.c | 21 ++-
drivers/nvme/host/nvme.h | 19 ++-
drivers/nvme/target/rdma.c | 30 ++--
drivers/pci/controller/pci-tegra.c | 3 +-
drivers/pci/hotplug/pciehp_hpc.c | 26 +++-
drivers/pci/rom.c | 17 --
drivers/phy/samsung/phy-s5pv210-usb2.c | 4 +
drivers/power/supply/max17040_battery.c | 2 +-
drivers/rapidio/devices/rio_mport_cdev.c | 14 +-
drivers/rtc/rtc-ds1374.c | 15 +-
drivers/rtc/rtc-sa1100.c | 18 ++-
drivers/s390/block/dasd_fba.c | 9 +-
drivers/s390/crypto/zcrypt_api.c | 3 +-
drivers/scsi/aacraid/aachba.c | 8 +-
drivers/scsi/aacraid/commsup.c | 2 +-
drivers/scsi/aacraid/linit.c | 46 ++++--
drivers/scsi/cxlflash/main.c | 1 +
drivers/scsi/fnic/fnic_scsi.c | 3 +-
drivers/scsi/hpsa.c | 80 +++++++---
drivers/scsi/libfc/fc_rport.c | 13 +-
drivers/scsi/lpfc/lpfc_attr.c | 35 +++--
drivers/scsi/lpfc/lpfc_ct.c | 137 +++++++++--------
drivers/scsi/lpfc/lpfc_hw.h | 36 ++---
drivers/scsi/lpfc/lpfc_sli.c | 4 +
drivers/scsi/pm8001/pm8001_sas.c | 50 ++++--
drivers/scsi/qedi/qedi_iscsi.c | 3 +
drivers/scsi/ufs/ufshcd.c | 14 +-
drivers/staging/media/imx/imx-media-capture.c | 2 +-
drivers/staging/rtl8188eu/core/rtw_recv.c | 19 +--
drivers/thermal/rcar_thermal.c | 6 +-
drivers/tty/serial/8250/8250_omap.c | 8 +-
drivers/tty/serial/8250/8250_port.c | 16 +-
drivers/tty/serial/samsung.c | 8 +-
drivers/tty/serial/xilinx_uartps.c | 8 +
drivers/tty/vcc.c | 1 +
drivers/usb/dwc3/gadget.c | 2 +-
drivers/usb/host/ehci-mv.c | 8 +-
drivers/vfio/pci/vfio_pci.c | 13 ++
fs/block_dev.c | 10 ++
fs/btrfs/extent-tree.c | 2 -
fs/btrfs/inode.c | 23 ++-
fs/ceph/caps.c | 14 +-
fs/ceph/inode.c | 5 +-
fs/cifs/cifsglob.h | 9 +-
fs/cifs/file.c | 21 ++-
fs/cifs/misc.c | 17 +-
fs/cifs/smb1ops.c | 8 +-
fs/cifs/smb2misc.c | 32 +---
fs/cifs/smb2ops.c | 44 ++++--
fs/cifs/smb2pdu.h | 2 +-
fs/dcache.c | 4 +-
fs/fuse/dev.c | 1 -
fs/gfs2/inode.c | 13 +-
fs/nfs/pagelist.c | 67 +++++---
fs/nfs/write.c | 10 +-
fs/nfsd/nfs4state.c | 73 +++++----
fs/ubifs/io.c | 16 +-
fs/xfs/libxfs/xfs_attr_leaf.c | 4 +-
fs/xfs/libxfs/xfs_dir2_node.c | 1 +
fs/xfs/libxfs/xfs_trans_resv.c | 96 +++++++++---
fs/xfs/scrub/dir.c | 3 +
include/linux/debugfs.h | 5 +-
include/linux/libata.h | 13 +-
include/linux/mmc/card.h | 2 +-
include/linux/nfs_page.h | 2 +
include/linux/pci.h | 1 -
include/linux/seqlock.h | 11 +-
include/linux/skbuff.h | 14 +-
include/net/sock.h | 4 +-
include/trace/events/sctp.h | 9 --
kernel/audit_watch.c | 2 -
kernel/bpf/hashtab.c | 8 -
kernel/bpf/inode.c | 4 +-
kernel/kprobes.c | 5 +-
kernel/printk/printk.c | 3 +
kernel/sys.c | 4 +-
kernel/trace/trace.c | 5 +-
kernel/trace/trace_entries.h | 2 +-
kernel/trace/trace_events.c | 2 +
kernel/trace/trace_events_hist.c | 1 -
kernel/trace/trace_preemptirq.c | 4 +-
lib/string.c | 24 +++
mm/filemap.c | 8 +
mm/kmemleak.c | 2 +-
mm/memory.c | 121 +++++++++++++--
mm/mmap.c | 2 +
mm/swapfile.c | 4 +-
mm/vmscan.c | 45 +++---
net/atm/lec.c | 6 +
net/batman-adv/bridge_loop_avoidance.c | 145 ++++++++++++++----
net/batman-adv/bridge_loop_avoidance.h | 4 +-
net/batman-adv/routing.c | 4 +
net/batman-adv/soft-interface.c | 6 +-
net/bluetooth/hci_event.c | 25 ++-
net/bluetooth/l2cap_core.c | 29 ++--
net/bluetooth/l2cap_sock.c | 18 ++-
net/core/filter.c | 4 +-
net/core/neighbour.c | 1 +
net/ipv4/route.c | 1 +
net/ipv4/tcp.c | 2 +-
net/ipv6/ip6_fib.c | 7 +-
net/llc/af_llc.c | 2 +-
net/mac802154/tx.c | 8 +-
net/openvswitch/meter.c | 4 +-
net/openvswitch/meter.h | 2 +-
net/sctp/outqueue.c | 6 +
net/sunrpc/svc_xprt.c | 19 ++-
net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 1 +
net/tipc/topsrv.c | 4 +-
net/unix/af_unix.c | 11 +-
security/selinux/selinuxfs.c | 1 +
sound/hda/hdac_bus.c | 4 +
sound/pci/asihpi/hpioctl.c | 4 +-
sound/pci/hda/hda_controller.c | 11 +-
sound/pci/hda/patch_realtek.c | 13 +-
sound/soc/codecs/max98090.c | 8 +-
sound/soc/codecs/wm8994.c | 10 ++
sound/soc/codecs/wm_hubs.c | 3 +
sound/soc/codecs/wm_hubs.h | 1 +
sound/soc/img/img-i2s-out.c | 8 +-
sound/soc/intel/boards/bytcr_rt5640.c | 10 ++
sound/soc/kirkwood/kirkwood-dma.c | 2 +-
sound/usb/midi.c | 29 +++-
sound/usb/mixer.c | 10 ++
sound/usb/quirks.c | 7 +-
tools/gpio/gpio-hammer.c | 17 +-
tools/objtool/check.c | 2 +-
tools/perf/builtin-stat.c | 2 +-
tools/perf/pmu-events/jevents.c | 15 +-
.../perf/tests/shell/lib/probe_vfs_getname.sh | 2 +-
tools/perf/trace/beauty/arch_errno_names.sh | 2 +-
tools/perf/util/cpumap.c | 10 +-
tools/perf/util/evsel.c | 3 +
tools/perf/util/mem2node.c | 3 +-
tools/perf/util/metricgroup.c | 3 +
tools/perf/util/parse-events.c | 9 +-
tools/perf/util/sort.c | 2 +-
tools/perf/util/symbol-elf.c | 7 +
.../intel_pstate_tracer.py | 22 +--
.../ftrace/test.d/ftrace/func-filter-glob.tc | 2 +-
tools/testing/selftests/x86/syscall_nt.c | 1 +
virt/kvm/arm/mmio.c | 2 +-
virt/kvm/arm/mmu.c | 5 +-
virt/kvm/arm/vgic/vgic-init.c | 1 +
virt/kvm/arm/vgic/vgic-its.c | 11 +-
virt/kvm/kvm_main.c | 1 +
270 files changed, 2313 insertions(+), 1196 deletions(-)
--
2.25.1

[PATCH 01/65] mm: memcg: make memory.oom.group tolerable to task migration
by Yang Yingliang 10 Oct '20
10 Oct '20
From: Roman Gushchin <guro(a)fb.com>
mainline inclusion
from mainline-5.7-rc1
commit 48fe267c503ec22014ba4e83d002b07caad034d0
category: bugfix
bugzilla: 33351
CVE: NA
-------------------------------------------------
If a task is getting moved out of the OOMing cgroup, it might result in
unexpected OOM killings if memory.oom.group is used anywhere in the cgroup
tree.
Imagine the following example:
A (oom.group = 1)
/ \
(OOM) B C
Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
selects a task in B as a victim, but someone asynchronously moves the task
into C. mem_cgroup_get_oom_group() will iterate over all ancestors of C
up to the root cgroup. In theory it should stop at the oom_domain level -
the memory cgroup which is OOMing. But because B is not an ancestor of C,
that never happens. Instead it chooses A (because its oom.group is set),
and kills all tasks in A. This behavior is wrong because the OOM happened
in B, so there is no reason to kill anything outside of it.
Fix this by checking whether the memory cgroup to which the task belongs is a
descendant of the oom_domain. If not, memory.oom.group should be ignored,
and the OOM killer should kill only the victim task.
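To make the ancestor walk concrete, the standalone sketch below models the hierarchy from the example above; the struct and helper names are illustrative only and not the kernel's, but the descendant check mirrors the logic this patch adds: if the victim's cgroup is no longer a descendant of the OOMing one, no oom.group is returned and only the victim is killed.
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
/* Toy model only: names and layout are illustrative, not kernel code. */
struct cg {
	const char *name;
	struct cg *parent;
	bool oom_group;
};
static bool is_descendant(struct cg *cg, struct cg *ancestor)
{
	for (; cg; cg = cg->parent)
		if (cg == ancestor)
			return true;
	return false;
}
/* Walk from the victim's cgroup towards oom_domain, as the fixed code does. */
static struct cg *get_oom_group(struct cg *victim_cg, struct cg *oom_domain)
{
	struct cg *group = NULL;
	struct cg *c;
	if (!is_descendant(victim_cg, oom_domain))
		return NULL; /* victim was moved away: kill only the victim */
	for (c = victim_cg; c; c = c->parent) {
		if (c->oom_group)
			group = c;
		if (c == oom_domain)
			break;
	}
	return group;
}
int main(void)
{
	struct cg A = { "A", NULL, true };
	struct cg B = { "B", &A, false };	/* the OOMing cgroup (oom_domain) */
	struct cg C = { "C", &A, false };	/* the victim was moved here */
	/* Without the descendant check, the walk from C would reach A. */
	printf("oom group: %s\n", get_oom_group(&C, &B) ? "found" : "none");
	return 0;
}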
Reported-by: Dan Schatzberg <dschatzberg(a)fb.com>
Signed-off-by: Roman Gushchin <guro(a)fb.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Johannes Weiner <hannes(a)cmpxchg.org>
Link: http://lkml.kernel.org/r/20200316223510.3176148-1-guro@fb.com
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
(cherry picked from commit 48fe267c503ec22014ba4e83d002b07caad034d0)
Signed-off-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/memcontrol.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1342b9540476..8611f3301686 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1856,6 +1856,14 @@ struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
if (memcg == root_mem_cgroup)
goto out;
+ /*
+ * If the victim task has been asynchronously moved to a different
+ * memory cgroup, we might end up killing tasks outside oom_domain.
+ * In this case it's better to ignore memory.group.oom.
+ */
+ if (unlikely(!mem_cgroup_is_descendant(memcg, oom_domain)))
+ goto out;
+
/*
* Traverse the memory cgroup hierarchy from the victim task's
* cgroup up to the OOMing cgroup (or root) to find the
--
2.25.1
From: Qian Cai <cai(a)lca.pw>
mainline inclusion
from mainline-v5.9-rc1
commit a449bf58e45abf919b27ba62a9dcbae3ad36e725
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
swap_info_struct si.highest_bit, si.swap_map[offset] and si.flags could
be accessed concurrently separately as noticed by KCSAN,
=== si.highest_bit ===
write to 0xffff8d5abccdc4d4 of 4 bytes by task 5353 on cpu 24:
swap_range_alloc+0x81/0x130
swap_range_alloc at mm/swapfile.c:681
scan_swap_map_slots+0x371/0xb90
get_swap_pages+0x39d/0x5c0
get_swap_page+0xf2/0x524
add_to_swap+0xe4/0x1c0
shrink_page_list+0x1795/0x2870
shrink_inactive_list+0x316/0x880
shrink_lruvec+0x8dc/0x1380
shrink_node+0x317/0xd80
do_try_to_free_pages+0x1f7/0xa10
try_to_free_pages+0x26c/0x5e0
__alloc_pages_slowpath+0x458/0x1290
read to 0xffff8d5abccdc4d4 of 4 bytes by task 6672 on cpu 70:
scan_swap_map_slots+0x4a6/0xb90
scan_swap_map_slots at mm/swapfile.c:892
get_swap_pages+0x39d/0x5c0
get_swap_page+0xf2/0x524
add_to_swap+0xe4/0x1c0
shrink_page_list+0x1795/0x2870
shrink_inactive_list+0x316/0x880
shrink_lruvec+0x8dc/0x1380
shrink_node+0x317/0xd80
do_try_to_free_pages+0x1f7/0xa10
try_to_free_pages+0x26c/0x5e0
__alloc_pages_slowpath+0x458/0x1290
Reported by Kernel Concurrency Sanitizer on:
CPU: 70 PID: 6672 Comm: oom01 Tainted: G W L 5.5.0-next-20200205+ #3
Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
=== si.swap_map[offset] ===
write to 0xffffbc370c29a64c of 1 bytes by task 6856 on cpu 86:
__swap_entry_free_locked+0x8c/0x100
__swap_entry_free_locked at mm/swapfile.c:1209 (discriminator 4)
__swap_entry_free.constprop.20+0x69/0xb0
free_swap_and_cache+0x53/0xa0
unmap_page_range+0x7f8/0x1d70
unmap_single_vma+0xcd/0x170
unmap_vmas+0x18b/0x220
exit_mmap+0xee/0x220
mmput+0x10e/0x270
do_exit+0x59b/0xf40
do_group_exit+0x8b/0x180
read to 0xffffbc370c29a64c of 1 bytes by task 6855 on cpu 20:
_swap_info_get+0x81/0xa0
_swap_info_get at mm/swapfile.c:1140
free_swap_and_cache+0x40/0xa0
unmap_page_range+0x7f8/0x1d70
unmap_single_vma+0xcd/0x170
unmap_vmas+0x18b/0x220
exit_mmap+0xee/0x220
mmput+0x10e/0x270
do_exit+0x59b/0xf40
do_group_exit+0x8b/0x180
=== si.flags ===
write to 0xffff956c8fc6c400 of 8 bytes by task 6087 on cpu 23:
scan_swap_map_slots+0x6fe/0xb50
scan_swap_map_slots at mm/swapfile.c:887
get_swap_pages+0x39d/0x5c0
get_swap_page+0x377/0x524
add_to_swap+0xe4/0x1c0
shrink_page_list+0x1795/0x2870
shrink_inactive_list+0x316/0x880
shrink_lruvec+0x8dc/0x1380
shrink_node+0x317/0xd80
do_try_to_free_pages+0x1f7/0xa10
try_to_free_pages+0x26c/0x5e0
__alloc_pages_slowpath+0x458/0x1290
read to 0xffff956c8fc6c400 of 8 bytes by task 6207 on cpu 63:
_swap_info_get+0x41/0xa0
__swap_info_get at mm/swapfile.c:1114
put_swap_page+0x84/0x490
__remove_mapping+0x384/0x5f0
shrink_page_list+0xff1/0x2870
shrink_inactive_list+0x316/0x880
shrink_lruvec+0x8dc/0x1380
shrink_node+0x317/0xd80
do_try_to_free_pages+0x1f7/0xa10
try_to_free_pages+0x26c/0x5e0
__alloc_pages_slowpath+0x458/0x1290
The writes are under si->lock but the reads are not. For si.highest_bit
and si.swap_map[offset], a data race could trigger logic bugs, so fix them
by using WRITE_ONCE() for the writes and READ_ONCE() for the reads, except
for those isolated reads that compare against zero, where a data race would
cause no harm. Thus, annotate those as intentional data races using the
data_race() macro.
For si.flags, the readers are only interested in a single bit, so a data
race there causes no issue.
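As a userspace analogue of this pattern (kernel WRITE_ONCE()/READ_ONCE() are not available outside the kernel, so relaxed C11 atomics stand in for them here, and the field name is only illustrative), the sketch below shows a writer that updates a shared field under a lock while a reader samples it locklessly:
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
static pthread_mutex_t si_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic unsigned int highest_bit;	/* shared, read without the lock */
/* Writer path: runs with si_lock held, like swap_range_alloc(). */
static void range_alloc(unsigned int nr)
{
	pthread_mutex_lock(&si_lock);
	atomic_store_explicit(&highest_bit,
			      atomic_load_explicit(&highest_bit,
						   memory_order_relaxed) - nr,
			      memory_order_relaxed);
	pthread_mutex_unlock(&si_lock);
}
/* Reader path: lockless hint, like the scan loop's bound check. */
static int offset_in_range(unsigned int offset)
{
	return offset <= atomic_load_explicit(&highest_bit,
					      memory_order_relaxed);
}
int main(void)
{
	atomic_store_explicit(&highest_bit, 1024, memory_order_relaxed);
	range_alloc(8);
	printf("%d\n", offset_in_range(1000));
	return 0;
}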
[cai(a)lca.pw: add a missing annotation for si->flags in memory.c]
Link: http://lkml.kernel.org/r/1581612647-5958-1-git-send-email-cai@lca.pw
Signed-off-by: Qian Cai <cai(a)lca.pw>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Marco Elver <elver(a)google.com>
Cc: Hugh Dickins <hughd(a)google.com>
Link: http://lkml.kernel.org/r/1581095163-12198-1-git-send-email-cai@lca.pw
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
[yyl: remove data_race(), this macro is used when KCSAN is enabled]
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/swapfile.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 60ccb3072318..41400ae4e7c1 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -639,7 +639,7 @@ static void swap_range_alloc(struct swap_info_struct *si, unsigned long offset,
if (offset == si->lowest_bit)
si->lowest_bit += nr_entries;
if (end == si->highest_bit)
- si->highest_bit -= nr_entries;
+ WRITE_ONCE(si->highest_bit, si->highest_bit - nr_entries);
si->inuse_pages += nr_entries;
if (si->inuse_pages == si->pages) {
si->lowest_bit = si->max;
@@ -671,7 +671,7 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
if (end > si->highest_bit) {
bool was_full = !si->highest_bit;
- si->highest_bit = end;
+ WRITE_ONCE(si->highest_bit, end);
if (was_full && (si->flags & SWP_WRITEOK))
add_to_avail_list(si);
}
@@ -804,7 +804,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
else
goto done;
}
- si->swap_map[offset] = usage;
+ WRITE_ONCE(si->swap_map[offset], usage);
inc_cluster_info_page(si, si->cluster_info, offset);
unlock_cluster(ci);
@@ -850,12 +850,13 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
scan:
spin_unlock(&si->lock);
- while (++offset <= si->highest_bit) {
+ while (++offset <= READ_ONCE(si->highest_bit)) {
if (!si->swap_map[offset]) {
spin_lock(&si->lock);
goto checks;
}
- if (vm_swap_full() && si->swap_map[offset] == SWAP_HAS_CACHE) {
+ if (vm_swap_full() &&
+ READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
spin_lock(&si->lock);
goto checks;
}
@@ -870,7 +871,8 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
spin_lock(&si->lock);
goto checks;
}
- if (vm_swap_full() && si->swap_map[offset] == SWAP_HAS_CACHE) {
+ if (vm_swap_full() &&
+ READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
spin_lock(&si->lock);
goto checks;
}
@@ -1166,7 +1168,10 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
}
usage = count | has_cache;
- p->swap_map[offset] = usage ? : SWAP_HAS_CACHE;
+ if (usage)
+ WRITE_ONCE(p->swap_map[offset], usage);
+ else
+ WRITE_ONCE(p->swap_map[offset], SWAP_HAS_CACHE);
return usage;
}
@@ -3522,7 +3527,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
} else
err = -ENOENT; /* unused swap entry */
- p->swap_map[offset] = count | has_cache;
+ WRITE_ONCE(p->swap_map[offset], count | has_cache);
unlock_out:
unlock_cluster_or_swap_info(p, ci);
--
2.25.1

30 Sep '20
From: Bixuan Cui <cuibixuan(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
Fix a mistaken check introduced by commit bd6c06e0917d
("iommu: introduce device fault report API").
Fixes: bd6c06e0917d ("iommu: introduce device fault report API")
Signed-off-by: Bixuan Cui <cuibixuan(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/iommu/iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 831c5065f7f8..bee6b8fcfe0e 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -982,7 +982,7 @@ int iommu_unregister_device_fault_handler(struct device *dev)
mutex_lock(&param->lock);
/* we cannot unregister handler if there are pending faults */
- if (list_empty(&param->fault_param->faults)) {
+ if (!list_empty(&param->fault_param->faults)) {
ret = -EBUSY;
goto unlock;
}
--
2.25.1

[PATCH] arm64: Kconfig: fix compile error if gcc doesn't support armv8.4-a
by Yang Yingliang 30 Sep '20
30 Sep '20
hulk inclusion
category: feature
CVE: NA
-----------------------
After 4d0831e8a029 ("kconfig: unify cc-option and as-option")
in mainline, we can use cc-option.
But in linux-4.19 this patch is not merged, so we get the
following error:
CC scripts/mod/empty.o
CC scripts/mod/devicetable-offsets.s
Assembler messages:
Error: unknown architecture `armv8.4-a'
Error: unrecognized option -march=armv8.4-a
Fix this by changing cc-option to as-option.
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1ef7862d1b75..e379fca3c8ad 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1307,7 +1307,7 @@ endmenu
menu "ARMv8.4 architectural features"
config AS_HAS_ARMV8_4
- def_bool $(cc-option,-Wa$(comma)-march=armv8.4-a)
+ def_bool $(as-option,-Wa$(comma)-march=armv8.4-a)
config ARM64_TLB_RANGE
bool "Enable support for tlbi range feature"
--
2.25.1

[PATCH 1/2] block: loop: set discard granularity and alignment for block device backed loop
by Yang Yingliang 30 Sep '20
30 Sep '20
From: Ming Lei <ming.lei(a)redhat.com>
mainline inclusion
from mainline-v5.9-rc3
commit bcb21c8cc9947286211327d663ace69f07d37a76
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
In case of block device backend, if the backend supports write zeros, the
loop device will set queue flag of QUEUE_FLAG_DISCARD. However,
limits.discard_granularity isn't set up, which is wrong;
see the following description in Documentation/ABI/testing/sysfs-block:
A discard_granularity of 0 means that the device does not support
discard functionality.
In particular, commit 9b15d109a6b2 ("block: improve discard bio alignment in
__blkdev_issue_discard()") started taking q->limits.discard_granularity
into account when computing max discard sectors, and a zero discard
granularity may cause a kernel oops or fail discard requests even though
the loop queue claims discard support via QUEUE_FLAG_DISCARD.
Fix the issue by setting up the discard granularity and alignment.
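From userspace, the effect of the fix can be checked through the queue limit this patch now populates; per the ABI text quoted above, a granularity of 0 means the device does not support discard. A minimal sketch follows (the sysfs path is only an example, substitute the loop device under test):
#include <stdio.h>
#define GRANULARITY_ATTR "/sys/block/loop0/queue/discard_granularity"
int main(void)
{
	unsigned long granularity = 0;
	FILE *f = fopen(GRANULARITY_ATTR, "r");
	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%lu", &granularity) != 1)
		granularity = 0;
	fclose(f);
	printf("discard_granularity=%lu (%s)\n", granularity,
	       granularity ? "discard supported" : "discard not supported");
	return 0;
}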
Fixes: c52abf563049 ("loop: Better discard support for block devices")
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Acked-by: Coly Li <colyli(a)suse.de>
Cc: Hannes Reinecke <hare(a)suse.com>
Cc: Xiao Ni <xni(a)redhat.com>
Cc: Martin K. Petersen <martin.petersen(a)oracle.com>
Cc: Evan Green <evgreen(a)chromium.org>
Cc: Gwendal Grignou <gwendal(a)chromium.org>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni(a)wdc.com>
Cc: Andrzej Pietrasiewicz <andrzej.p(a)collabora.com>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Yufen Yu <yuyufen(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/block/loop.c | 33 ++++++++++++++++++---------------
1 file changed, 18 insertions(+), 15 deletions(-)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index da68c42aed68..19042b42a8ba 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -864,6 +864,7 @@ static void loop_config_discard(struct loop_device *lo)
struct file *file = lo->lo_backing_file;
struct inode *inode = file->f_mapping->host;
struct request_queue *q = lo->lo_queue;
+ u32 granularity, max_discard_sectors;
/*
* If the backing device is a block device, mirror its zeroing
@@ -876,11 +877,10 @@ static void loop_config_discard(struct loop_device *lo)
struct request_queue *backingq;
backingq = bdev_get_queue(inode->i_bdev);
- blk_queue_max_discard_sectors(q,
- backingq->limits.max_write_zeroes_sectors);
- blk_queue_max_write_zeroes_sectors(q,
- backingq->limits.max_write_zeroes_sectors);
+ max_discard_sectors = backingq->limits.max_write_zeroes_sectors;
+ granularity = backingq->limits.discard_granularity ?:
+ queue_physical_block_size(backingq);
/*
* We use punch hole to reclaim the free space used by the
@@ -889,23 +889,26 @@ static void loop_config_discard(struct loop_device *lo)
* useful information.
*/
} else if (!file->f_op->fallocate || lo->lo_encrypt_key_size) {
- q->limits.discard_granularity = 0;
- q->limits.discard_alignment = 0;
- blk_queue_max_discard_sectors(q, 0);
- blk_queue_max_write_zeroes_sectors(q, 0);
+ max_discard_sectors = 0;
+ granularity = 0;
} else {
- q->limits.discard_granularity = inode->i_sb->s_blocksize;
- q->limits.discard_alignment = 0;
-
- blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
- blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
+ max_discard_sectors = UINT_MAX >> 9;
+ granularity = inode->i_sb->s_blocksize;
}
- if (q->limits.max_write_zeroes_sectors)
+ if (max_discard_sectors) {
+ q->limits.discard_granularity = granularity;
+ blk_queue_max_discard_sectors(q, max_discard_sectors);
+ blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
- else
+ } else {
+ q->limits.discard_granularity = 0;
+ blk_queue_max_discard_sectors(q, 0);
+ blk_queue_max_write_zeroes_sectors(q, 0);
blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
+ }
+ q->limits.discard_alignment = 0;
}
static void loop_unprepare_queue(struct loop_device *lo)
--
2.25.1

30 Sep '20
This reverts commit 8cc25436c41592b2236b4d2911a3031131a62872.
---
arch/arm64/include/asm/kvm_asm.h | 15 ----------
arch/arm64/kernel/vmlinux.lds.S | 8 -----
arch/arm64/kvm/hyp/entry.S | 16 ++++------
arch/arm64/kvm/hyp/hyp-entry.S | 51 +++++++++++++-------------------
arch/arm64/kvm/hyp/switch.c | 31 -------------------
5 files changed, 26 insertions(+), 95 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5df55a4dab42..400cb2af5ba6 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -146,21 +146,6 @@ extern u32 __kvm_get_mdcr_el2(void);
kern_hyp_va \vcpu
.endm
-/*
- * KVM extable for unexpected exceptions.
- * In the same format _asm_extable, but output to a different section so that
- * it can be mapped to EL2. The KVM version is not sorted. The caller must
- * ensure:
- * x18 has the hypervisor value to allow any Shadow-Call-Stack instrumented
- * code to write to it, and that SPSR_EL2 and ELR_EL2 are restored by the fixup.
- */
-.macro _kvm_extable, from, to
- .pushsection __kvm_ex_table, "a"
- .align 3
- .long (\from - .), (\to - .)
- .popsection
-.endm
-
#endif
#endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 69e7c8d4a00f..d6050c6e65bc 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -24,13 +24,6 @@ ENTRY(_text)
jiffies = jiffies_64;
-
-#define HYPERVISOR_EXTABLE \
- . = ALIGN(SZ_8); \
- __start___kvm_ex_table = .; \
- *(__kvm_ex_table) \
- __stop___kvm_ex_table = .;
-
#define HYPERVISOR_TEXT \
/* \
* Align to 4 KB so that \
@@ -46,7 +39,6 @@ jiffies = jiffies_64;
__hyp_idmap_text_end = .; \
__hyp_text_start = .; \
*(.hyp.text) \
- HYPERVISOR_EXTABLE \
__hyp_text_end = .;
#define IDMAP_TEXT \
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 90e012fa3ca5..a0552b5177c5 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -164,22 +164,18 @@ alternative_endif
// This is our single instruction exception window. A pending
// SError is guaranteed to occur at the earliest when we unmask
// it, and at the latest just after the ISB.
+ .global abort_guest_exit_start
abort_guest_exit_start:
isb
+ .global abort_guest_exit_end
abort_guest_exit_end:
- msr daifset, #4 // Mask aborts
- ret
-
- _kvm_extable abort_guest_exit_start, 9997f
- _kvm_extable abort_guest_exit_end, 9997f
-9997:
- msr daifset, #4 // Mask aborts
- mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
- // restore the EL1 exception context so that we can report some
- // information. Merge the exception code with the SError pending bit.
+ // If the exception took place, restore the EL1 exception
+ // context so that we can report some information.
+ // Merge the exception code with the SError pending bit.
+ tbz x0, #ARM_EXIT_WITH_SERROR_BIT, 1f
msr elr_el2, x2
msr esr_el2, x3
msr spsr_el2, x4
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index c3e4ae84f3a4..71591a6ee63e 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -26,30 +26,6 @@
#include <asm/kvm_mmu.h>
#include <asm/mmu.h>
-.macro save_caller_saved_regs_vect
- /* x0 and x1 were saved in the vector entry */
- stp x2, x3, [sp, #-16]!
- stp x4, x5, [sp, #-16]!
- stp x6, x7, [sp, #-16]!
- stp x8, x9, [sp, #-16]!
- stp x10, x11, [sp, #-16]!
- stp x12, x13, [sp, #-16]!
- stp x14, x15, [sp, #-16]!
- stp x16, x17, [sp, #-16]!
-.endm
-
-.macro restore_caller_saved_regs_vect
- ldp x16, x17, [sp], #16
- ldp x14, x15, [sp], #16
- ldp x12, x13, [sp], #16
- ldp x10, x11, [sp], #16
- ldp x8, x9, [sp], #16
- ldp x6, x7, [sp], #16
- ldp x4, x5, [sp], #16
- ldp x2, x3, [sp], #16
- ldp x0, x1, [sp], #16
-.endm
-
.text
.pushsection .hyp.text, "ax"
@@ -209,14 +185,27 @@ el2_sync:
el2_error:
- save_caller_saved_regs_vect
- stp x29, x30, [sp, #-16]!
-
- bl kvm_unexpected_el2_exception
-
- ldp x29, x30, [sp], #16
- restore_caller_saved_regs_vect
+ ldp x0, x1, [sp], #16
+ /*
+ * Only two possibilities:
+ * 1) Either we come from the exit path, having just unmasked
+ * PSTATE.A: change the return code to an EL2 fault, and
+ * carry on, as we're already in a sane state to handle it.
+ * 2) Or we come from anywhere else, and that's a bug: we panic.
+ *
+ * For (1), x0 contains the original return code and x1 doesn't
+ * contain anything meaningful at that stage. We can reuse them
+ * as temp registers.
+ * For (2), who cares?
+ */
+ mrs x0, elr_el2
+ adr x1, abort_guest_exit_start
+ cmp x0, x1
+ adr x1, abort_guest_exit_end
+ ccmp x0, x1, #4, ne
+ b.ne __hyp_panic
+ mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
eret
sb
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index e9ea7cf3e98f..acd2d84b190c 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -25,7 +25,6 @@
#include <asm/barrier.h>
#include <asm/cpufeature.h>
-#include <asm/extable.h>
#include <asm/kprobes.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
@@ -37,9 +36,6 @@
#include <asm/processor.h>
#include <asm/thread_info.h>
-extern struct exception_table_entry __start___kvm_ex_table;
-extern struct exception_table_entry __stop___kvm_ex_table;
-
/* Check whether the FP regs were dirtied while in the host-side run loop: */
static bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu)
{
@@ -730,30 +726,3 @@ void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
unreachable();
}
-
-asmlinkage void __hyp_text kvm_unexpected_el2_exception(void)
-{
- unsigned long addr, fixup;
- struct kvm_cpu_context *host_ctxt;
- struct exception_table_entry *entry, *end;
- unsigned long elr_el2 = read_sysreg(elr_el2);
-
- entry = hyp_symbol_addr(__start___kvm_ex_table);
- end = hyp_symbol_addr(__stop___kvm_ex_table);
- host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
-
- while (entry < end) {
- addr = (unsigned long)&entry->insn + entry->insn;
- fixup = (unsigned long)&entry->fixup + entry->fixup;
-
- if (addr != elr_el2) {
- entry++;
- continue;
- }
-
- write_sysreg(fixup, elr_el2);
- return;
- }
-
- hyp_panic(host_ctxt);
-}
--
2.25.1
From: Shengzui You <youshengzui(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------
Reviewed-by: Weiwei Deng <dengweiwei(a)huawei.com>
Reviewed-by: Zhaohui Zhong <zhongzhaohui(a)huawei.com>
Reviewed-by: Junxin Chen <chenjunxin1(a)huawei.com>
Signed-off-by: Shengzui You <youshengzui(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
.../hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.h b/drivers/net/ethernet/hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.h
index cd1636e6dd65..343e036412d4 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.h
@@ -36,14 +36,14 @@ typedef void (*nic_event_fn_t) (struct net_device *netdev,
enum hnae3_event_type_custom);
/**
- * nic_register_event - register for nic event listening
+ * nic_register_event - register for nic event handling
* @event_call: nic event handler
* return 0 - success , negative - fail
*/
int nic_register_event(nic_event_fn_t event_call);
/**
- * nic_unregister_event - quit nic event listening
+ * nic_unregister_event - quit nic event handling
* return 0 - success , negative - fail
*/
int nic_unregister_event(void);
--
2.25.1
euleros inclusion
category: bugfix
bugzilla: NA
CVE: NA
It raises 'missing "WITH Linux-syscall-note" for SPDX-License-Identifier'
when running make headers_install. This patch fixes it.
Link: https://gitee.com/openeuler/kernel/issues/I1WZL5
Signed-off-by: Yipeng Yin <yinyipeng1(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
---
arch/riscv/include/uapi/asm/kvm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index f4274c2e5..65cd00654 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2019 Western Digital Corporation or its affiliates.
*
--
2.19.1
From: Ming Lei <ming.lei(a)redhat.com>
stable inclusion
from linux-4.19.144
commit b48bcb664b657ae94b19c0728978c88e012f7a37
CVE: CVE-2020-25641
--------------------------------
commit 7e24969022cbd61ddc586f14824fc205661bb124 upstream.
Block layer usually doesn't support or allow zero-length bvec. Since
commit 1bdc76aea115 ("iov_iter: use bvec iterator to implement
iterate_bvec()"), iterate_bvec() switches to bvec iterator. However,
Al mentioned that 'Zero-length segments are not disallowed' in iov_iter.
Fixes for_each_bvec() so that it can move on after seeing one zero
length bvec.
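A standalone toy model of the problem (the types below are illustrative stand-ins, not the kernel's struct bio_vec/bvec_iter): when the iterator advances only by the number of bytes in the current segment, a zero-length segment makes no progress, so the fixed macro steps over it explicitly.
#include <stdio.h>
struct seg { unsigned int len; };
struct iter { unsigned int idx; unsigned int done; };
/* Byte-based advance, loosely modelled on bvec_iter_advance(). */
static void iter_advance(const struct seg *segs, struct iter *it,
			 unsigned int bytes)
{
	it->done += bytes;
	while (it->idx < 3 && segs[it->idx].len &&
	       it->done >= segs[it->idx].len) {
		it->done -= segs[it->idx].len;
		it->idx++;
	}
}
/* What bvec_iter_skip_zero_bvec() does: step over the segment directly. */
static void iter_skip_zero(struct iter *it)
{
	it->done = 0;
	it->idx++;
}
int main(void)
{
	struct seg segs[] = { { 512 }, { 0 }, { 1024 } };
	struct iter it = { 0, 0 };
	while (it.idx < 3) {
		unsigned int len = segs[it.idx].len;
		printf("segment %u, len %u\n", it.idx, len);
		if (len)
			iter_advance(segs, &it, len);
		else
			iter_skip_zero(&it); /* advancing by 0 bytes would spin forever */
	}
	return 0;
}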
Fixes: 1bdc76aea115 ("iov_iter: use bvec iterator to implement iterate_bvec()")
Reported-by: syzbot <syzbot+61acc40a49a3e46e25ea(a)syzkaller.appspotmail.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Tested-by: Tetsuo Handa <penguin-kernel(a)i-love.sakura.ne.jp>
Cc: Al Viro <viro(a)zeniv.linux.org.uk>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: <stable(a)vger.kernel.org>
Link: https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2262077.html
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/bvec.h | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index fe7a22dd133b..bc1f16e9f3f4 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -119,11 +119,18 @@ static inline bool bvec_iter_rewind(const struct bio_vec *bv,
return true;
}
+static inline void bvec_iter_skip_zero_bvec(struct bvec_iter *iter)
+{
+ iter->bi_bvec_done = 0;
+ iter->bi_idx++;
+}
+
#define for_each_bvec(bvl, bio_vec, iter, start) \
for (iter = (start); \
(iter).bi_size && \
((bvl = bvec_iter_bvec((bio_vec), (iter))), 1); \
- bvec_iter_advance((bio_vec), &(iter), (bvl).bv_len))
+ (bvl).bv_len ? (void)bvec_iter_advance((bio_vec), &(iter), \
+ (bvl).bv_len) : bvec_iter_skip_zero_bvec(&(iter)))
/* for iterating one bio from start to end */
#define BVEC_ITER_ALL_INIT (struct bvec_iter) \
--
2.25.1
From: Evan Green <evgreen(a)chromium.org>
mainline inclusion
from mainline-v5.7-rc1
commit 8cd55087dc45b2e1a73ed2a197cbf405f32deb08
category: bugfix
bugzilla: 38877
CVE: NA
Properly plumb out EOPNOTSUPP from loop driver operations, which may
get returned when for instance a discard operation is attempted but not
supported by the underlying block device. Before this change, everything
was reported in the log as an I/O error, which is scary and not
helpful in debugging.
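As a toy sketch of the idea (not the kernel's errno_to_blk_status() implementation; the names below are made up for illustration): the completion path should preserve the "operation not supported" case instead of collapsing every failure into a generic I/O error.
#include <errno.h>
#include <stdio.h>
enum toy_status { TOY_OK, TOY_IOERR, TOY_NOTSUPP };
/* Preserve -EOPNOTSUPP; collapse any other failure to an I/O error. */
static enum toy_status complete_request(int ret)
{
	if (ret == 0)
		return TOY_OK;
	if (ret == -EOPNOTSUPP)
		return TOY_NOTSUPP;
	return TOY_IOERR;
}
int main(void)
{
	printf("%d %d %d\n", complete_request(0),
	       complete_request(-EOPNOTSUPP), complete_request(-EIO));
	return 0;
}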
Signed-off-by: Evan Green <evgreen(a)chromium.org>
Reviewed-by: Gwendal Grignou <gwendal(a)chromium.org>
Reviewed-by: Bart Van Assche <bvanassche(a)acm.org>
Signed-off-by: Andrzej Pietrasiewicz <andrzej.p(a)collabora.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Luo Meng <luomeng12(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/block/loop.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index da68c42aed68..19b64ca8c4e3 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -461,7 +461,7 @@ static void lo_complete_rq(struct request *rq)
if (!cmd->use_aio || cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) ||
req_op(rq) != REQ_OP_READ) {
if (cmd->ret < 0)
- ret = BLK_STS_IOERR;
+ ret = errno_to_blk_status(cmd->ret);
goto end_io;
}
@@ -1924,7 +1924,10 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
failed:
/* complete non-aio request */
if (!cmd->use_aio || ret) {
- cmd->ret = ret ? -EIO : 0;
+ if (ret == -EOPNOTSUPP)
+ cmd->ret = ret;
+ else
+ cmd->ret = ret ? -EIO : 0;
blk_mq_complete_request(rq);
}
}
--
2.25.1
Dan Carpenter (1):
hdlc_ppp: add range checks in ppp_cp_parse_cr()
David Ahern (1):
ipv4: Update exception handling for multipath routes via same device
Dmitry Golovin (1):
x86/boot: kbuild: allow readelf executable to be specified
Edwin Peer (1):
bnxt_en: return proper error codes in bnxt_show_temp
Eric Dumazet (3):
ipv6: avoid lockdep issue in fib6_del()
net: qrtr: check skb_put_padto() return value
net: add __must_check to skb_put_padto()
Fangrui Song (1):
Documentation/llvm: fix the name of llvm-size
Florian Fainelli (1):
net: phy: Avoid NPD upon phy_detach() when driver is unbound
Ganji Aravind (1):
cxgb4: Fix offset when clearing filter byte counters
Greg Kroah-Hartman (1):
Linux 4.19.148
Jakub Kicinski (1):
nfp: use correct define to return NONE fec
Linus Walleij (1):
net: dsa: rtl8366: Properly clear member config
Lukas Wunner (1):
serial: 8250: Avoid error message on reprobe
Mark Gray (1):
geneve: add transport ports in route lookup for geneve
Mark Salyzyn (1):
af_key: pfkey_dump needs parameter validation
Masahiro Yamada (5):
net: wan: wanxl: use allow to pass CROSS_COMPILE_M68k for rebuilding
firmware
net: wan: wanxl: use $(M68KCC) instead of $(M68KAS) for rebuilding
firmware
kbuild: remove AS variable
kbuild: replace AS=clang with LLVM_IAS=1
kbuild: support LLVM=1 to switch the default tools to Clang/LLVM
Michael Chan (1):
bnxt_en: Protect bnxt_set_eee() and bnxt_set_pauseparam() with mutex.
Muchun Song (1):
kprobes: fix kill kprobe which has been marked as gone
Necip Fazil Yildiran (1):
net: ipv6: fix kconfig dependency warning for IPV6_SEG6_HMAC
Nick Desaulniers (2):
MAINTAINERS: add CLANG/LLVM BUILD SUPPORT info
Documentation/llvm: add documentation on building w/ Clang/LLVM
Peilin Ye (1):
tipc: Fix memory leak in tipc_group_create_member()
Petr Machata (1):
net: DCB: Validate DCB_ATTR_DCB_BUFFER argument
Priyaranjan Jha (2):
tcp_bbr: refactor bbr_target_cwnd() for general inflight provisioning
tcp_bbr: adapt cwnd based on ack aggregation estimation
Ralph Campbell (1):
mm/thp: fix __split_huge_pmd_locked() for migration PMD
Rustam Kovhaev (1):
KVM: fix memory leak in kvm_io_bus_unregister_dev()
Tetsuo Handa (1):
tipc: fix shutdown() of connection oriented socket
Vasily Gorbik (1):
kbuild: add OBJSIZE variable for the size tool
Wei Wang (1):
ip: fix tos reflection in ack and reset packets
Xin Long (1):
tipc: use skb_unshare() instead in tipc_buf_append()
Xunlei Pang (1):
mm: memcg: fix memcg reclaim soft lockup
Yunsheng Lin (1):
net: sch_generic: aviod concurrent reset and enqueue op for lockless
qdisc
Documentation/kbuild/llvm.rst | 87 +++++++++
MAINTAINERS | 9 +
Makefile | 38 +++-
arch/x86/boot/compressed/Makefile | 2 +-
drivers/net/dsa/rtl8366.c | 20 +-
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 19 +-
.../net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 31 +--
.../net/ethernet/chelsio/cxgb4/cxgb4_filter.c | 9 +-
.../ethernet/netronome/nfp/nfp_net_ethtool.c | 4 +-
drivers/net/geneve.c | 37 +++-
drivers/net/phy/phy_device.c | 3 +-
drivers/net/wan/Kconfig | 2 +-
drivers/net/wan/Makefile | 12 +-
drivers/net/wan/hdlc_ppp.c | 16 +-
drivers/tty/serial/8250/8250_core.c | 11 +-
include/linux/skbuff.h | 7 +-
include/net/inet_connection_sock.h | 4 +-
kernel/kprobes.c | 9 +-
mm/huge_memory.c | 40 ++--
mm/vmscan.c | 8 +
net/dcb/dcbnl.c | 8 +
net/ipv4/ip_output.c | 3 +-
net/ipv4/route.c | 13 +-
net/ipv4/tcp_bbr.c | 180 ++++++++++++++++--
net/ipv6/Kconfig | 1 +
net/ipv6/ip6_fib.c | 13 +-
net/key/af_key.c | 7 +
net/qrtr/qrtr.c | 20 +-
net/sched/sch_generic.c | 49 +++--
net/tipc/group.c | 14 +-
net/tipc/msg.c | 3 +-
net/tipc/socket.c | 5 +-
tools/objtool/Makefile | 6 +
virt/kvm/kvm_main.c | 21 +-
34 files changed, 550 insertions(+), 161 deletions(-)
create mode 100644 Documentation/kbuild/llvm.rst
--
2.25.1

[Topic collection][Meeting Notice] openEuler kernel sig meeting Time: 2020-10-16 10:00-12:00
by Xie XiuQi 28 Sep '20
28 Sep '20
The openEuler kernel sig meeting is scheduled for 2020-10-16, 10:00-12:00.
Please reply to this mail to propose agenda topics.
(Because of the National Day holiday, one regular meeting in between is skipped; if anything needs discussion before then, please post to the mailing list first.)
--- Meeting information ---
Meeting link: https://zoom.us/j/94156903933
Meeting ID: 94156903933
--- Meeting minutes archive ---
Meeting Record 20200918: https://gitee.com/openeuler/kernel/issues/I1WGN0
euleros inclusion
category: bugfix
bugzilla: NA
CVE: NA
It raises 'missing "WITH Linux-syscall-note" for SPDX-License-Identifier'
when running make headers_install. This patch fixes it.
Link: https://gitee.com/openeuler/kernel/issues/I1WZL5
Signed-off-by: Yipeng Yin <yinyipeng1(a)huawei.com>
---
arch/riscv/include/uapi/asm/kvm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index f4274c2e5..65cd00654 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2019 Western Digital Corporation or its affiliates.
*
--
2.19.1

27 Sep '20
Steal tasks to improve CPU utilization, backported from branch
'kernel-4.19'.
Cheng Jian (3):
disable stealing by default
sched/fair: introduce SCHED_STEAL
config: enable CONFIG_SCHED_STEAL by default
Steve Sistare (10):
sched: Provide sparsemask, a reduced contention bitmap
sched/topology: Provide hooks to allocate data shared per LLC
sched/topology: Provide cfs_overload_cpus bitmap
sched/fair: Dynamically update cfs_overload_cpus
sched/fair: Hoist idle_stamp up from idle_balance
sched/fair: Generalize the detach_task interface
sched/fair: Provide can_migrate_task_llc
sched/fair: Steal work from an overloaded CPU when CPU goes idle
sched/fair: disable stealing if too many NUMA nodes
sched/fair: Provide idle search schedstats
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/configs/storage_ci_defconfig | 1 +
arch/arm64/configs/syzkaller_defconfig | 1 +
arch/x86/configs/hulk_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
arch/x86/configs/storage_ci_defconfig | 1 +
include/linux/sched/topology.h | 3 +
init/Kconfig | 15 +
kernel/sched/core.c | 35 ++-
kernel/sched/fair.c | 367 ++++++++++++++++++++++--
kernel/sched/features.h | 8 +
kernel/sched/sched.h | 20 ++
kernel/sched/sparsemask.h | 210 ++++++++++++++
kernel/sched/stats.c | 15 +
kernel/sched/stats.h | 20 ++
kernel/sched/topology.c | 141 ++++++++-
18 files changed, 810 insertions(+), 32 deletions(-)
create mode 100644 kernel/sched/sparsemask.h
--
2.17.1
Adam Borowski (1):
x86/defconfig: Enable CONFIG_USB_XHCI_HCD=y
Alexey Kardashevskiy (1):
powerpc/dma: Fix dma_map_ops::get_required_mask
Arvind Sankar (1):
x86/boot/compressed: Disable relocation relaxation
Bob Peterson (1):
gfs2: initialize transaction tr_ailX_lists earlier
Christophe JAILLET (1):
clk: davinci: Use the correct size when allocating memory
Chuck Lever (1):
NFS: Zero-stateid SETATTR should first return delegation
Daniel Mack (1):
dsa: Allow forwarding of redirected IGMP traffic
David Milburn (2):
nvme-fc: cancel async events before freeing event struct
nvme-rdma: cancel async events before freeing event struct
Dinghao Liu (1):
scsi: pm8001: Fix memleak in pm8001_exec_internal_task_abort
Evan Nimmo (1):
i2c: algo: pca: Reapply i2c bus settings after reset
Gabriel Krisman Bertazi (1):
f2fs: Return EOF on unaligned end of file DIO read
Greg Kroah-Hartman (2):
Revert "ALSA: hda - Fix silent audio output and corrupted input on MSI
X570-A PRO"
Linux 4.19.147
Gustav Wiklander (1):
spi: Fix memory leak on splited transfers
Haiyang Zhang (1):
hv_netvsc: Remove "unlikely" from netvsc_select_queue
Hans de Goede (1):
Input: i8042 - add Entroware Proteus EL07R4 to nomux and reset lists
Huacai Chen (1):
KVM: MIPS: Change the definition of kvm type
J. Bruce Fields (1):
SUNRPC: stop printk reading past end of string
James Smart (1):
scsi: lpfc: Fix FLOGI/PLOGI receive race condition in pt2pt discovery
Javed Hasan (1):
scsi: libfc: Fix for double free()
Jiri Olsa (1):
perf test: Fix the "signal" test inline assembly
Laurent Pinchart (1):
rapidio: Replace 'select' DMAENGINES 'with depends on'
Miaohe Lin (1):
net: handle the return value of pskb_carve_frag_list() correctly
Michael Kelley (1):
Drivers: hv: vmbus: Add timeout to vmbus_wait_for_unload
Namhyung Kim (1):
perf test: Free formats for perf pmu parse test
Naresh Kumar PBS (1):
RDMA/bnxt_re: Restrict the max_gids to 256
Nathan Chancellor (1):
clk: rockchip: Fix initialization of mux_pll_src_4plls_p
Olga Kornievskaia (1):
NFSv4.1 handle ERR_DELAY error reclaiming locking state on delegation
recall
Oliver Neukum (2):
USB: UAS: fix disconnect by unplugging a hub
usblp: fix race between disconnect() and read()
Penghao (1):
USB: quirks: Add USB_QUIRK_IGNORE_REMOTE_WAKEUP quirk for BYD zhaoxin
notebook
Quentin Perret (1):
ehci-hcd: Move include to keep CRC stable
Quinn Tran (3):
scsi: qla2xxx: Update rscn_rcvd field to more meaningful scan_needed
scsi: qla2xxx: Move rport registration out of internal work_list
scsi: qla2xxx: Reduce holding sess_lock to prevent CPU lock-up
Sahitya Tummala (1):
f2fs: fix indefinite loop scanning for free nid
Stafford Horne (1):
openrisc: Fix cache API compile issue when not inlining
Stephan Gerhold (1):
ASoC: qcom: Set card->owner to avoid warnings
Sunghyun Jin (1):
percpu: fix first chunk size calculation for populated bitmap
Tetsuo Handa (1):
fbcon: Fix user font detection test at fbcon_resize().
Thomas Bogendoerfer (2):
MIPS: SNI: Fix MIPS_L1_CACHE_SHIFT
MIPS: SNI: Fix spurious interrupts
Tobias Diedrich (1):
serial: 8250_pci: Add Realtek 816a and 816b
Vincent Huang (1):
Input: trackpoint - add new trackpoint variant IDs
Vincent Whitchurch (2):
regulator: pwm: Fix machine constraints application
spi: spi-loopback-test: Fix out-of-bounds read
Volker Rümelin (1):
i2c: i801: Fix resume bug
Yu Kuai (2):
drm/mediatek: Add exception handing in mtk_drm_probe() if component
init fail
drm/mediatek: Add missing put_device() call in
mtk_hdmi_dt_parse_pdata()
Makefile | 2 +-
arch/mips/Kconfig | 1 +
arch/mips/kvm/mips.c | 2 +
arch/mips/sni/a20r.c | 9 +-
arch/openrisc/mm/cache.c | 2 +-
arch/powerpc/kernel/dma-iommu.c | 3 +-
arch/x86/boot/compressed/Makefile | 2 +
arch/x86/configs/i386_defconfig | 1 +
arch/x86/configs/x86_64_defconfig | 1 +
drivers/clk/davinci/pll.c | 2 +-
drivers/clk/rockchip/clk-rk3228.c | 2 +-
drivers/gpu/drm/mediatek/mtk_drm_drv.c | 7 +-
drivers/gpu/drm/mediatek/mtk_hdmi.c | 26 ++++--
drivers/hv/channel_mgmt.c | 7 +-
drivers/i2c/algos/i2c-algo-pca.c | 35 +++++---
drivers/i2c/busses/i2c-i801.c | 21 +++--
drivers/infiniband/hw/bnxt_re/qplib_sp.c | 2 +-
drivers/infiniband/hw/bnxt_re/qplib_sp.h | 1 +
drivers/input/mouse/trackpoint.c | 10 ++-
drivers/input/mouse/trackpoint.h | 10 ++-
drivers/input/serio/i8042-x86ia64io.h | 16 ++++
drivers/net/hyperv/netvsc_drv.c | 2 +-
drivers/nvme/host/fc.c | 1 +
drivers/nvme/host/rdma.c | 1 +
drivers/rapidio/Kconfig | 2 +-
drivers/regulator/pwm-regulator.c | 2 +-
drivers/scsi/libfc/fc_disc.c | 2 -
drivers/scsi/lpfc/lpfc_els.c | 4 +-
drivers/scsi/pm8001/pm8001_sas.c | 2 +-
drivers/scsi/qla2xxx/qla_def.h | 10 ++-
drivers/scsi/qla2xxx/qla_gbl.h | 5 +-
drivers/scsi/qla2xxx/qla_gs.c | 30 ++++---
drivers/scsi/qla2xxx/qla_init.c | 101 +++++++++++++++++------
drivers/scsi/qla2xxx/qla_os.c | 29 ++++---
drivers/scsi/qla2xxx/qla_target.c | 85 +++++++++++++++----
drivers/spi/spi-loopback-test.c | 2 +-
drivers/spi/spi.c | 9 +-
drivers/tty/serial/8250/8250_pci.c | 11 +++
drivers/usb/class/usblp.c | 5 ++
drivers/usb/core/quirks.c | 4 +
drivers/usb/host/ehci-hcd.c | 1 +
drivers/usb/host/ehci-hub.c | 1 -
drivers/usb/storage/uas.c | 14 +++-
drivers/video/fbdev/core/fbcon.c | 2 +-
fs/f2fs/data.c | 3 +
fs/f2fs/node.c | 3 +
fs/gfs2/glops.c | 2 +
fs/gfs2/log.c | 2 -
fs/gfs2/trans.c | 2 +
fs/nfs/nfs4proc.c | 11 ++-
include/linux/i2c-algo-pca.h | 15 ++++
include/uapi/linux/kvm.h | 5 +-
mm/percpu.c | 2 +-
net/core/skbuff.c | 10 ++-
net/dsa/tag_edsa.c | 37 ++++++++-
net/sunrpc/rpcb_clnt.c | 4 +-
sound/pci/hda/patch_realtek.c | 1 -
sound/soc/qcom/apq8016_sbc.c | 1 +
sound/soc/qcom/apq8096.c | 1 +
sound/soc/qcom/sdm845.c | 1 +
sound/soc/qcom/storm.c | 1 +
tools/perf/tests/bp_signal.c | 5 +-
tools/perf/tests/pmu.c | 1 +
tools/perf/util/pmu.c | 11 +++
tools/perf/util/pmu.h | 1 +
65 files changed, 457 insertions(+), 149 deletions(-)
--
2.25.1
From: Evan Green <evgreen(a)chromium.org>
mainline inclusion
from mainline-v5.7-rc1
commit 8cd55087dc45b2e1a73ed2a197cbf405f32deb08
category: bugfix
bugzilla: 38877
CVE: NA
Properly plumb out EOPNOTSUPP from loop driver operations, which may
get returned when for instance a discard operation is attempted but not
supported by the underlying block device. Before this change, everything
was reported in the log as an I/O error, which is scary and not
helpful in debugging.
Signed-off-by: Evan Green <evgreen(a)chromium.org>
Reviewed-by: Gwendal Grignou <gwendal(a)chromium.org>
Reviewed-by: Bart Van Assche <bvanassche(a)acm.org>
Signed-off-by: Andrzej Pietrasiewicz <andrzej.p(a)collabora.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Luo Meng <luomeng12(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/block/loop.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 19042b42a8ba..d0687e90e5d4 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -461,7 +461,7 @@ static void lo_complete_rq(struct request *rq)
if (!cmd->use_aio || cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) ||
req_op(rq) != REQ_OP_READ) {
if (cmd->ret < 0)
- ret = BLK_STS_IOERR;
+ ret = errno_to_blk_status(cmd->ret);
goto end_io;
}
@@ -1927,7 +1927,10 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
failed:
/* complete non-aio request */
if (!cmd->use_aio || ret) {
- cmd->ret = ret ? -EIO : 0;
+ if (ret == -EOPNOTSUPP)
+ cmd->ret = ret;
+ else
+ cmd->ret = ret ? -EIO : 0;
blk_mq_complete_request(rq);
}
}
--
2.25.1
Adam Ford (2):
ARM: dts: logicpd-torpedo-baseboard: Fix broken audio
ARM: dts: logicpd-som-lv-baseboard: Fix broken audio
Aleksander Morgado (1):
USB: serial: option: add support for SIM7070/SIM7080/SIM7090 modules
Angelo Compagnucci (2):
iio: adc: mcp3422: fix locking scope
iio: adc: mcp3422: fix locking on error path
Bjørn Mork (1):
USB: serial: option: support dynamic Quectel USB compositions
Chris Healy (1):
ARM: dts: vfxxx: Add syscon compatible with OCOTP
Darrick J. Wong (1):
xfs: initialize the shortform attr header padding entry
Dinghao Liu (4):
RDMA/rxe: Fix memleak in rxe_mem_init_user
NFC: st95hf: Fix memleak in st95hf_in_send_cmd
firestream: Fix memleak in fs_open
HID: elan: Fix memleak in elan_input_configured
Dinh Nguyen (1):
ARM: dts: socfpga: fix register entry for timer3 on Arria10
Douglas Anderson (1):
mmc: sdhci-msm: Add retries when all tuning phases are found valid
Evgeniy Didin (1):
ARC: [plat-hsdk]: Switch ethernet phy-mode to rgmii-id
Filipe Manana (1):
btrfs: fix wrong address when faulting in pages in the search ioctl
Florian Fainelli (4):
ARM: dts: bcm: HR2: Fixed QSPI compatible string
ARM: dts: NSP: Fixed QSPI compatible string
ARM: dts: BCM5301X: Fixed QSPI compatible string
arm64: dts: ns2: Fixed QSPI compatible string
Florian Westphal (1):
netfilter: conntrack: allow sctp hearbeat after connection re-use
Francisco Jerez (1):
cpufreq: intel_pstate: Fix intel_pstate_get_hwp_max() for turbo
disabled
Greg Kroah-Hartman (1):
Linux 4.19.146
Hanjun Guo (1):
dmaengine: acpi: Put the CSRT table after using it
Heikki Krogerus (1):
usb: typec: ucsi: acpi: Check the _DEP dependencies
Hou Pu (1):
scsi: target: iscsi: Fix hang in iscsit_access_np() when getting
tpg->np_login_sem
Joerg Roedel (1):
iommu/amd: Do not use IOMMUv2 functionality when SME is active
Jonathan Cameron (12):
iio:light:ltr501 Fix timestamp alignment issue.
iio:accel:bmc150-accel: Fix timestamp alignment and prevent data leak.
iio:adc:ti-adc084s021 Fix alignment and data leak issues.
iio:adc:ina2xx Fix timestamp alignment issue.
iio:adc:max1118 Fix alignment of timestamp and data leak issues
iio:adc:ti-adc081c Fix alignment and data leak issues
iio:magnetometer:ak8975 Fix alignment and data leak issues.
iio:light:max44000 Fix timestamp alignment and prevent data leak.
iio:chemical:ccs811: Fix timestamp alignment and prevent data leak.
iio: accel: kxsd9: Fix alignment of local buffer.
iio:accel:mma7455: Fix timestamp alignment and prevent data leak.
iio:accel:mma8452: Fix timestamp alignment and prevent data leak.
Jordan Crouse (1):
drm/msm: Disable preemption on all 5xx targets
Josef Bacik (1):
btrfs: fix lockdep splat in add_missing_dev
Kamal Heib (2):
RDMA/rxe: Drop pointless checks in rxe_init_ports
RDMA/core: Fix reported speed and width
Leon Romanovsky (1):
gcov: Disable gcov build with GCC 10
Linus Torvalds (1):
vgacon: remove software scrollback support
Linus Walleij (1):
drm/tve200: Stabilize enable/disable
Mathias Nyman (1):
usb: Fix out of sync data toggle if a configured device is
reconfigured
Matthias Schiffer (1):
ARM: dts: ls1021a: fix QuadSPI-memory reg range
Maxim Kochetkov (1):
iio: adc: ti-ads1015: fix conversion when CONFIG_PM is not set
Michał Mirosław (1):
regulator: push allocation in set_consumer_device_supply() out of lock
Mohan Kumar (1):
ALSA: hda: Fix 2 channel swapping for Tegra
Nirenjan Krishnan (1):
HID: quirks: Set INCREMENT_USAGE_ON_DUPLICATE for all Saitek X52
devices
Ondrej Jirman (1):
drm/sun4i: Fix dsi dcs long write function
Patrick Riphagen (1):
USB: serial: ftdi_sio: add IDs for Xsens Mti USB converter
Peter Oberparleiter (1):
gcov: add support for GCC 10.1
Qu Wenruo (1):
btrfs: require only sector size alignment for parent eb bytenr
Rafael J. Wysocki (1):
cpufreq: intel_pstate: Refuse to turn off with HWP enabled
Rander Wang (1):
ALSA: hda: fix a runtime pm issue in SOF when integrated GPU is
disabled
Rustam Kovhaev (1):
staging: wlan-ng: fix out of bounds read in prism2sta_probe_usb()
Sagi Grimberg (2):
nvme-fabrics: don't check state NVME_CTRL_NEW for request acceptance
nvme-rdma: serialize controller teardown sequences
Sandeep Raghuraman (1):
drm/amdgpu: Fix bug in reporting voltage for CIK
Selvin Xavier (1):
RDMA/bnxt_re: Do not report transparent vlan from QP1
Sivaprakash Murugesan (1):
phy: qcom-qmp: Use correct values for ipq8074 PCIe Gen2 PHY init
Tetsuo Handa (1):
video: fbdev: fix OOB read in vga_8planes_imageblit()
Vaibhav Agarwal (1):
staging: greybus: audio: fix uninitialized value issue
Varun Prakash (1):
scsi: target: iscsi: Fix data digest calculation
Vineet Gupta (2):
ARC: HSDK: wireup perf irq
irqchip/eznps: Fix build error for !ARC700 builds
Wanpeng Li (1):
KVM: VMX: Don't freeze guest when event delivery causes an APIC-access
exit
Xie He (3):
drivers/net/wan/lapbether: Added needed_tailroom
drivers/net/wan/lapbether: Set network_header before transmitting
drivers/net/wan/hdlc_cisco: Add hard_header_len
Yi Zhang (1):
RDMA/rxe: Fix the parent sysfs read when the interface has 15 chars
Zeng Tao (1):
usb: core: fix slab-out-of-bounds Read in read_descriptors
Makefile | 2 +-
arch/arc/boot/dts/hsdk.dts | 6 +-
arch/arc/plat-eznps/include/plat/ctop.h | 1 -
arch/arm/boot/dts/bcm-hr2.dtsi | 2 +-
arch/arm/boot/dts/bcm-nsp.dtsi | 2 +-
arch/arm/boot/dts/bcm5301x.dtsi | 2 +-
.../boot/dts/logicpd-som-lv-baseboard.dtsi | 2 +
.../boot/dts/logicpd-torpedo-baseboard.dtsi | 2 +
arch/arm/boot/dts/ls1021a.dtsi | 2 +-
arch/arm/boot/dts/socfpga_arria10.dtsi | 2 +-
arch/arm/boot/dts/vfxxx.dtsi | 2 +-
.../boot/dts/broadcom/northstar2/ns2.dtsi | 2 +-
arch/powerpc/configs/pasemi_defconfig | 1 -
arch/powerpc/configs/ppc6xx_defconfig | 1 -
arch/x86/configs/i386_defconfig | 1 -
arch/x86/configs/x86_64_defconfig | 1 -
arch/x86/kvm/vmx.c | 1 +
drivers/atm/firestream.c | 1 +
drivers/cpufreq/intel_pstate.c | 14 +-
drivers/dma/acpi-dma.c | 4 +-
.../gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 3 +-
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +-
drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c | 4 +-
drivers/gpu/drm/tve200/tve200_display.c | 22 +-
drivers/hid/hid-elan.c | 2 +
drivers/hid/hid-ids.h | 2 +
drivers/hid/hid-quirks.c | 2 +
drivers/iio/accel/bmc150-accel-core.c | 15 +-
drivers/iio/accel/kxsd9.c | 16 +-
drivers/iio/accel/mma7455_core.c | 16 +-
drivers/iio/accel/mma8452.c | 11 +-
drivers/iio/adc/ina2xx-adc.c | 11 +-
drivers/iio/adc/max1118.c | 10 +-
drivers/iio/adc/mcp3422.c | 16 +-
drivers/iio/adc/ti-adc081c.c | 11 +-
drivers/iio/adc/ti-adc084s021.c | 10 +-
drivers/iio/adc/ti-ads1015.c | 10 +
drivers/iio/chemical/ccs811.c | 13 +-
drivers/iio/light/ltr501.c | 15 +-
drivers/iio/light/max44000.c | 12 +-
drivers/iio/magnetometer/ak8975.c | 16 +-
drivers/infiniband/core/verbs.c | 2 +-
drivers/infiniband/hw/bnxt_re/ib_verbs.c | 21 +-
drivers/infiniband/sw/rxe/rxe.c | 3 -
drivers/infiniband/sw/rxe/rxe_mr.c | 1 +
drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +-
drivers/iommu/amd_iommu_v2.c | 7 +
drivers/mmc/host/sdhci-msm.c | 18 +-
drivers/net/wan/hdlc_cisco.c | 1 +
drivers/net/wan/lapbether.c | 3 +
drivers/nfc/st95hf/core.c | 2 +-
drivers/nvme/host/fabrics.c | 1 -
drivers/nvme/host/rdma.c | 6 +
drivers/phy/qualcomm/phy-qcom-qmp.c | 16 +-
drivers/phy/qualcomm/phy-qcom-qmp.h | 2 +
drivers/regulator/core.c | 46 ++--
drivers/staging/greybus/audio_topology.c | 29 +--
drivers/staging/wlan-ng/hfa384x_usb.c | 5 -
drivers/staging/wlan-ng/prism2usb.c | 19 +-
drivers/target/iscsi/iscsi_target.c | 17 +-
drivers/target/iscsi/iscsi_target_login.c | 6 +-
drivers/target/iscsi/iscsi_target_login.h | 3 +-
drivers/target/iscsi/iscsi_target_nego.c | 3 +-
drivers/usb/core/message.c | 91 ++++----
drivers/usb/core/sysfs.c | 5 +
drivers/usb/serial/ftdi_sio.c | 1 +
drivers/usb/serial/ftdi_sio_ids.h | 1 +
drivers/usb/serial/option.c | 22 +-
drivers/usb/typec/ucsi/ucsi_acpi.c | 4 +
drivers/video/console/Kconfig | 46 ----
drivers/video/console/vgacon.c | 221 +-----------------
drivers/video/fbdev/vga16fb.c | 2 +-
fs/btrfs/extent-tree.c | 19 +-
fs/btrfs/ioctl.c | 3 +-
fs/btrfs/print-tree.c | 12 +-
fs/btrfs/volumes.c | 10 +
fs/xfs/libxfs/xfs_attr_leaf.c | 4 +-
include/linux/netfilter/nf_conntrack_sctp.h | 2 +
include/soc/nps/common.h | 6 +
kernel/gcov/gcc_4_7.c | 4 +-
net/netfilter/nf_conntrack_proto_sctp.c | 39 +++-
sound/hda/hdac_device.c | 2 +
sound/pci/hda/patch_hdmi.c | 5 +
83 files changed, 479 insertions(+), 504 deletions(-)
--
2.25.1
1
75
From: Roberto Sassu <roberto.sassu(a)huawei.com>
hulk inclusion
category: feature
feature: digest-lists
---------------------------
Enable digest lists and PGP keys preload.
Signed-off-by: Roberto Sassu <roberto.sassu(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 73 +++++++++++++++++++-------
1 file changed, 54 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 809350e03968..b2abedf899b5 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -3048,20 +3048,23 @@ CONFIG_HW_RANDOM_CAVIUM=y
#
CONFIG_RAW_DRIVER=y
CONFIG_MAX_RAW_DEVS=8192
-CONFIG_TCG_TPM=m
+CONFIG_TCG_TPM=y
CONFIG_HW_RANDOM_TPM=y
-CONFIG_TCG_TIS_CORE=m
-CONFIG_TCG_TIS=m
-CONFIG_TCG_TIS_SPI=m
-# CONFIG_TCG_TIS_I2C_ATMEL is not set
-# CONFIG_TCG_TIS_I2C_INFINEON is not set
-# CONFIG_TCG_TIS_I2C_NUVOTON is not set
-CONFIG_TCG_ATMEL=m
-# CONFIG_TCG_INFINEON is not set
-CONFIG_TCG_CRB=m
+CONFIG_TCG_TIS_CORE=y
+CONFIG_TCG_TIS=y
+CONFIG_TCG_TIS_SPI=y
+CONFIG_TCG_TIS_I2C_ATMEL=y
+CONFIG_TCG_TIS_I2C_INFINEON=y
+CONFIG_TCG_TIS_I2C_NUVOTON=y
+CONFIG_TCG_NSC=y
+CONFIG_TCG_ATMEL=y
+CONFIG_TCG_INFINEON=y
+# CONFIG_TCG_XEN is not set
+CONFIG_TCG_CRB=y
# CONFIG_TCG_VTPM_PROXY is not set
-# CONFIG_TCG_TIS_ST33ZP24_I2C is not set
-# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
+CONFIG_TCG_TIS_ST33ZP24=y
+CONFIG_TCG_TIS_ST33ZP24_I2C=y
+CONFIG_TCG_TIS_ST33ZP24_SPI=y
# CONFIG_DEVPORT is not set
# CONFIG_XILLYBUS is not set
CONFIG_HISI_SVM=y
@@ -5425,8 +5428,8 @@ CONFIG_KEYS=y
CONFIG_KEYS_COMPAT=y
CONFIG_PERSISTENT_KEYRINGS=y
CONFIG_BIG_KEYS=y
-CONFIG_TRUSTED_KEYS=m
-CONFIG_ENCRYPTED_KEYS=m
+CONFIG_TRUSTED_KEYS=y
+CONFIG_ENCRYPTED_KEYS=y
# CONFIG_KEY_DH_OPERATIONS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
@@ -5459,7 +5462,39 @@ CONFIG_SECURITY_APPARMOR_HASH_DEFAULT=y
# CONFIG_SECURITY_APPARMOR_DEBUG is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
-# CONFIG_INTEGRITY is not set
+CONFIG_INTEGRITY=y
+CONFIG_INTEGRITY_SIGNATURE=y
+CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
+CONFIG_INTEGRITY_TRUSTED_KEYRING=y
+CONFIG_INTEGRITY_AUDIT=y
+CONFIG_IMA=y
+CONFIG_IMA_MEASURE_PCR_IDX=10
+CONFIG_IMA_LSM_RULES=y
+# CONFIG_IMA_TEMPLATE is not set
+CONFIG_IMA_NG_TEMPLATE=y
+# CONFIG_IMA_SIG_TEMPLATE is not set
+CONFIG_IMA_DEFAULT_TEMPLATE="ima-ng"
+# CONFIG_IMA_DEFAULT_HASH_SHA1 is not set
+CONFIG_IMA_DEFAULT_HASH_SHA256=y
+CONFIG_IMA_DEFAULT_HASH="sha256"
+# CONFIG_IMA_WRITE_POLICY is not set
+CONFIG_IMA_READ_POLICY=y
+CONFIG_IMA_APPRAISE=y
+# CONFIG_IMA_APPRAISE_BUILD_POLICY is not set
+CONFIG_IMA_APPRAISE_BOOTPARAM=y
+CONFIG_IMA_TRUSTED_KEYRING=y
+# CONFIG_IMA_BLACKLIST_KEYRING is not set
+CONFIG_IMA_LOAD_X509=y
+CONFIG_IMA_X509_PATH="/etc/keys/x509_ima.der"
+# CONFIG_IMA_APPRAISE_SIGNED_INIT is not set
+CONFIG_IMA_DIGEST_LIST=y
+CONFIG_IMA_DIGEST_LISTS_DIR="/etc/ima/digest_lists"
+CONFIG_IMA_PARSER_BINARY_PATH="/usr/bin/upload_digest_lists"
+CONFIG_EVM=y
+CONFIG_EVM_ATTR_FSUUID=y
+# CONFIG_EVM_ADD_XATTRS is not set
+CONFIG_EVM_LOAD_X509=y
+CONFIG_EVM_X509_PATH="/etc/keys/x509_evm.der"
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_APPARMOR is not set
# CONFIG_DEFAULT_SECURITY_DAC is not set
@@ -5646,9 +5681,9 @@ CONFIG_X509_CERTIFICATE_PARSER=y
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
CONFIG_SIGNED_PE_FILE_VERIFICATION=y
-# CONFIG_PGP_LIBRARY is not set
-# CONFIG_PGP_KEY_PARSER is not set
-# CONFIG_PGP_PRELOAD is not set
+CONFIG_PGP_LIBRARY=y
+CONFIG_PGP_KEY_PARSER=y
+CONFIG_PGP_PRELOAD=y
#
# Certificates for signature checking
@@ -5659,7 +5694,7 @@ CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
-# CONFIG_PGP_PRELOAD_PUBLIC_KEYS is not set
+CONFIG_PGP_PRELOAD_PUBLIC_KEYS=y
CONFIG_BINARY_PRINTF=y
#
--
2.25.1
1
0

[PATCH] acpi/arm64: check the returned logical CPU number of 'acpi_map_cpuid()'
by Yang Yingliang 22 Sep '20
22 Sep '20
From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
---------------------------
When we set 'nr_cpus=1' in the kernel parameters, we get the following error.
It happens because 'acpi_map_cpuid()' returns -ENODEV in 'acpi_map_cpu()' when
there are not enough logical CPU numbers. So we need to check the
returned logical CPU number and return an error if it is negative.
[ 0.025955] Unable to handle kernel paging request at virtual address ffff00002915b828
[ 0.025958] Mem abort info:
[ 0.025959] ESR = 0x96000006
[ 0.025961] Exception class = DABT (current EL), IL = 32 bits
[ 0.025963] SET = 0, FnV = 0
[ 0.025965] EA = 0, S1PTW = 0
[ 0.025966] Data abort info:
[ 0.025968] ISV = 0, ISS = 0x00000006
[ 0.025970] CM = 0, WnR = 0
[ 0.025972] swapper pgtable: 4k pages, 48-bit VAs, pgdp = (____ptrval____)
[ 0.025974] [ffff00002915b828] pgd=000000013fffe003, pud=000000013fffd003, pmd=0000000000000000
[ 0.025979] Internal error: Oops: 96000006 [#1] SMP
[ 0.025981] Modules linked in:
[ 0.025983] Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
[ 0.025986] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.19.141+ #37
[ 0.025988] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[ 0.025991] pstate: a0c00005 (NzCv daif +PAN +UAO)
[ 0.025993] pc : acpi_map_cpu+0xe0/0x170
[ 0.025996] lr : acpi_map_cpu+0xb8/0x170
[ 0.025997] sp : ffff8000fef1fa50
[ 0.025999] x29: ffff8000fef1fa50 x28: ffff000008e22058
[ 0.026001] x27: ffff0000092a6000 x26: ffff000008d60778
[ 0.026004] x25: ffff0000094c3000 x24: 0000000000000001
[ 0.026006] x23: ffff8000fe802c18 x22: 00000000ffffffff
[ 0.026008] x21: ffff000008a16000 x20: ffff000008a16a20
[ 0.026011] x19: 00000000ffffffed x18: ffffffffffffffff
[ 0.026013] x17: 0000000087411dcf x16: 00000000b93a5600
[ 0.026015] x15: ffff000009159708 x14: 0720072007200720
[ 0.026018] x13: 0720072007200720 x12: 0720072007200720
[ 0.026020] x11: 072007200720073d x10: 073d073d073d073d
[ 0.026022] x9 : 073d073d073d0764 x8 : 0765076607660766
[ 0.026024] x7 : 0766076607660778 x6 : 0000000000000130
[ 0.026027] x5 : ffff0000085955c8 x4 : 0000000000000000
[ 0.026029] x3 : 0000000000000000 x2 : ffff00000915b830
[ 0.026031] x1 : ffff00002915b828 x0 : 0000200000000000
[ 0.026033] Call trace:
[ 0.026035] acpi_map_cpu+0xe0/0x170
[ 0.026038] acpi_processor_add+0x44c/0x640
[ 0.026040] acpi_bus_attach+0x174/0x218
[ 0.026043] acpi_bus_attach+0xa8/0x218
[ 0.026045] acpi_bus_attach+0xa8/0x218
[ 0.026047] acpi_bus_attach+0xa8/0x218
[ 0.026049] acpi_bus_scan+0x58/0xb8
[ 0.026052] acpi_scan_init+0xf4/0x234
[ 0.026054] acpi_init+0x318/0x384
[ 0.026056] do_one_initcall+0x54/0x250
[ 0.026059] kernel_init_freeable+0x2d4/0x3c0
[ 0.026061] kernel_init+0x18/0x118
[ 0.026063] ret_from_fork+0x10/0x18
[ 0.026066] Code: d2800020 9120c042 8b010c41 9ad32000 (f820303f)
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/kernel/acpi.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index 73eb04f997a7..729733536cb4 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -270,6 +270,10 @@ int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
int cpu, nid;
cpu = acpi_map_cpuid(physid, acpi_id);
+ if (cpu < 0) {
+ pr_info("Unable to map GICC to logical cpu number\n");
+ return cpu;
+ }
nid = acpi_get_node(handle);
if (nid != NUMA_NO_NODE) {
set_cpu_numa_node(cpu, nid);
--
2.25.1
1
0
Greg Kroah-Hartman (1):
Linux 4.19.145
Jakub Kicinski (1):
net: disable netpoll on fresh napis
Jens Axboe (1):
block: ensure bdi->io_pages is always initialized
Kamil Lorenc (1):
net: usb: dm9601: Add USB ID of Keenetic Plus DSL
Paul Moore (1):
netlabel: fix problems with mapping removal
Roi Dayan (1):
net/mlx5e: Don't support phys switch id if not in switchdev mode
Takashi Sakamoto (1):
ALSA: firewire-tascam: exclude Tascam FE-8 from detection
Tetsuo Handa (1):
tipc: fix shutdown() of connectionless socket
Xin Long (1):
sctp: not disable bh in the whole sctp_get_port_local()
Makefile | 2 +-
block/blk-core.c | 2 +
.../net/ethernet/mellanox/mlx5/core/en_rep.c | 2 +-
drivers/net/usb/dm9601.c | 4 ++
net/core/dev.c | 3 +-
net/core/netpoll.c | 2 +-
net/netlabel/netlabel_domainhash.c | 59 ++++++++++---------
net/sctp/socket.c | 16 ++---
net/tipc/socket.c | 9 ++-
sound/firewire/tascam/tascam.c | 30 +++++++++-
10 files changed, 82 insertions(+), 47 deletions(-)
--
2.25.1
1
9
Al Grant (1):
perf tools: Correct SNOOPX field offset
Al Viro (1):
fix regression in "epoll: Keep a reference on files added to the check
list"
Amit Engel (1):
nvmet: Disable keep-alive timer when kato is cleared to 0h
Bodo Stroesser (2):
scsi: target: tcmu: Fix size in calls to tcmu_flush_dcache_range
scsi: target: tcmu: Optimize use of flush_dcache_page
Christophe JAILLET (1):
nvmet-fc: Fix a missed _irqsave version of spin_lock in
'nvmet_fc_fod_op_done()'
Dan Carpenter (1):
net: gemini: Fix another missing clk_disable_unprepare() in probe
Dan Crawford (1):
ALSA: hda - Fix silent audio output and corrupted input on MSI X570-A
PRO
Daniel Borkmann (1):
uaccess: Add non-pagefault user-space write function
Daniele Palmas (1):
net: usb: qmi_wwan: add Telit 0x1050 composition
Darrick J. Wong (1):
xfs: fix xfs_bmap_validate_extent_raw when checking attr fork of rt
files
Dinghao Liu (3):
net: hns: Fix memleak in hns_nic_dev_probe
net: systemport: Fix memleak in bcm_sysport_probe
net: arc_emac: Fix memleak in arc_mdio_probe
Dmitry Baryshkov (1):
drm/msm/a6xx: fix gmu start on newer firmware
Edwin Peer (1):
bnxt_en: fix HWRM error when querying VF temperature
Florian Fainelli (2):
MIPS: mm: BMIPS5000 has inclusive physical caches
MIPS: BMIPS: Also call bmips_cpu_setup() for secondary cores
Florian Westphal (1):
netfilter: nf_tables: fix destination register zeroing
Greg Kroah-Hartman (1):
Linux 4.19.144
Himadri Pandya (1):
net: usb: Fix uninit-was-stored issue in asix_read_phy_addr()
Huang Ying (1):
x86, fakenuma: Fix invalid starting node ID
Jakub Kicinski (1):
bnxt: don't enable NAPI until rings are ready
James Morse (4):
KVM: arm64: Add kvm_extable for vaxorcism code
KVM: arm64: Defer guest entry when an asynchronous exception is
pending
KVM: arm64: Survive synchronous exceptions caused by AT instructions
KVM: arm64: Set HCR_EL2.PTW to prevent AT taking synchronous exception
Jason Gunthorpe (1):
include/linux/log2.h: add missing () around n in roundup_pow_of_two()
Jeff Layton (1):
ceph: don't allow setlease on cephfs
Jesper Dangaard Brouer (1):
selftests/bpf: Fix massive output from test_maps
Johannes Berg (1):
cfg80211: regulatory: reject invalid hints
John Stultz (1):
tty: serial: qcom_geni_serial: Drop __init from
qcom_geni_console_setup
Josef Bacik (3):
btrfs: drop path before adding new uuid tree entry
btrfs: set the lockdep class for log tree extent buffers
btrfs: fix potential deadlock in the search ioctl
Jussi Kivilinna (1):
batman-adv: bla: use netif_rx_ni when not in interrupt context
Kai Vehmanen (1):
ALSA: hda/hdmi: always check pin power status in i915 pin fixup
Kim Phillips (1):
perf record/stat: Explicitly call out event modifiers in the
documentation
Krishna Manikandan (1):
drm/msm: add shutdown support for display platform_driver
Linus Lüssing (1):
batman-adv: Fix own OGM check in aggregated OGMs
Lu Baolu (1):
iommu/vt-d: Serialize IOMMU GCMD register modifications
Marc Zyngier (2):
HID: core: Correctly handle ReportSize being zero
HID: core: Sanitize event code and type when mapping input
Marek Szyprowski (1):
dmaengine: pl330: Fix burst length if burst size is smaller than bus
width
Masami Hiramatsu (1):
uaccess: Add non-pagefault user-space read functions
Max Staudt (1):
affs: fix basic permission bits to actually work
Michael Chan (1):
tg3: Fix soft lockup when tg3_reset_task() fails.
Mikulas Patocka (3):
ext2: don't update mtime on COW faults
xfs: don't update mtime on COW faults
dm writecache: handle DAX to partitions on persistent memory correctly
Ming Lei (1):
block: allow for_each_bvec to support zero len bvec
Mrinal Pandey (1):
checkpatch: fix the usage of capture group ( ... )
Namhyung Kim (1):
perf jevents: Fix suspicious code in fixregex()
Nicolas Dichtel (1):
gtp: add GTPA_LINK info to msg sent to userspace
Nikolay Borisov (2):
btrfs: Remove redundant extent_buffer_get in get_old_root
btrfs: Remove extraneous extent_buffer_get from tree_mod_log_rewind
Pablo Neira Ayuso (3):
netfilter: nf_tables: add NFTA_SET_USERDATA if not null
netfilter: nf_tables: incorrect enum nft_list_attributes definition
netfilter: nfnetlink: nfnetlink_unicast() reports EAGAIN instead of
ENOBUFS
Pavan Chebbi (1):
bnxt_en: Don't query FW when netif_running() is false.
Peter Ujfalusi (1):
dmaengine: of-dma: Fix of_dma_router_xlate's of_dma_xlate handling
Peter Zijlstra (1):
cpuidle: Fixup IRQ state
Rogan Dawes (1):
usb: qmi_wwan: add D-Link DWM-222 A2 device ID
Sean Young (2):
media: rc: do not access device via sysfs after rc_unregister_device()
media: rc: uevent sysfs file races with rc_unregister_device()
Shung-Hsi Yu (1):
net: ethernet: mlx4: Fix memory allocation in mlx4_buddy_init()
Simon Leiner (1):
xen/xenbus: Fix granting of vmalloc'd memory
Sven Eckelmann (1):
batman-adv: Avoid uninitialized chaddr when handling DHCP
Sven Schnelle (1):
s390: don't trace preemption in percpu macros
Takashi Iwai (1):
ALSA: pcm: oss: Remove superfluous WARN_ON() for mulaw sanity check
Takashi Sakamoto (1):
ALSA: firewire-digi00x: exclude Avid Adrenaline from detection
Tejun Heo (1):
libata: implement ATA_HORKAGE_MAX_TRIM_128M and apply to Sandisks
Tom Rix (1):
hwmon: (applesmc) check status earlier.
Tong Zhang (1):
ALSA: ca0106: fix error code handling
Tony Lindgren (1):
thermal: ti-soc-thermal: Fix bogus thermal shutdowns for omap4430
Vasundhara Volam (2):
bnxt_en: Check for zero dir entries in NVRAM.
bnxt_en: Fix PCI AER error recovery flow
Yu Kuai (1):
dmaengine: at_hdmac: check return value of of_find_device_by_node() in
at_dma_xlate()
Yuusuke Ashizuka (1):
ravb: Fixed to be able to unload modules
Documentation/filesystems/affs.txt | 16 +-
Makefile | 2 +-
arch/arm64/include/asm/kvm_arm.h | 3 +-
arch/arm64/include/asm/kvm_asm.h | 43 +++++
arch/arm64/kernel/vmlinux.lds.S | 8 +
arch/arm64/kvm/hyp/entry.S | 31 +++-
arch/arm64/kvm/hyp/hyp-entry.S | 66 ++++---
arch/arm64/kvm/hyp/switch.c | 39 +++-
arch/mips/kernel/smp-bmips.c | 2 +
arch/mips/mm/c-r4k.c | 4 +
arch/s390/include/asm/percpu.h | 28 +--
arch/x86/mm/numa_emulation.c | 2 +-
drivers/ata/libata-core.c | 5 +-
drivers/ata/libata-scsi.c | 8 +-
drivers/cpuidle/cpuidle.c | 3 +-
drivers/dma/at_hdmac.c | 2 +
drivers/dma/of-dma.c | 8 +-
drivers/dma/pl330.c | 2 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 12 +-
drivers/gpu/drm/msm/msm_drv.c | 8 +
drivers/hid/hid-core.c | 15 +-
drivers/hid/hid-input.c | 4 +
drivers/hid/hid-multitouch.c | 2 +
drivers/hwmon/applesmc.c | 31 ++--
drivers/iommu/intel_irq_remapping.c | 10 +-
drivers/md/dm-writecache.c | 12 +-
drivers/media/rc/rc-main.c | 44 +++--
drivers/net/ethernet/arc/emac_mdio.c | 1 +
drivers/net/ethernet/broadcom/bcmsysport.c | 6 +-
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 26 +--
.../net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 5 +-
drivers/net/ethernet/broadcom/tg3.c | 17 +-
drivers/net/ethernet/cortina/gemini.c | 34 ++--
drivers/net/ethernet/hisilicon/hns/hns_enet.c | 9 +-
drivers/net/ethernet/mellanox/mlx4/mr.c | 2 +-
drivers/net/ethernet/renesas/ravb_main.c | 110 ++++++------
drivers/net/gtp.c | 1 +
drivers/net/usb/asix_common.c | 2 +-
drivers/net/usb/qmi_wwan.c | 2 +
drivers/nvme/target/core.c | 6 +
drivers/nvme/target/fc.c | 4 +-
drivers/target/target_core_user.c | 15 +-
.../ti-soc-thermal/omap4-thermal-data.c | 23 +--
.../thermal/ti-soc-thermal/omap4xxx-bandgap.h | 10 +-
drivers/tty/serial/qcom_geni_serial.c | 2 +-
drivers/xen/xenbus/xenbus_client.c | 10 +-
fs/affs/amigaffs.c | 27 +++
fs/affs/file.c | 26 ++-
fs/btrfs/ctree.c | 8 +-
fs/btrfs/extent_io.c | 8 +-
fs/btrfs/extent_io.h | 6 +-
fs/btrfs/ioctl.c | 27 ++-
fs/btrfs/volumes.c | 3 +-
fs/ceph/file.c | 1 +
fs/eventpoll.c | 6 +-
fs/ext2/file.c | 6 +-
fs/xfs/libxfs/xfs_bmap.c | 2 +-
fs/xfs/xfs_file.c | 12 +-
include/linux/bvec.h | 9 +-
include/linux/hid.h | 42 +++--
include/linux/libata.h | 1 +
include/linux/log2.h | 2 +-
include/linux/netfilter/nfnetlink.h | 3 +-
include/linux/uaccess.h | 26 +++
include/net/netfilter/nf_tables.h | 2 +
include/uapi/linux/netfilter/nf_tables.h | 2 +-
mm/maccess.c | 167 ++++++++++++++++--
net/batman-adv/bat_v_ogm.c | 11 +-
net/batman-adv/bridge_loop_avoidance.c | 5 +-
net/batman-adv/gateway_client.c | 6 +-
net/netfilter/nf_tables_api.c | 64 ++++---
net/netfilter/nfnetlink.c | 11 +-
net/netfilter/nfnetlink_log.c | 3 +-
net/netfilter/nfnetlink_queue.c | 2 +-
net/netfilter/nft_payload.c | 4 +-
net/wireless/reg.c | 3 +
scripts/checkpatch.pl | 4 +-
sound/core/oss/mulaw.c | 4 +-
sound/firewire/digi00x/digi00x.c | 5 +
sound/pci/ca0106/ca0106_main.c | 3 +-
sound/pci/hda/patch_hdmi.c | 1 +
sound/pci/hda/patch_realtek.c | 1 +
tools/include/uapi/linux/perf_event.h | 2 +-
tools/perf/Documentation/perf-record.txt | 4 +
tools/perf/Documentation/perf-stat.txt | 4 +
tools/perf/pmu-events/jevents.c | 2 +-
tools/testing/selftests/bpf/test_maps.c | 2 +
87 files changed, 885 insertions(+), 337 deletions(-)
--
2.25.1
1
79
Aditya Pakki (5):
drm/radeon: fix multiple reference count leak
omapfb: fix multiple reference count leaks due to pm_runtime_get_sync
drm/nouveau/drm/noveau: fix reference count leak in nouveau_fbcon_open
drm/nouveau: fix reference count leak in nv50_disp_atomic_commit
drm/nouveau: Fix reference count leak in nouveau_connector_detect
Adrian Hunter (1):
scsi: ufs: Improve interrupt handling for shared interrupts
Alan Stern (3):
USB: yurex: Fix bad gfp argument
USB: quirks: Ignore duplicate endpoint on Sound Devices MixPre-D
usb: storage: Add unusual_uas entry for Sony PSZ drives
Alex Deucher (1):
drm/amdgpu: Fix buffer overflow in INFO ioctl
Alexey Kardashevskiy (1):
powerpc/xive: Ignore kmemleak false positives
Alvin Šipraga (1):
macvlan: validate setting of multiple remote source MAC addresses
Amelie Delaunay (1):
spi: stm32: fix stm32_spi_prepare_mbr in case of odd clk_rate
Andrey Konovalov (1):
efi: provide empty efi_enter_virtual_mode implementation
Andy Shevchenko (2):
mfd: intel-lpss: Add Intel Emmitsburg PCH PCI IDs
USB: gadget: u_f: Unbreak offset calculation in VLAs
Arnd Bergmann (1):
powerpc/spufs: add CONFIG_COREDUMP dependency
Athira Rajeev (1):
powerpc/perf: Fix soft lockups due to missed interrupt accounting
Bodo Stroesser (1):
scsi: target: tcmu: Fix crash on ARM during cmd completion
Brooke Basile (2):
USB: gadget: u_f: add overflow checks to VLA macros
USB: gadget: f_ncm: add bounds checks to ncm_unwrap_ntb()
Changming Liu (1):
USB: sisusbvga: Fix a potential UB caused by left shifting a negative
value
Chao Yu (1):
f2fs: fix error path in do_recover_data()
Chris Wilson (1):
locking/lockdep: Fix overflow in presentation of average lock-time
Christophe JAILLET (1):
usb: gadget: f_tcm: Fix some resource leaks in some error paths
Cong Wang (1):
tipc: fix uninit skb->data in tipc_nl_compat_dumpit()
Cyril Roelandt (1):
USB: Ignore UAS for JMicron JMS567 ATA/ATAPI Bridge
Dave Chinner (1):
xfs: Don't allow logging of XFS_ISTALE inodes
David Brazdil (1):
KVM: arm64: Fix symbol dependency in __hyp_call_panic_nvhe
Desnes A. Nunes do Rosario (1):
selftests/powerpc: Purge extra count_pmc() calls of ebb selftests
Dick Kennedy (1):
scsi: lpfc: Fix shost refcount mismatch when deleting vport
Ding Hui (1):
xhci: Always restore EP_SOFT_CLEAR_TOGGLE even if ep reset failed
Evan Quan (2):
drm/amd/pm: correct Vega10 swctf limit setting
drm/amd/pm: correct Vega12 swctf limit setting
Evgeny Novikov (1):
USB: lvtest: return proper error code in probe
Filipe Manana (1):
btrfs: fix space cache memory leak after transaction abort
George Kennedy (2):
fbcon: prevent user font height or width change from causing potential
out-of-bounds access
vt_ioctl: change VT_RESIZEX ioctl to check for error return from
vc_resize()
Greg Kroah-Hartman (1):
Linux 4.19.143
Hans Verkuil (1):
cec-api: prevent leaking memory through hole in structure
Hans de Goede (1):
HID: i2c-hid: Always sleep 60ms after I2C_HID_PWR_ON commands
Hector Martin (1):
ALSA: usb-audio: Update documentation comment for MS2109 quirk
Heikki Krogerus (1):
device property: Fix the secondary firmware node handling in
set_primary_fwnode()
Hou Pu (1):
null_blk: fix passing of REQ_FUA flag in null_handle_rq
Ikjoon Jang (1):
HID: quirks: add NOGET quirk for Logitech GROUP
Jan Kara (4):
ext4: don't BUG on inconsistent journal feature
writeback: Protect inode->i_io_list with inode->i_lock
writeback: Avoid skipping inode writeback
writeback: Fix sync livelock due to b_dirty_time processing
Jarkko Sakkinen (1):
tpm: Unify the mismatching TPM space buffer sizes
Jason Baron (1):
EDAC/ie31200: Fallback if host bridge device is already initialized
Javed Hasan (1):
scsi: fcoe: Memory leak fix in fcoe_sysfs_fcf_del()
Jia-Ju Bai (1):
media: pci: ttpci: av7110: fix possible buffer overflow caused by bad
DMA value in debiirq()
Jing Xiangfeng (1):
scsi: iscsi: Do not put host in iscsi_set_flashnode_param()
Josef Bacik (1):
btrfs: check the right error variable in btrfs_del_dir_entries_in_log
Kai-Heng Feng (2):
xhci: Do warm-reset when both CAS and XDEV_RESUME are set
USB: quirks: Add no-lpm quirk for another Raydium touchscreen
Li Guifu (1):
f2fs: fix use-after-free issue
Li Jun (1):
usb: host: xhci: fix ep context print mismatch in debugfs
Lukas Czerner (3):
jbd2: make sure jh have b_transaction set in refile/unfile_buffer
ext4: handle read only external journal device
ext4: handle option set by mount flags correctly
Lukas Wunner (2):
serial: pl011: Fix oops on -EPROBE_DEFER
serial: pl011: Don't leak amba_ports entry on driver register error
Mahesh Bandewar (1):
ipvlan: fix device features
Marcos Paulo de Souza (1):
btrfs: reset compression level for lzo on remount
Mark Tomlinson (1):
gre6: Fix reception with IP6_TNL_F_RCV_DSCP_COPY
Miaohe Lin (1):
net: Fix potential wrong skb->protocol in skb_vlan_untag()
Michael Ellerman (1):
powerpc/64s: Don't init FSCR_DSCR in __init_FSCR()
Mike Christie (1):
scsi: fcoe: Fix I/O path allocation
Ming Lei (1):
blk-mq: order adding requests to hctx->dispatch and checking
SCHED_RESTART
Navid Emamdoost (4):
drm/amdgpu: fix ref count leak in amdgpu_driver_open_kms
drm/amd/display: fix ref count leak in amdgpu_drm_ioctl
drm/amdgpu: fix ref count leak in amdgpu_display_crtc_set_config
drm/amdgpu/display: fix ref count leak when pm_runtime_get_sync fails
Necip Fazil Yildiran (1):
net: qrtr: fix usage of idr in port assignment to socket
Peilin Ye (2):
net/smc: Prevent kernel-infoleak in __smc_diag_dump()
HID: hiddev: Fix slab-out-of-bounds write in hiddev_ioctl_usage()
Peng Fan (1):
mips/vdso: Fix resource leaks in genvdso.c
Qiushi Wu (5):
ASoC: img: Fix a reference count leak in img_i2s_in_set_fmt
ASoC: img-parallel-out: Fix a reference count leak
ASoC: tegra: Fix reference count leaks.
drm/amdkfd: Fix reference count leaks.
PCI: Fix pci_create_slot() reference count leak
Qu Wenruo (1):
btrfs: file: reserve qgroup space after the hole punch range is locked
Quinn Tran (1):
scsi: qla2xxx: Fix null pointer access during disconnect from
subsystem
Rafael J. Wysocki (1):
PM: sleep: core: Fix the handling of pending runtime resume requests
Randy Dunlap (1):
ALSA: pci: delete repeated words in comments
Reto Schneider (1):
rtlwifi: rtl8192cu: Prevent leaking urb
Rob Clark (1):
drm/msm/adreno: fix updating ring fence
Robin Murphy (1):
iommu/iova: Don't BUG on invalid PFNs
Saurav Kashyap (2):
scsi: qla2xxx: Check if FW supports MQ before enabling
Revert "scsi: qla2xxx: Fix crash on qla2x00_mailbox_command"
Sean Young (1):
media: gpio-ir-tx: improve precision of transmitted signal due to
scheduling
Sergey Senozhatsky (1):
serial: 8250: change lock order in serial8250_do_startup()
Shay Agroskin (1):
net: ena: Make missed_tx stat incremental
Stanley Chu (2):
scsi: ufs: Fix possible infinite loop in ufshcd_hold
scsi: ufs: Clean up completed request without interrupt notification
Stephan Gerhold (1):
arm64: dts: qcom: msm8916: Pull down PDM GPIOs during sleep
Sumera Priyadarsini (1):
net: gianfar: Add of_node_put() before goto statement
Sylwester Nawrocki (1):
ASoC: wm8994: Avoid attempts to read unreadable registers
Tamseel Shams (1):
serial: samsung: Removes the IRQ not found warning
Tang Bin (1):
usb: host: ohci-exynos: Fix error handling in exynos_ohci_probe()
Tetsuo Handa (1):
vt: defer kfree() of vc_screenbuf in vc_do_resize()
Thinh Nguyen (4):
usb: uas: Add quirk for PNY Pro Elite
usb: dwc3: gadget: Don't setup more than requested
usb: dwc3: gadget: Fix handling ZLP
usb: dwc3: gadget: Handle ZLP for sg requests
Thomas Gleixner (2):
XEN uses irqdesc::irq_data_common::handler_data to store a per
interrupt XEN data pointer which contains XEN specific information.
genirq/matrix: Deal with the sillyness of for_each_cpu() on UP
Tianjia Zhang (1):
nvme-fc: Fix wrong return value in __nvme_fc_init_request()
Tom Rix (1):
USB: cdc-acm: rework notification_buffer resizing
Valmer Huhn (1):
serial: 8250_exar: Fix number of ports for Commtech PCIe cards
Vineeth Vijayan (1):
s390/cio: add cond_resched() in the slow_eval_known_fn() loop
Wolfram Sang (1):
i2c: rcar: in slave mode, clear NACK earlier
Xianting Tian (1):
fs: prevent BUG_ON in submit_bh_wbc()
Xiubo Li (1):
ceph: fix potential mdsc use-after-free crash
Yangbo Lu (1):
ARM: dts: ls1021a: output PPS signal on FIPER2
Zhi Chen (1):
Revert "ath10k: fix DMA related firmware crashes on multiple devices"
qiuguorui1 (1):
irqchip/stm32-exti: Avoid losing interrupts due to clearing pending
bits by mistake
Makefile | 2 +-
arch/arm/boot/dts/ls1021a.dtsi | 2 +-
arch/arm64/boot/dts/qcom/msm8916-pins.dtsi | 2 +-
arch/arm64/kvm/hyp/switch.c | 2 +-
arch/mips/vdso/genvdso.c | 10 ++
arch/powerpc/kernel/cpu_setup_power.S | 2 +-
arch/powerpc/perf/core-book3s.c | 4 +
arch/powerpc/platforms/cell/Kconfig | 1 +
arch/powerpc/sysdev/xive/native.c | 2 +
block/blk-mq-sched.c | 9 ++
block/blk-mq.c | 9 ++
drivers/base/core.c | 12 +-
drivers/base/power/main.c | 16 +-
drivers/block/null_blk_main.c | 2 +-
drivers/char/tpm/tpm-chip.c | 9 +-
drivers/char/tpm/tpm.h | 6 +-
drivers/char/tpm/tpm2-space.c | 26 ++--
drivers/char/tpm/tpmrm-dev.c | 2 +-
drivers/edac/ie31200_edac.c | 50 +++++-
.../gpu/drm/amd/amdgpu/amdgpu_connectors.c | 16 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 5 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 7 +-
drivers/gpu/drm/amd/amdkfd/kfd_topology.c | 20 ++-
.../drm/amd/powerplay/hwmgr/vega10_thermal.c | 7 +-
.../drm/amd/powerplay/hwmgr/vega12_thermal.c | 6 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 2 +-
drivers/gpu/drm/nouveau/dispnv50/disp.c | 4 +-
drivers/gpu/drm/nouveau/nouveau_connector.c | 4 +-
drivers/gpu/drm/nouveau/nouveau_fbcon.c | 4 +-
drivers/gpu/drm/radeon/radeon_connectors.c | 20 ++-
drivers/hid/hid-ids.h | 1 +
drivers/hid/hid-quirks.c | 1 +
drivers/hid/i2c-hid/i2c-hid-core.c | 22 +--
drivers/hid/usbhid/hiddev.c | 4 +
drivers/i2c/busses/i2c-rcar.c | 1 +
drivers/iommu/iova.c | 4 +-
drivers/irqchip/irq-stm32-exti.c | 14 +-
drivers/media/cec/cec-api.c | 8 +-
drivers/media/pci/ttpci/av7110.c | 5 +-
drivers/media/rc/gpio-ir-tx.c | 7 +-
drivers/mfd/intel-lpss-pci.c | 3 +
drivers/net/ethernet/amazon/ena/ena_netdev.c | 5 +-
drivers/net/ethernet/freescale/gianfar.c | 4 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c | 2 +-
drivers/net/ipvlan/ipvlan_main.c | 27 +++-
drivers/net/macvlan.c | 21 ++-
drivers/net/wireless/ath/ath10k/hw.h | 2 +-
drivers/net/wireless/realtek/rtlwifi/usb.c | 5 +-
drivers/nvme/host/fc.c | 4 +-
drivers/pci/slot.c | 6 +-
drivers/s390/cio/css.c | 5 +
drivers/scsi/fcoe/fcoe_ctlr.c | 2 +-
drivers/scsi/lpfc/lpfc_vport.c | 26 +---
drivers/scsi/qla2xxx/qla_mbx.c | 8 -
drivers/scsi/qla2xxx/qla_nvme.c | 5 +
drivers/scsi/qla2xxx/qla_os.c | 5 +
drivers/scsi/scsi_transport_iscsi.c | 2 +-
drivers/scsi/ufs/ufshcd.c | 14 +-
drivers/spi/spi-stm32.c | 3 +-
drivers/target/target_core_user.c | 9 +-
drivers/tty/serial/8250/8250_exar.c | 24 ++-
drivers/tty/serial/8250/8250_port.c | 9 +-
drivers/tty/serial/amba-pl011.c | 16 +-
drivers/tty/serial/samsung.c | 8 +-
drivers/tty/vt/vt.c | 5 +-
drivers/tty/vt/vt_ioctl.c | 12 +-
drivers/usb/class/cdc-acm.c | 22 ++-
drivers/usb/core/quirks.c | 7 +
drivers/usb/dwc3/gadget.c | 104 ++++++++++---
drivers/usb/gadget/function/f_ncm.c | 81 ++++++++--
drivers/usb/gadget/function/f_tcm.c | 7 +-
drivers/usb/gadget/u_f.h | 38 +++--
drivers/usb/host/ohci-exynos.c | 5 +-
drivers/usb/host/xhci-debugfs.c | 8 +-
drivers/usb/host/xhci-hub.c | 19 +--
drivers/usb/host/xhci.c | 3 +-
drivers/usb/misc/lvstest.c | 2 +-
drivers/usb/misc/sisusbvga/sisusb.c | 2 +-
drivers/usb/misc/yurex.c | 2 +-
drivers/usb/storage/unusual_devs.h | 2 +-
drivers/usb/storage/unusual_uas.h | 14 ++
drivers/video/fbdev/core/fbcon.c | 25 ++-
drivers/video/fbdev/omap2/omapfb/dss/dispc.c | 7 +-
drivers/video/fbdev/omap2/omapfb/dss/dsi.c | 7 +-
drivers/video/fbdev/omap2/omapfb/dss/dss.c | 7 +-
drivers/video/fbdev/omap2/omapfb/dss/hdmi4.c | 5 +-
drivers/video/fbdev/omap2/omapfb/dss/hdmi5.c | 5 +-
drivers/video/fbdev/omap2/omapfb/dss/venc.c | 7 +-
drivers/xen/events/events_base.c | 16 +-
fs/btrfs/disk-io.c | 1 +
fs/btrfs/file.c | 8 +-
fs/btrfs/free-space-cache.c | 2 +-
fs/btrfs/super.c | 1 +
fs/btrfs/tree-log.c | 10 +-
fs/buffer.c | 9 ++
fs/ceph/mds_client.c | 14 +-
fs/ext4/super.c | 147 ++++++++++++------
fs/f2fs/f2fs.h | 4 +-
fs/f2fs/inline.c | 19 ++-
fs/f2fs/node.c | 6 +-
fs/f2fs/recovery.c | 10 +-
fs/f2fs/super.c | 5 +-
fs/fs-writeback.c | 83 +++++-----
fs/jbd2/transaction.c | 10 ++
fs/xfs/xfs_icache.c | 3 +-
fs/xfs/xfs_inode.c | 25 ++-
fs/xfs/xfs_trans_inode.c | 2 +
include/linux/efi.h | 4 +
include/linux/fs.h | 8 +-
include/trace/events/writeback.h | 13 +-
kernel/irq/matrix.c | 7 +
kernel/locking/lockdep_proc.c | 2 +-
net/core/skbuff.c | 4 +-
net/ipv6/ip6_tunnel.c | 10 +-
net/qrtr/qrtr.c | 20 +--
net/smc/smc_diag.c | 16 +-
net/tipc/netlink_compat.c | 12 +-
sound/pci/cs46xx/cs46xx_lib.c | 2 +-
sound/pci/cs46xx/dsp_spos_scb_lib.c | 2 +-
sound/pci/hda/hda_codec.c | 2 +-
sound/pci/hda/hda_generic.c | 2 +-
sound/pci/hda/patch_sigmatel.c | 2 +-
sound/pci/ice1712/prodigy192.c | 2 +-
sound/pci/oxygen/xonar_dg.c | 2 +-
sound/soc/codecs/wm8958-dsp2.c | 4 +
sound/soc/img/img-i2s-in.c | 4 +-
sound/soc/img/img-parallel-out.c | 4 +-
sound/soc/tegra/tegra30_ahub.c | 4 +-
sound/soc/tegra/tegra30_i2s.c | 4 +-
sound/usb/quirks-table.h | 4 +-
.../powerpc/pmu/ebb/back_to_back_ebbs_test.c | 2 -
.../selftests/powerpc/pmu/ebb/cycles_test.c | 2 -
.../powerpc/pmu/ebb/cycles_with_freeze_test.c | 2 -
.../powerpc/pmu/ebb/cycles_with_mmcr2_test.c | 2 -
tools/testing/selftests/powerpc/pmu/ebb/ebb.c | 2 -
.../pmu/ebb/ebb_on_willing_child_test.c | 2 -
.../powerpc/pmu/ebb/lost_exception_test.c | 1 -
.../powerpc/pmu/ebb/multi_counter_test.c | 7 -
.../powerpc/pmu/ebb/multi_ebb_procs_test.c | 2 -
.../powerpc/pmu/ebb/pmae_handling_test.c | 2 -
.../powerpc/pmu/ebb/pmc56_overflow_test.c | 2 -
142 files changed, 1037 insertions(+), 442 deletions(-)
--
2.25.1
1
120

[PATCH 01/37] arm64/ascend: enable ascend features for Ascend910 platform
by Yang Yingliang 22 Sep '20
22 Sep '20
From: Ding Tianhong <dingtianhong(a)huawei.com>
ascend inclusion
category: feature
bugzilla: NA
CVE: NA
-------------------------------------------------
The Ascend910 platform boots only in ACPI mode, and its OEM information
is recorded in the oem_table_id field of the IORT table, so use that OEM
table id to detect the platform and enable the Ascend features.
Signed-off-by: Ding Tianhong <dingtianhong(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/mm/init.c | 7 ++++++-
drivers/acpi/arm64/iort.c | 24 ++++++++++++++++++++++++
include/linux/init.h | 7 +++++++
3 files changed, 37 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 883350f9cc42..f23da539a476 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -769,7 +769,7 @@ __setup("keepinitrd", keepinitrd_setup);
#endif
#ifdef CONFIG_ASCEND_FEATURES
-static int __init ascend_enable_setup(char *__unused)
+void ascend_enable_all_features(void)
{
if (IS_ENABLED(CONFIG_ASCEND_DVPP_MMAP))
enable_mmap_dvpp = 1;
@@ -782,6 +782,11 @@ static int __init ascend_enable_setup(char *__unused)
if (IS_ENABLED(CONFIG_SUSPEND))
mem_sleep_current = PM_SUSPEND_ON;
+}
+
+static int __init ascend_enable_setup(char *__unused)
+{
+ ascend_enable_all_features();
return 1;
}
diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index 5a724bdb4a4d..7408ffab7303 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -25,6 +25,7 @@
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
+#include <linux/init.h>
#define IORT_TYPE_MASK(type) (1 << (type))
#define IORT_MSI_TYPE (1 << ACPI_IORT_NODE_ITS_GROUP)
@@ -1639,6 +1640,26 @@ static void __init iort_init_platform_devices(void)
}
}
+/*
+ * This function detects the ascend platform by oem table id.
+ */
+static bool ascend_platform_detected(struct acpi_table_header *h)
+{
+ if (!memcmp(h->oem_table_id, "HI19801P", ACPI_OEM_TABLE_ID_SIZE))
+ return true;
+
+ if (!memcmp(h->oem_table_id, "HI19802P", ACPI_OEM_TABLE_ID_SIZE))
+ return true;
+
+ if (!memcmp(h->oem_table_id, "HI19804P", ACPI_OEM_TABLE_ID_SIZE))
+ return true;
+
+ if (!memcmp(h->oem_table_id, "HI1980\0\0", ACPI_OEM_TABLE_ID_SIZE))
+ return true;
+
+ return false;
+}
+
void __init acpi_iort_init(void)
{
acpi_status status;
@@ -1654,5 +1675,8 @@ void __init acpi_iort_init(void)
return;
}
+ if (ascend_platform_detected(iort_table))
+ ascend_enable_all_features();
+
iort_init_platform_devices();
}
diff --git a/include/linux/init.h b/include/linux/init.h
index 2538d176dd1f..e6f970ac7bf4 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -306,4 +306,11 @@ void __init parse_early_options(char *cmdline);
#define __exit_p(x) NULL
#endif
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_ASCEND_FEATURES
+extern void ascend_enable_all_features(void);
+#else
+static inline void ascend_enable_all_features(void) { }
+#endif
+#endif
#endif /* _LINUX_INIT_H */
--
2.25.1
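A side note on the ascend_platform_detected() helper above: the IORT header's
oem_table_id is a fixed-width 8-byte field (ACPI_OEM_TABLE_ID_SIZE), not a
NUL-terminated string, which is why the patch compares it with memcmp() and
spells out the padding explicitly in "HI1980\0\0". A minimal stand-alone C
sketch of the same comparison style (illustration only, not part of the patch;
OEM_TABLE_ID_SIZE here just mirrors the ACPI constant):

#include <stdio.h>
#include <string.h>

#define OEM_TABLE_ID_SIZE 8	/* ACPI_OEM_TABLE_ID_SIZE: fixed width, not NUL-terminated */

/* Mirrors the comparison in ascend_platform_detected(): the whole 8-byte
 * field is compared, so shorter IDs must spell out their padding bytes. */
static int id_matches(const char *field, const char *wanted)
{
	return memcmp(field, wanted, OEM_TABLE_ID_SIZE) == 0;
}

int main(void)
{
	char iort_oem_id[OEM_TABLE_ID_SIZE] = { 'H', 'I', '1', '9', '8', '0', 0, 0 };

	printf("HI19801P   match: %d\n", id_matches(iort_oem_id, "HI19801P"));	/* 0 */
	printf("HI1980\\0\\0 match: %d\n", id_matches(iort_oem_id, "HI1980\0\0"));	/* 1 */
	return 0;
}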
1
36
From: yanxiaodan <yanxiaodan(a)huawei.com>
Export walk_page_range, __kvm_tlb_flush_vmid and flush_tlb_mm_range,
so that the memory-scan module can use these functions to scan
memory pages and flush the TLB.
memory-scan link: https://gitee.com/openeuler/memory-scan
Signed-off-by: yanxiaodan <yanxiaodan(a)huawei.com>
---
arch/arm64/kvm/hyp/tlb.c | 1 +
arch/x86/mm/tlb.c | 2 +-
mm/pagewalk.c | 1 +
3 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index d063a57..7158622 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -205,6 +205,7 @@ void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
__tlb_switch_to_host(kvm, &cxt);
}
+EXPORT_SYMBOL(__kvm_tlb_flush_vmid);
void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
{
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 1a3569b..be19bf5 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -945,7 +945,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
put_flush_tlb_info();
put_cpu();
}
-
+EXPORT_SYMBOL(flush_tlb_mm_range);
static void do_flush_tlb_all(void *info)
{
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index e81640d..44a9ab1 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -430,6 +430,7 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
} while (start = next, start < end);
return err;
}
+EXPORT_SYMBOL(walk_page_range);
/*
* Similar to walk_page_range() but can walk any page tables even if they are
--
1.8.3.1
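Since the patch above only adds the exports and carries no in-tree caller, here
is a hypothetical sketch of how an out-of-tree memory-scan style module might
walk a task's page tables through the exported walk_page_range(). Everything in
it is an assumption for illustration: the module name, the mm_walk_ops-style
prototype walk_page_range(mm, start, end, &ops, private) inferred from the hunk
context above, and the <linux/pagewalk.h> header; a tree that still uses the
original 4.19 struct mm_walk form needs the older calling convention instead.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical illustration only -- not part of this patch. Assumes the
 * mm_walk_ops-style walk_page_range(mm, start, end, &ops, private) prototype
 * seen in the hunk context; older 4.19 trees declare the walker in
 * <linux/mm.h> and use struct mm_walk callbacks instead. */
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/pagewalk.h>
#include <linux/sched/mm.h>

static unsigned long nr_present;

/* Called for every PTE in the walked range; counts present entries. */
static int count_pte(pte_t *pte, unsigned long addr, unsigned long next,
		     struct mm_walk *walk)
{
	if (pte_present(*pte))
		nr_present++;
	return 0;
}

static const struct mm_walk_ops count_ops = {
	.pte_entry = count_pte,
};

static int __init memscan_demo_init(void)
{
	struct mm_struct *mm = get_task_mm(current);

	if (!mm)
		return -EINVAL;

	down_read(&mm->mmap_sem);	/* mmap_read_lock(mm) on newer kernels */
	walk_page_range(mm, 0, TASK_SIZE, &count_ops, NULL);
	up_read(&mm->mmap_sem);
	mmput(mm);

	pr_info("memscan_demo: %lu present ptes in the loading task\n", nr_present);
	return 0;
}

static void __exit memscan_demo_exit(void)
{
}

module_init(memscan_demo_init);
module_exit(memscan_demo_exit);
MODULE_LICENSE("GPL");

Loading the module once (insmod) would print the count for the task doing the
insmod and then sit idle until removal.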
1
0

【Meeting Notice】openEuler kernel sig meeting Time: 2020-09-18 10:00-12:00
by Meeting Book 17 Sep '20
17 Sep '20
5
7
hulk inclusion
category: tools
bugzilla: NA
CVE: NA
Link: https://gitee.com/src-openeuler/kernel/issues/I1SMDG
-------------------------------------------------
Python2 is no longer supported by the upstream community. The dependency
on python2 should be removed from the kernel code.
Signed-off-by: Zhipeng Xie <xiezhipeng1(a)huawei.com>
---
tools/perf/scripts/python/call-graph-from-sql.py | 2 +-
tools/perf/scripts/python/export-to-postgresql.py | 2 +-
tools/power/pm-graph/bootgraph.py | 2 +-
tools/power/pm-graph/sleepgraph.py | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/perf/scripts/python/call-graph-from-sql.py b/tools/perf/scripts/python/call-graph-from-sql.py
index b494a67a1c67..099b472df4a2 100644
--- a/tools/perf/scripts/python/call-graph-from-sql.py
+++ b/tools/perf/scripts/python/call-graph-from-sql.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python2
+#!/usr/bin/python
# call-graph-from-sql.py: create call-graph from sql database
# Copyright (c) 2014-2017, Intel Corporation.
#
diff --git a/tools/perf/scripts/python/export-to-postgresql.py b/tools/perf/scripts/python/export-to-postgresql.py
index e46f51b17513..e97a03697aa2 100644
--- a/tools/perf/scripts/python/export-to-postgresql.py
+++ b/tools/perf/scripts/python/export-to-postgresql.py
@@ -171,7 +171,7 @@ import datetime
# SELECT * FROM samples_view WHERE event = 'transactions' AND branch_type_name = 'transaction abort';
#
# To print a call stack requires walking the call_paths table. For example this python script:
-# #!/usr/bin/python2
+# #!/usr/bin/python
#
# import sys
# from PySide.QtSql import *
diff --git a/tools/power/pm-graph/bootgraph.py b/tools/power/pm-graph/bootgraph.py
index 8ee626c0f6a5..abb4c38f029b 100755
--- a/tools/power/pm-graph/bootgraph.py
+++ b/tools/power/pm-graph/bootgraph.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python2
+#!/usr/bin/python
#
# Tool for analyzing boot timing
# Copyright (c) 2013, Intel Corporation.
diff --git a/tools/power/pm-graph/sleepgraph.py b/tools/power/pm-graph/sleepgraph.py
index 0c760478f7d7..420102e2cd08 100755
--- a/tools/power/pm-graph/sleepgraph.py
+++ b/tools/power/pm-graph/sleepgraph.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python2
+#!/usr/bin/python
#
# Tool for analyzing suspend/resume timing
# Copyright (c) 2013, Intel Corporation.
--
2.18.1
3
2
hulk inclusion
category: feature
CVE: NA
-----------------------
Enable CONFIG_NUMA_AWARE_SPINLOCKS.
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/configs/euleros_defconfig | 3 ++-
arch/arm64/configs/hulk_defconfig | 2 +-
arch/arm64/configs/openeuler_defconfig | 2 +-
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/configs/euleros_defconfig b/arch/arm64/configs/euleros_defconfig
index 09c604738074..2762ee8afe05 100644
--- a/arch/arm64/configs/euleros_defconfig
+++ b/arch/arm64/configs/euleros_defconfig
@@ -418,6 +418,7 @@ CONFIG_ARM64_ERR_RECOV=y
CONFIG_MPAM=y
CONFIG_NUMA=y
CONFIG_NODES_SHIFT=3
+CONFIG_NUMA_AWARE_SPINLOCKS=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
@@ -440,7 +441,7 @@ CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_SECCOMP=y
CONFIG_PARAVIRT=y
-# CONFIG_PARAVIRT_SPINLOCKS is not set
+CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
diff --git a/arch/arm64/configs/hulk_defconfig b/arch/arm64/configs/hulk_defconfig
index 9952d0b89744..4979492bb484 100644
--- a/arch/arm64/configs/hulk_defconfig
+++ b/arch/arm64/configs/hulk_defconfig
@@ -440,7 +440,7 @@ CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_SECCOMP=y
CONFIG_PARAVIRT=y
-# CONFIG_PARAVIRT_SPINLOCKS is not set
+CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index beb292fa9b47..809350e03968 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -421,7 +421,7 @@ CONFIG_ARM64_ERR_RECOV=y
CONFIG_MPAM=y
CONFIG_NUMA=y
CONFIG_NODES_SHIFT=4
-# CONFIG_NUMA_AWARE_SPINLOCKS is not set
+CONFIG_NUMA_AWARE_SPINLOCKS=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
--
2.25.1
1
0

[PATCH 1/2] random: only read from /dev/random after its pool has received 128 bits
by Yang Yingliang 15 Sep '20
15 Sep '20
From: Theodore Ts'o <tytso(a)mit.edu>
mainline inclusion
from mainline-v5.2-rc1
commit eb9d1bf079bb438d1a066d72337092935fc770f6
category: bugfix
bugzilla: NA
CVE: NA
https://gitee.com/openeuler/kernel/issues/I1CVQ4?from=project-issue
-------------------------------------------------
Immediately after boot, we allow reads from /dev/random before its
entropy pool has been fully initialized. Fix this so that we don't
allow this until the blocking pool has received 128 bits.
We do this by repurposing the initialized flag in the entropy pool
struct, and use the initialized flag in the blocking pool to indicate
whether it is safe to pull from the blocking pool.
To do this, we needed to rework when we decide to push entropy from the
input pool to the blocking pool, since the initialized flag for the
input pool was used for this purpose. To simplify things, we no
longer use the initialized flag for that purpose, nor do we use the
entropy_total field any more.
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/char/random.c | 44 +++++++++++++++++------------------
include/trace/events/random.h | 13 ++++-------
2 files changed, 27 insertions(+), 30 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 6a5d4dfafc47..750d2ab2ac54 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -471,7 +471,6 @@ struct entropy_store {
unsigned short add_ptr;
unsigned short input_rotate;
int entropy_count;
- int entropy_total;
unsigned int initialized:1;
unsigned int last_data_init:1;
__u8 last_data[EXTRACT_SIZE];
@@ -644,7 +643,7 @@ static void process_random_ready_list(void)
*/
static void credit_entropy_bits(struct entropy_store *r, int nbits)
{
- int entropy_count, orig;
+ int entropy_count, orig, has_initialized = 0;
const int pool_size = r->poolinfo->poolfracbits;
int nfrac = nbits << ENTROPY_SHIFT;
@@ -699,23 +698,25 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
entropy_count = 0;
} else if (entropy_count > pool_size)
entropy_count = pool_size;
+ if ((r == &blocking_pool) && !r->initialized &&
+ (entropy_count >> ENTROPY_SHIFT) > 128)
+ has_initialized = 1;
if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
goto retry;
- r->entropy_total += nbits;
- if (!r->initialized && r->entropy_total > 128) {
+ if (has_initialized)
r->initialized = 1;
- r->entropy_total = 0;
- }
trace_credit_entropy_bits(r->name, nbits,
- entropy_count >> ENTROPY_SHIFT,
- r->entropy_total, _RET_IP_);
+ entropy_count >> ENTROPY_SHIFT, _RET_IP_);
if (r == &input_pool) {
int entropy_bits = entropy_count >> ENTROPY_SHIFT;
+ struct entropy_store *other = &blocking_pool;
- if (crng_init < 2 && entropy_bits >= 128) {
+ if (crng_init < 2) {
+ if (entropy_bits < 128)
+ return;
crng_reseed(&primary_crng, r);
entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
}
@@ -726,20 +727,14 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
wake_up_interruptible(&random_read_wait);
kill_fasync(&fasync, SIGIO, POLL_IN);
}
- /* If the input pool is getting full, send some
- * entropy to the blocking pool until it is 75% full.
+ /* If the input pool is getting full, and the blocking
+ * pool has room, send some entropy to the blocking
+ * pool.
*/
- if (entropy_bits > random_write_wakeup_bits &&
- r->initialized &&
- r->entropy_total >= 2*random_read_wakeup_bits) {
- struct entropy_store *other = &blocking_pool;
-
- if (other->entropy_count <=
- 3 * other->poolinfo->poolfracbits / 4) {
- schedule_work(&other->push_work);
- r->entropy_total = 0;
- }
- }
+ if (!work_pending(&other->push_work) &&
+ (ENTROPY_BITS(r) > 6 * r->poolinfo->poolbytes) &&
+ (ENTROPY_BITS(other) <= 6 * other->poolinfo->poolbytes))
+ schedule_work(&other->push_work);
}
}
@@ -1558,6 +1553,11 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
int large_request = (nbytes > 256);
trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
+ if (!r->initialized && r->pull) {
+ xfer_secondary_pool(r, ENTROPY_BITS(r->pull)/8);
+ if (!r->initialized)
+ return 0;
+ }
xfer_secondary_pool(r, nbytes);
nbytes = account(r, nbytes, 0, 0);
diff --git a/include/trace/events/random.h b/include/trace/events/random.h
index 0560dfc33f1c..32c10a515e2d 100644
--- a/include/trace/events/random.h
+++ b/include/trace/events/random.h
@@ -62,15 +62,14 @@ DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock,
TRACE_EVENT(credit_entropy_bits,
TP_PROTO(const char *pool_name, int bits, int entropy_count,
- int entropy_total, unsigned long IP),
+ unsigned long IP),
- TP_ARGS(pool_name, bits, entropy_count, entropy_total, IP),
+ TP_ARGS(pool_name, bits, entropy_count, IP),
TP_STRUCT__entry(
__field( const char *, pool_name )
__field( int, bits )
__field( int, entropy_count )
- __field( int, entropy_total )
__field(unsigned long, IP )
),
@@ -78,14 +77,12 @@ TRACE_EVENT(credit_entropy_bits,
__entry->pool_name = pool_name;
__entry->bits = bits;
__entry->entropy_count = entropy_count;
- __entry->entropy_total = entropy_total;
__entry->IP = IP;
),
- TP_printk("%s pool: bits %d entropy_count %d entropy_total %d "
- "caller %pS", __entry->pool_name, __entry->bits,
- __entry->entropy_count, __entry->entropy_total,
- (void *)__entry->IP)
+ TP_printk("%s pool: bits %d entropy_count %d caller %pS",
+ __entry->pool_name, __entry->bits,
+ __entry->entropy_count, (void *)__entry->IP)
);
TRACE_EVENT(push_to_pool,
--
2.25.1
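For reference, a small user-space reader (not part of the patch) that makes the
behavioral change visible: with this patch applied, the read below blocks early
after boot until the blocking pool has been credited with 128 bits of entropy,
whereas before it could return bytes drawn from a pool that was never
initialized.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[16];
	ssize_t n;
	int fd = open("/dev/random", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/random");
		return 1;
	}

	/* Blocks until the blocking pool holds at least 128 bits of entropy. */
	n = read(fd, buf, sizeof(buf));
	if (n < 0)
		perror("read");
	else
		printf("got %zd bytes from /dev/random\n", n);

	close(fd);
	return 0;
}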
1
1
Chen Jun (1):
config: set default value of CONFIG_ARM64_E0PD
Mark Brown (4):
arm64: Add initial support for E0PD
arm64: Factor out checks for KASLR in KPTI code into separate function
arm64: Don't use KPTI where we have E0PD
arm64: Use a variable to store non-global mappings decision
Will Deacon (1):
arm64: kpti: Fix "kpti=off" when KASLR is enabled
arch/arm64/Kconfig | 16 +++++++
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 2 +
arch/arm64/include/asm/cpucaps.h | 3 +-
arch/arm64/include/asm/mmu.h | 4 +-
arch/arm64/include/asm/pgtable-hwdef.h | 2 +
arch/arm64/include/asm/pgtable-prot.h | 6 ++-
arch/arm64/include/asm/sysreg.h | 1 +
arch/arm64/kernel/cpufeature.c | 61 ++++++++++++++++++++++++--
arch/arm64/kernel/setup.c | 7 +++
11 files changed, 95 insertions(+), 9 deletions(-)
--
2.25.1
1
6
Weilong Chen (7):
arm64/ascend: Add new CONFIG for auto-tuning hugepage
arm64/ascend: Add mmap hook when alloc hugepage
arm64/ascend: Add set hugepage number helper function
arm64/ascend: Add hugepage flags change interface
arm64/ascend: Notifier will return a freed val to indicate print logs
arm64/ascend: Enable CONFIG_ASCEND_AUTO_TUNING_HUGEPAGE for
hulk_defconfig
arm64/ascend: Add auto tuning hugepage module
arch/arm64/Kconfig | 6 +
arch/arm64/configs/hulk_defconfig | 1 +
mm/Makefile | 1 +
mm/hugepage_tuning.c | 693 ++++++++++++++++++++++++++++++
mm/hugepage_tuning.h | 70 +++
mm/hugetlb.c | 22 +
mm/mmap.c | 30 ++
mm/oom_kill.c | 8 +-
8 files changed, 829 insertions(+), 2 deletions(-)
create mode 100644 mm/hugepage_tuning.c
create mode 100644 mm/hugepage_tuning.h
--
2.25.1
1
7
Guenter Roeck (1):
arm64: kaslr: Use standard early random function
Linus Torvalds (1):
random: random.h should include archrandom.h, not the other way around
Mark Brown (3):
arm64: kaslr: Announce KASLR status on boot
arm64: kaslr: Check command line before looking for a seed
arm64: Use v8.5-RNG entropy for KASLR seed
Mark Rutland (2):
arm64: add credited/trusted RNG support
random: add arch_get_random_*long_early()
Richard Henderson (1):
arm64: Implement archrandom.h for ARMv8.5-RNG
Robin Murphy (1):
arm64: Fix CONFIG_ARCH_RANDOM=n build
Yang Yingliang (1):
config: set default value of CONFIG_ARCH_RANDOM
Documentation/arm64/cpu-feature-registers.txt | 2 +
Documentation/arm64/elf_hwcaps.txt | 5 ++
arch/arm64/Kconfig | 12 +++
arch/arm64/configs/euleros_defconfig | 5 ++
arch/arm64/configs/hulk_defconfig | 5 ++
arch/arm64/configs/openeuler_defconfig | 5 ++
arch/arm64/include/asm/archrandom.h | 88 +++++++++++++++++++
arch/arm64/include/asm/cpucaps.h | 3 +-
arch/arm64/include/asm/hwcap.h | 1 +
arch/arm64/include/asm/sysreg.h | 4 +
arch/arm64/include/uapi/asm/hwcap.h | 1 +
arch/arm64/kernel/cpufeature.c | 14 +++
arch/arm64/kernel/cpuinfo.c | 1 +
arch/arm64/kernel/kaslr.c | 54 +++++++++++-
include/linux/random.h | 22 +++++
15 files changed, 217 insertions(+), 5 deletions(-)
create mode 100644 arch/arm64/include/asm/archrandom.h
--
2.25.1
1
10

11 Sep '20
Jingyi Wang (2):
arm64: watchdog: add switch to select sdei_watchdog/pmu_watchdog
arm64: update hulk_defconfig and openeuler_defconfig
arch/arm64/configs/hulk_defconfig | 2 +-
arch/arm64/configs/openeuler_defconfig | 3 ++-
arch/arm64/kernel/watchdog_sdei.c | 23 ++++++++++++++------
include/linux/nmi.h | 11 ++++++++++
kernel/watchdog.c | 29 +++++++++++++++++++-------
lib/Kconfig.debug | 7 ++-----
6 files changed, 55 insertions(+), 20 deletions(-)
--
2.25.1
1
2
Ye Bin (3):
dm cache metadata: Avoid returning cmd->bm wild pointer on error
dm thin metadata: Avoid returning cmd->bm wild pointer on error
dm thin metadata: Fix use-after-free in dm_bm_set_read_only
drivers/md/dm-cache-metadata.c | 8 ++++++--
drivers/md/dm-thin-metadata.c | 10 +++++++---
drivers/md/persistent-data/dm-block-manager.c | 14 ++++++++------
3 files changed, 21 insertions(+), 11 deletions(-)
--
2.25.1
1
3
From: Eugeniu Rosca <erosca(a)de.adit-jv.com>
mainline inclusion
from mainline-v5.9-rc4
commit dc07a728d49cf025f5da2c31add438d839d076c0
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
Commit 52f23478081ae0 ("mm/slub.c: fix corrupted freechain in
deactivate_slab()") suffered an update when picked up from LKML [1].
Specifically, relocating 'freelist = NULL' into 'freelist_corrupted()'
created a no-op statement. Fix it by sticking to the behavior intended
in the original patch [1]. In addition, make freelist_corrupted()
immune to passing NULL instead of &freelist.
The issue has been spotted via static analysis and code review.
[1] https://lore.kernel.org/linux-mm/20200331031450.12182-1-dongli.zhang@oracle…
Fixes: 52f23478081ae0 ("mm/slub.c: fix corrupted freechain in deactivate_slab()")
Signed-off-by: Eugeniu Rosca <erosca(a)de.adit-jv.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Dongli Zhang <dongli.zhang(a)oracle.com>
Cc: Joe Jin <joe.jin(a)oracle.com>
Cc: Christoph Lameter <cl(a)linux.com>
Cc: Pekka Enberg <penberg(a)kernel.org>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: <stable(a)vger.kernel.org>
Link: https://lkml.kernel.org/r/20200824130643.10291-1-erosca@de.adit-jv.com
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/slub.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 0415902a2749..887e3d67a4d7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -646,12 +646,12 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...)
}
static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
- void *freelist, void *nextfree)
+ void **freelist, void *nextfree)
{
if ((s->flags & SLAB_CONSISTENCY_CHECKS) &&
- !check_valid_pointer(s, page, nextfree)) {
- object_err(s, page, freelist, "Freechain corrupt");
- freelist = NULL;
+ !check_valid_pointer(s, page, nextfree) && freelist) {
+ object_err(s, page, *freelist, "Freechain corrupt");
+ *freelist = NULL;
slab_fix(s, "Isolate corrupted freechain");
return true;
}
@@ -1342,7 +1342,7 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
int objects) {}
static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
- void *freelist, void *nextfree)
+ void **freelist, void *nextfree)
{
return false;
}
@@ -2036,7 +2036,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
* 'freelist' is already corrupted. So isolate all objects
* starting at 'freelist'.
*/
- if (freelist_corrupted(s, page, freelist, nextfree))
+ if (freelist_corrupted(s, page, &freelist, nextfree))
break;
do {
--
2.25.1
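The bug fixed above is the classic pass-by-value pitfall: inside the original
freelist_corrupted(), 'freelist = NULL' only overwrote the callee's local copy,
so deactivate_slab() kept using the corrupted pointer. A tiny stand-alone C demo
of why the fix switches to 'void **freelist' (illustration only, no kernel code
involved):

#include <stdio.h>

static int object;

/* The pointer is passed by value: assigning NULL changes only the local copy,
 * which is exactly why the original "freelist = NULL" was a no-op. */
static void clear_by_value(int *p)
{
	p = NULL;
}

/* Passing the address of the pointer lets the callee really clear it,
 * matching the "void **freelist" form used in the fix. */
static void clear_by_reference(int **pp)
{
	if (pp)
		*pp = NULL;
}

int main(void)
{
	int *freelist = &object;

	clear_by_value(freelist);
	printf("after clear_by_value:     %p\n", (void *)freelist);	/* unchanged */

	clear_by_reference(&freelist);
	printf("after clear_by_reference: %p\n", (void *)freelist);	/* (nil) */

	return 0;
}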
1
0
Support
- SVE2
- ARMv8.5-CondM, Condition flag manipulation
- ARMv8.5-FRINT, Armv8.5 Floating-point to integer
- ARMv8.6 Matrix multiply extension
Dave Martin (2):
arm64: Expose SVE2 features for userspace
arm64: cpufeature: Fix missing ZFR0 in __read_sysreg_by_encoding()
Julien Grall (2):
arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not
enabled
arm64: cpufeature: Effectively expose FRINT capability to userspace
Mark Brown (2):
arm64: Expose ARMv8.5 CondM capability to userspace
arm64: Expose FRINT capabilities to userspace
Steven Price (1):
arm64: cpufeature: Export matrix and other features to userspace
Documentation/arm64/cpu-feature-registers.txt | 30 ++++++++++
Documentation/arm64/elf_hwcaps.txt | 59 +++++++++++++++++++
Documentation/arm64/sve.txt | 17 ++++++
arch/arm64/Kconfig | 3 +
arch/arm64/include/asm/hwcap.h | 15 +++++
arch/arm64/include/asm/sysreg.h | 27 +++++++++
arch/arm64/include/uapi/asm/hwcap.h | 15 +++++
arch/arm64/kernel/cpufeature.c | 45 +++++++++++++-
arch/arm64/kernel/cpuinfo.c | 15 +++++
9 files changed, 225 insertions(+), 1 deletion(-)
--
2.25.1
1
7

10 Sep '20
Anshuman Khandual (1):
arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register
Sami Tolvanen (1):
arm64: use a common .arch preamble for inline assembly
Yuan Can (1):
config: arm64: update defconfig
Zhenyu Ye (3):
arm64: tlb: Detect the ARMv8.4 TLBI RANGE feature
arm64: enable tlbi range instructions
arm64: tlb: Use the TLBI RANGE feature in arm64
arch/arm64/Kconfig | 18 +++
arch/arm64/Makefile | 15 ++-
arch/arm64/configs/euleros_defconfig | 5 +
arch/arm64/configs/hulk_defconfig | 5 +
arch/arm64/configs/openeuler_defconfig | 5 +
arch/arm64/include/asm/compiler.h | 6 +
arch/arm64/include/asm/cpucaps.h | 3 +-
arch/arm64/include/asm/cpufeature.h | 6 +
arch/arm64/include/asm/sysreg.h | 4 +
arch/arm64/include/asm/tlbflush.h | 160 ++++++++++++++++++++-----
arch/arm64/kernel/cpufeature.c | 11 ++
11 files changed, 205 insertions(+), 33 deletions(-)
--
2.25.1
1
6