From: Hao Shen <shenhao21@huawei.com>

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
--------------------------------------------------------
When RoCE initialization fails in hclge_resume(), rtnl_unlock() is executed twice. Fix it by returning the error directly instead of jumping to err_reset_lock, since rtnl_lock() has not yet been taken at this point.
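The imbalance is easy to model outside the kernel; a minimal sketch with a fake lock-depth counter (all names hypothetical, not the driver's code) shows why an error path that jumps to an unlock label before the lock is taken runs one unlock too many, while an early return stays balanced:

```c
#include <assert.h>

static int lock_depth;  /* simulated rtnl lock nesting */

static void fake_lock(void)   { lock_depth++; }
static void fake_unlock(void) { lock_depth--; }

/* Buggy shape: a failure before the lock is taken jumps to the
 * unlock label, so fake_unlock() runs without a matching fake_lock(). */
static int resume_buggy(int fail)
{
	int ret = fail ? -1 : 0;

	if (ret)
		goto err_unlock;
	fake_lock();
	fake_unlock();
	return 0;
err_unlock:
	fake_unlock();	/* unbalanced: depth goes negative */
	return ret;
}

/* Fixed shape: return the error directly before taking the lock. */
static int resume_fixed(int fail)
{
	int ret = fail ? -1 : 0;

	if (ret)
		return ret;
	fake_lock();
	fake_unlock();
	return 0;
}
```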
Signed-off-by: Hao Shen <shenhao21@huawei.com>
Reviewed-by: Zhong Zhaohui <zhongzhaohui@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 0b7e159..3e0afc4 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3865,7 +3865,7 @@ static int hclge_resume(struct hnae3_ae_dev *ae_dev)
 
 	ret = hclge_notify_roce_client(hdev, HNAE3_INIT_CLIENT);
 	if (ret)
-		goto err_reset_lock;
+		return ret;
 
 	rtnl_lock();
From: Hao Shen <shenhao21@huawei.com>

driver inclusion
category: other
bugzilla: NA
CVE: NA
--------------------------
Update the hns3 driver version to 1.9.38.0.
Signed-off-by: Hao Shen <shenhao21@huawei.com>
Reviewed-by: Zhong Zhaohui <zhongzhaohui@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/hisilicon/hns3/hnae3.h                     | 2 +-
 drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h | 2 +-
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.h                 | 2 +-
 drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h         | 2 +-
 drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h       | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index 14b3991..0d38826 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -30,7 +30,7 @@
 #include <linux/pci.h>
 #include <linux/types.h>
 
-#define HNAE3_MOD_VERSION "1.9.37.9"
+#define HNAE3_MOD_VERSION "1.9.38.0"
 
 #define HNAE3_MIN_VECTOR_NUM	2 /* first one for misc, another for IO */
 
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h
index b89b110..d8dbc62 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h
@@ -4,7 +4,7 @@
 #ifndef __HNS3_CAE_VERSION_H__
 #define __HNS3_CAE_VERSION_H__
 
-#define HNS3_CAE_MOD_VERSION "1.9.37.9"
+#define HNS3_CAE_MOD_VERSION "1.9.38.0"
 
 #define CMT_ID_LEN 8
 #define RESV_LEN 3
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index 62f34e9..8506791 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -8,7 +8,7 @@
 
 #include "hnae3.h"
 
-#define HNS3_MOD_VERSION "1.9.37.9"
+#define HNS3_MOD_VERSION "1.9.38.0"
 
 extern char hns3_driver_version[];
 
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index 15af034..14e6e8a 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -12,7 +12,7 @@
 #include "hclge_cmd.h"
 #include "hnae3.h"
 
-#define HCLGE_MOD_VERSION "1.9.37.9"
+#define HCLGE_MOD_VERSION "1.9.38.0"
 #define HCLGE_DRIVER_NAME "hclge"
 
 #define HCLGE_MAX_PF_NUM 8
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
index f89ee1b..864b1f8 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
@@ -10,7 +10,7 @@
 #include "hclgevf_cmd.h"
 #include "hnae3.h"
 
-#define HCLGEVF_MOD_VERSION "1.9.37.9"
+#define HCLGEVF_MOD_VERSION "1.9.38.0"
 #define HCLGEVF_DRIVER_NAME "hclgevf"
 
 #define HCLGEVF_MAX_VLAN_ID 4095
From: Zefan Li <lizefan@huawei.com>

hulk inclusion
category: bugfix
bugzilla: 34583
CVE: NA
-------------------------------------------------
If systemd is configured to use hybrid mode, which enables the use of both cgroup v1 and v2, systemd will create a new cgroup on both the default root (v2) and the netprio_cgroup hierarchy (v1) for a new session and attach the task to both cgroups. If the task then does any networking, the v2 cgroup can never be freed after the session exits.
One of our machines ran into OOM due to this memory leak.
In the scenario described above, when sk_alloc() is called, cgroup_sk_alloc() thinks it's in v2 mode, so it stores the cgroup pointer in sk->sk_cgrp_data and increments the cgroup refcnt; but then sock_update_netprioidx() thinks it's in v1 mode, so it overwrites sk->sk_cgrp_data with the netprioidx value, and the cgroup refcnt is never released.
Currently we do the mode switch when someone writes to the ifpriomap cgroup control file. The easiest fix is to also do the switch when a task is attached to a new cgroup.
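The leak mechanism can be modeled in userspace with one word that, as in sk->sk_cgrp_data, holds either a cgroup pointer or a priority index; all names here are hypothetical toys, not the kernel's types:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of sk->sk_cgrp_data: a single word that holds either a
 * cgroup pointer (v2 mode) or a priority index (v1 mode). */
struct toy_cgroup { int refcnt; };

static uintptr_t sk_cgrp_data;

/* v2 path: take a reference and store the pointer. */
static void toy_sk_alloc_v2(struct toy_cgroup *cgrp)
{
	cgrp->refcnt++;
	sk_cgrp_data = (uintptr_t)cgrp;
}

/* v1 path: blindly overwrite the word with the prioidx. The pointer,
 * and with it the only handle for dropping the refcount, is lost. */
static void toy_update_netprioidx_v1(uint16_t prioidx)
{
	sk_cgrp_data = prioidx;
}
```

After `toy_sk_alloc_v2()` followed by `toy_update_netprioidx_v1()`, the refcount stays pinned at 1 forever, which is exactly the unfreeable-cgroup symptom; disabling the v2 path before any v1 attach avoids the mixed interpretation.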
Fixes: bd1060a1d671 ("sock, cgroup: add sock->sk_cgroup")
Reported-by: Yang Yingliang <yangyingliang@huawei.com>
Tested-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Zefan Li <lizefan@huawei.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 net/core/netprio_cgroup.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c
index b905747..2397866 100644
--- a/net/core/netprio_cgroup.c
+++ b/net/core/netprio_cgroup.c
@@ -240,6 +240,8 @@ static void net_prio_attach(struct cgroup_taskset *tset)
 	struct task_struct *p;
 	struct cgroup_subsys_state *css;
 
+	cgroup_sk_alloc_disable();
+
 	cgroup_taskset_for_each(p, css, tset) {
 		void *v = (void *)(unsigned long)css->cgroup->id;
From: Paolo Abeni <pabeni@redhat.com>

mainline inclusion
from mainline-v5.7
commit eead1c2ea2509fd754c6da893a94f0e69e83ebe4
category: bugfix
bugzilla: 13690
CVE: CVE-2020-10711
-------------------------------------------------
The cipso and calipso code can set the MLS_CAT attribute on successful parsing, even if the corresponding catmap has not been allocated, as per current configuration and external input.
Later, selinux code tries to access the catmap if the MLS_CAT flag is present via netlbl_catmap_getlong(). That may cause null ptr dereference while processing incoming network traffic.
Address the issue by setting the MLS_CAT flag only if the catmap is really allocated. Additionally, let netlbl_catmap_getlong() cope with a NULL catmap.
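The second half of the fix can be sketched in isolation: treat a NULL catmap like an empty one, so the caller sees "no more categories" instead of dereferencing NULL. This is a simplified toy mirroring the new guard, not the kernel function (the stand-in struct keeps only the field the sketch needs):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t u32;

/* Simplified stand-in for struct netlbl_lsm_catmap. */
struct toy_catmap { u32 startbit; };

static int toy_catmap_getlong(const struct toy_catmap *catmap, u32 *offset)
{
	/* a null catmap is equivalent to an empty one */
	if (!catmap) {
		*offset = (u32)-1;	/* "past the end" sentinel */
		return 0;
	}

	*offset = catmap->startbit;	/* real lookup elided */
	return 0;
}
```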
Reported-by: Matthew Sheets <matthew.sheets@gd-ms.com>
Fixes: 4b8feff251da ("netlabel: fix the horribly broken catmap functions")
Fixes: ceba1832b1b2 ("calipso: Set the calipso socket label to match the secattr.")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Wenan Mao <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 net/ipv4/cipso_ipv4.c        | 6 ++++--
 net/ipv6/calipso.c           | 3 ++-
 net/netlabel/netlabel_kapi.c | 6 ++++++
 3 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
index 1c21dc5..5535b72 100644
--- a/net/ipv4/cipso_ipv4.c
+++ b/net/ipv4/cipso_ipv4.c
@@ -1272,7 +1272,8 @@ static int cipso_v4_parsetag_rbm(const struct cipso_v4_doi *doi_def,
 			return ret_val;
 		}
 
-		secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+		if (secattr->attr.mls.cat)
+			secattr->flags |= NETLBL_SECATTR_MLS_CAT;
 	}
 
 	return 0;
@@ -1453,7 +1454,8 @@ static int cipso_v4_parsetag_rng(const struct cipso_v4_doi *doi_def,
 			return ret_val;
 		}
 
-		secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+		if (secattr->attr.mls.cat)
+			secattr->flags |= NETLBL_SECATTR_MLS_CAT;
 	}
 
 	return 0;
diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c
index 1c0bb9f..7061178 100644
--- a/net/ipv6/calipso.c
+++ b/net/ipv6/calipso.c
@@ -1061,7 +1061,8 @@ static int calipso_opt_getattr(const unsigned char *calipso,
 			goto getattr_return;
 		}
 
-		secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+		if (secattr->attr.mls.cat)
+			secattr->flags |= NETLBL_SECATTR_MLS_CAT;
 	}
 
 	secattr->type = NETLBL_NLTYPE_CALIPSO;
diff --git a/net/netlabel/netlabel_kapi.c b/net/netlabel/netlabel_kapi.c
index ee3e5b6..15fe212 100644
--- a/net/netlabel/netlabel_kapi.c
+++ b/net/netlabel/netlabel_kapi.c
@@ -748,6 +748,12 @@ int netlbl_catmap_getlong(struct netlbl_lsm_catmap *catmap,
 	if ((off & (BITS_PER_LONG - 1)) != 0)
 		return -EINVAL;
 
+	/* a null catmap is equivalent to an empty one */
+	if (!catmap) {
+		*offset = (u32)-1;
+		return 0;
+	}
+
 	if (off < catmap->startbit) {
 		off = catmap->startbit;
 		*offset = off;
From: Chiqijun <chiqijun@huawei.com>

driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Optimize VF interrupt coalescing for the SDI 3.0 SR-IOV scenario.
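The threshold values below are the ones added by this patch; the decision logic is a simplified sketch of the new update_queue_coal_sdi_vm() path (function and enum names here are illustrative, not the driver's): light traffic disables coalescing for latency, a mid pps band with high bps gets a fixed setting, and everything else falls back to the rate-based calculation.

```c
#include <assert.h>
#include <stdint.h>

/* Thresholds taken from the patch below. */
#define SDI_VM_PPS_3W		30000
#define SDI_VM_PPS_5W		50000
#define SDI_VM_BPS_100MB	12500000
#define SDI_VM_BPS_1GB		125000000

enum coal_choice { COAL_OFF, COAL_FIXED_7_7, COAL_CALCULATED };

static enum coal_choice pick_sdi_vm_coal(uint64_t rx_pps, uint64_t rx_bps,
					 uint64_t tx_pps, uint64_t tx_bps)
{
	if (rx_pps < SDI_VM_PPS_3W && tx_pps < SDI_VM_PPS_3W &&
	    rx_bps < SDI_VM_BPS_100MB && tx_bps < SDI_VM_BPS_100MB)
		return COAL_OFF;	/* light load: lowest latency */

	if (tx_pps > SDI_VM_PPS_3W && tx_pps < SDI_VM_PPS_5W &&
	    tx_bps > SDI_VM_BPS_1GB)
		return COAL_FIXED_7_7;	/* mid pps, large packets */

	return COAL_CALCULATED;		/* fall back to rate-based calc */
}
```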
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Luoshaokai <luoshaokai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/huawei/hinic/hinic_hw.h       |   1 +
 drivers/net/ethernet/huawei/hinic/hinic_lld.c      |   2 +
 drivers/net/ethernet/huawei/hinic/hinic_main.c     | 252 ++++++++++++++-------
 .../ethernet/huawei/hinic/hinic_multi_host_mgmt.c  |  10 +
 drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h  |   2 -
 drivers/net/ethernet/huawei/hinic/hinic_nic_dev.h  |   2 +-
 6 files changed, 188 insertions(+), 81 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw.h b/drivers/net/ethernet/huawei/hinic/hinic_hw.h
index 8876c29..38c77d0 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw.h
@@ -728,5 +728,6 @@ int hinic_mbox_ppf_to_vf(void *hwdev,
 void hinic_migrate_report(void *dev);
 int hinic_set_vxlan_udp_dport(void *hwdev, u32 udp_port);
 bool is_multi_vm_slave(void *hwdev);
+bool is_multi_bm_slave(void *hwdev);
 
 #endif
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
index 8a2f280..18d29c3 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
@@ -1728,6 +1728,8 @@ int hinic_ovs_set_vf_nic_state(struct hinic_lld_dev *lld_dev, u16 vf_func_id,
 			uld_dev->in_vm = true;
 			uld_dev->is_vm_slave = is_multi_vm_slave(uld_dev->hwdev);
+			uld_dev->is_bm_slave =
+				is_multi_bm_slave(uld_dev->hwdev);
 			if (des_dev->init_state < HINIC_INIT_STATE_NIC_INITED)
 				des_dev->init_state = HINIC_INIT_STATE_NIC_INITED;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
index b6da9fe..c036b9e 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
@@ -61,6 +61,21 @@
 #define HINIC_DEAULT_TXRX_MSIX_COALESC_TIMER_CFG	32
 #define HINIC_DEAULT_TXRX_MSIX_RESEND_TIMER_CFG		7
 
+/* suit for sdi3.0 vm mode, change this define for test best performance */
+#define SDI_VM_PENDING_LIMT		2
+#define SDI_VM_COALESCE_TIMER_CFG	16
+#define SDI_VM_RX_PKT_RATE_HIGH		1000000
+#define SDI_VM_RX_PKT_RATE_LOW		30000
+#define SDI_VM_RX_USECS_HIGH		56
+#define SDI_VM_RX_PENDING_LIMT_HIGH	20
+#define SDI_VM_RX_USECS_LOW		16
+#define SDI_VM_RX_PENDING_LIMT_LOW	2
+
+/* if qp_coalesc_use_drv_params_switch != 0, use user setting params */
+static unsigned char qp_coalesc_use_drv_params_switch;
+module_param(qp_coalesc_use_drv_params_switch, byte, 0444);
+MODULE_PARM_DESC(qp_coalesc_use_drv_params_switch, "QP MSI-X Interrupt coalescing parameter switch (default=0, not use drv parameter)");
+
 static unsigned char qp_pending_limit = HINIC_DEAULT_TXRX_MSIX_PENDING_LIMIT;
 module_param(qp_pending_limit, byte, 0444);
 MODULE_PARM_DESC(qp_pending_limit, "QP MSI-X Interrupt coalescing parameter pending_limit (default=2)");
@@ -465,7 +480,9 @@ static int hinic_poll(struct napi_struct *napi, int budget)
 			hinic_set_msix_state(nic_dev->hwdev,
 					     irq_cfg->msix_entry_idx,
 					     HINIC_MSIX_ENABLE);
-		else if (!nic_dev->in_vm)
+		else if (!nic_dev->in_vm &&
+			 (hinic_get_func_mode(nic_dev->hwdev) ==
+			  FUNC_MOD_NORMAL_HOST))
 			enable_irq(irq_cfg->irq_id);
 	}
@@ -500,7 +517,9 @@ static irqreturn_t qp_irq(int irq, void *data)
 		hinic_set_msix_state(nic_dev->hwdev,
 				     msix_entry_idx,
 				     HINIC_MSIX_DISABLE);
-	} else if (!nic_dev->in_vm) {
+	} else if (!nic_dev->in_vm &&
+		   (hinic_get_func_mode(nic_dev->hwdev) ==
+		    FUNC_MOD_NORMAL_HOST)) {
 		disable_irq_nosync(irq_cfg->irq_id);
 	}
@@ -603,12 +622,6 @@ static void __calc_coal_para(struct hinic_nic_dev *nic_dev,
 			     struct hinic_intr_coal_info *q_coal, u64 rate,
 			     u8 *coalesc_timer_cfg, u8 *pending_limt)
 {
-	if (nic_dev->is_vm_slave && nic_dev->in_vm) {
-		*coalesc_timer_cfg = HINIC_MULTI_VM_LATENCY;
-		*pending_limt = HINIC_MULTI_VM_PENDING_LIMIT;
-		return;
-	}
-
 	if (rate < q_coal->pkt_rate_low) {
 		*coalesc_timer_cfg = q_coal->rx_usecs_low;
 		*pending_limt = q_coal->rx_pending_limt_low;
@@ -635,19 +648,79 @@ static void __calc_coal_para(struct hinic_nic_dev *nic_dev,
 	}
 }
 
-static void hinic_auto_moderation_work(struct work_struct *work)
+static void update_queue_coal(struct hinic_nic_dev *nic_dev, u16 qid,
+			      u64 rate, u64 avg_pkt_size, u64 tx_rate)
 {
 	struct hinic_intr_coal_info *q_coal;
+	u8 coalesc_timer_cfg, pending_limt;
+
+	q_coal = &nic_dev->intr_coalesce[qid];
+
+	if ((rate > HINIC_RX_RATE_THRESH &&
+	     avg_pkt_size > HINIC_AVG_PKT_SMALL) ||
+	    (nic_dev->in_vm && rate > HINIC_RX_RATE_THRESH)) {
+		__calc_coal_para(nic_dev, q_coal, rate,
+				 &coalesc_timer_cfg, &pending_limt);
+	} else {
+		coalesc_timer_cfg = HINIC_LOWEST_LATENCY;
+		pending_limt = q_coal->rx_pending_limt_low;
+	}
+
+	set_interrupt_moder(nic_dev, qid, coalesc_timer_cfg,
+			    pending_limt);
+}
+
+#define SDI_VM_PPS_3W		30000
+#define SDI_VM_PPS_5W		50000
+
+#define SDI_VM_BPS_100MB	12500000
+#define SDI_VM_BPS_1GB		125000000
+
+static void update_queue_coal_sdi_vm(struct hinic_nic_dev *nic_dev,
+				     u16 qid, u64 rx_pps, u64 rx_bps,
+				     u64 tx_pps, u64 tx_bps)
+{
+	struct hinic_intr_coal_info *q_coal = NULL;
+	u8 coalesc_timer_cfg, pending_limt;
+
+	q_coal = &nic_dev->intr_coalesce[qid];
+	if (qp_coalesc_use_drv_params_switch == 0) {
+		if (rx_pps < SDI_VM_PPS_3W &&
+		    tx_pps < SDI_VM_PPS_3W &&
+		    rx_bps < SDI_VM_BPS_100MB &&
+		    tx_bps < SDI_VM_BPS_100MB) {
+			set_interrupt_moder(nic_dev, qid, 0, 0);
+		} else if (tx_pps > SDI_VM_PPS_3W &&
+			   tx_pps < SDI_VM_PPS_5W &&
+			   tx_bps > SDI_VM_BPS_1GB) {
+			set_interrupt_moder(nic_dev, qid, 7, 7);
+		} else {
+			__calc_coal_para(nic_dev, q_coal, rx_pps,
+					 &coalesc_timer_cfg,
+					 &pending_limt);
+			set_interrupt_moder(nic_dev, qid,
+					    coalesc_timer_cfg,
+					    pending_limt);
+		}
+	} else {
+		__calc_coal_para(nic_dev, q_coal, rx_pps,
+				 &coalesc_timer_cfg,
+				 &pending_limt);
+		set_interrupt_moder(nic_dev, qid, coalesc_timer_cfg,
+				    pending_limt);
+	}
+}
+
+static void hinic_auto_moderation_work(struct work_struct *work)
+{
 	struct delayed_work *delay = to_delayed_work(work);
 	struct hinic_nic_dev *nic_dev = container_of(delay,
 						     struct hinic_nic_dev,
 						     moderation_task);
 	unsigned long period = (unsigned long)(jiffies -
 					       nic_dev->last_moder_jiffies);
-	u64 rx_packets, rx_bytes, rx_pkt_diff, rate, avg_pkt_size;
-	u64 tx_packets, tx_bytes, tx_pkt_diff, tx_rate;
-	u8 coalesc_timer_cfg, pending_limt;
+	u64 rx_packets, rx_bytes, rx_pkt_diff, rate, avg_pkt_size;
+	u64 tx_packets, tx_bytes, tx_pkt_diff, tx_rate, rx_bps, tx_bps;
 	u16 qid;
 
 	if (!test_bit(HINIC_INTF_UP, &nic_dev->flags))
@@ -664,7 +737,6 @@ static void hinic_auto_moderation_work(struct work_struct *work)
 		rx_bytes = nic_dev->rxqs[qid].rxq_stats.bytes;
 		tx_packets = nic_dev->txqs[qid].txq_stats.packets;
 		tx_bytes = nic_dev->txqs[qid].txq_stats.bytes;
-		q_coal = &nic_dev->intr_coalesce[qid];
 
 		rx_pkt_diff =
 			rx_packets - nic_dev->rxqs[qid].last_moder_packets;
@@ -678,25 +750,21 @@ static void hinic_auto_moderation_work(struct work_struct *work)
 			tx_packets - nic_dev->txqs[qid].last_moder_packets;
 		tx_rate = tx_pkt_diff * HZ / period;
 
-		if ((rate > HINIC_RX_RATE_THRESH &&
-		     avg_pkt_size > HINIC_AVG_PKT_SMALL) ||
-		    (nic_dev->in_vm && (rate > HINIC_RX_RATE_THRESH ||
-		     (nic_dev->is_vm_slave &&
-		      tx_rate > HINIC_TX_RATE_THRESH)))) {
-			__calc_coal_para(nic_dev, q_coal, rate,
-					 &coalesc_timer_cfg, &pending_limt);
+		rx_bps = (unsigned long)(rx_bytes -
+			 nic_dev->rxqs[qid].last_moder_bytes)
+			 * HZ / period;
+		tx_bps = (unsigned long)(tx_bytes -
+			 nic_dev->txqs[qid].last_moder_bytes)
+			 * HZ / period;
+		if ((nic_dev->is_vm_slave && nic_dev->in_vm) ||
+		    nic_dev->is_bm_slave) {
+			update_queue_coal_sdi_vm(nic_dev, qid, rate, rx_bps,
+						 tx_rate, tx_bps);
 		} else {
-			coalesc_timer_cfg =
-				(nic_dev->is_vm_slave && nic_dev->in_vm) ?
-				0 : HINIC_LOWEST_LATENCY;
-			pending_limt =
-				(nic_dev->is_vm_slave && nic_dev->in_vm) ?
-				0 : q_coal->rx_pending_limt_low;
+			update_queue_coal(nic_dev, qid, rate, avg_pkt_size,
+					  tx_rate);
 		}
 
-		set_interrupt_moder(nic_dev, qid, coalesc_timer_cfg,
-				    pending_limt);
-
 		nic_dev->rxqs[qid].last_moder_packets = rx_packets;
 		nic_dev->rxqs[qid].last_moder_bytes = rx_bytes;
 		nic_dev->txqs[qid].last_moder_packets = tx_packets;
@@ -2436,9 +2504,82 @@ static void hinic_assign_netdev_ops(struct hinic_nic_dev *adapter)
 #define HINIC_DFT_PG_100GE_TXRX_MSIX_COALESC_TIMER	2
 #define HINIC_DFT_PG_ARM_100GE_TXRX_MSIX_COALESC_TIMER	3
 
+static void update_queue_coal_param(struct hinic_nic_dev *nic_dev,
+				    struct pci_device_id *id, u16 qid)
+{
+	struct hinic_intr_coal_info *info = NULL;
+
+	info = &nic_dev->intr_coalesce[qid];
+	if (!nic_dev->intr_coal_set_flag) {
+		switch (id->driver_data) {
+		case HINIC_BOARD_PG_TP_10GE:
+			info->pending_limt =
+				HINIC_DFT_PG_10GE_TXRX_MSIX_PENDING_LIMIT;
+			info->coalesce_timer_cfg =
+				HINIC_DFT_PG_10GE_TXRX_MSIX_COALESC_TIMER;
+			break;
+		case HINIC_BOARD_PG_SM_25GE:
+			info->pending_limt =
+				HINIC_DFT_PG_25GE_TXRX_MSIX_PENDING_LIMIT;
+			info->coalesce_timer_cfg =
+				HINIC_DFT_PG_ARM_25GE_TXRX_MSIX_COALESC_TIMER;
+			break;
+		case HINIC_BOARD_PG_100GE:
+			info->pending_limt =
+				HINIC_DFT_PG_100GE_TXRX_MSIX_PENDING_LIMIT;
+			info->coalesce_timer_cfg =
+				HINIC_DFT_PG_ARM_100GE_TXRX_MSIX_COALESC_TIMER;
+			break;
+		default:
+			info->pending_limt = qp_pending_limit;
+			info->coalesce_timer_cfg = qp_coalesc_timer_cfg;
+			break;
+		}
+	}
+
+	info->resend_timer_cfg = HINIC_DEAULT_TXRX_MSIX_RESEND_TIMER_CFG;
+	info->pkt_rate_high = HINIC_RX_RATE_HIGH;
+	info->rx_usecs_high = qp_coalesc_timer_high;
+	info->rx_pending_limt_high = qp_pending_limit_high;
+	info->pkt_rate_low = HINIC_RX_RATE_LOW;
+	info->rx_usecs_low = qp_coalesc_timer_low;
+	info->rx_pending_limt_low = qp_pending_limit_low;
+
+	if (nic_dev->in_vm) {
+		if (qp_pending_limit_high == HINIC_RX_PENDING_LIMIT_HIGH)
+			qp_pending_limit_high = HINIC_RX_PENDING_LIMIT_HIGH_VM;
+		info->pkt_rate_low = HINIC_RX_RATE_LOW_VM;
+		info->rx_pending_limt_high = qp_pending_limit_high;
+	}
+
+	/* suit for sdi3.0 vm mode vf drv or bm mode pf/vf drv */
+	if ((nic_dev->is_vm_slave && nic_dev->in_vm) ||
+	    nic_dev->is_bm_slave) {
+		info->pkt_rate_high = SDI_VM_RX_PKT_RATE_HIGH;
+		info->pkt_rate_low = SDI_VM_RX_PKT_RATE_LOW;
+
+		if (qp_coalesc_use_drv_params_switch == 0) {
+			/* if arm server, maybe need to change this value
+			 * again
+			 */
+			info->pending_limt = SDI_VM_PENDING_LIMT;
+			info->coalesce_timer_cfg = SDI_VM_COALESCE_TIMER_CFG;
+			info->rx_usecs_high = SDI_VM_RX_USECS_HIGH;
+			info->rx_pending_limt_high =
+				SDI_VM_RX_PENDING_LIMT_HIGH;
+			info->rx_usecs_low = SDI_VM_RX_USECS_LOW;
+			info->rx_pending_limt_low = SDI_VM_RX_PENDING_LIMT_LOW;
+		} else {
+			info->rx_usecs_high = qp_coalesc_timer_high;
+			info->rx_pending_limt_high = qp_pending_limit_high;
+			info->rx_usecs_low = qp_coalesc_timer_low;
+			info->rx_pending_limt_low = qp_pending_limit_low;
+		}
+	}
+}
+
 static void init_intr_coal_param(struct hinic_nic_dev *nic_dev)
 {
-	struct hinic_intr_coal_info *info;
 	struct pci_device_id *id;
 	u16 i;
@@ -2463,54 +2604,8 @@ static void init_intr_coal_param(struct hinic_nic_dev *nic_dev)
 			break;
 	}
 
-	for (i = 0; i < nic_dev->max_qps; i++) {
-		info = &nic_dev->intr_coalesce[i];
-		if (!nic_dev->intr_coal_set_flag) {
-			switch (id->driver_data) {
-			case HINIC_BOARD_PG_TP_10GE:
-				info->pending_limt =
-				HINIC_DFT_PG_10GE_TXRX_MSIX_PENDING_LIMIT;
-				info->coalesce_timer_cfg =
-				HINIC_DFT_PG_10GE_TXRX_MSIX_COALESC_TIMER;
-				break;
-			case HINIC_BOARD_PG_SM_25GE:
-				info->pending_limt =
-				HINIC_DFT_PG_25GE_TXRX_MSIX_PENDING_LIMIT;
-				info->coalesce_timer_cfg =
-				HINIC_DFT_PG_ARM_25GE_TXRX_MSIX_COALESC_TIMER;
-				break;
-			case HINIC_BOARD_PG_100GE:
-				info->pending_limt =
-				HINIC_DFT_PG_100GE_TXRX_MSIX_PENDING_LIMIT;
-				info->coalesce_timer_cfg =
-				HINIC_DFT_PG_ARM_100GE_TXRX_MSIX_COALESC_TIMER;
-				break;
-			default:
-				info->pending_limt = qp_pending_limit;
-				info->coalesce_timer_cfg = qp_coalesc_timer_cfg;
-				break;
-			}
-		}
-
-		info->resend_timer_cfg =
-			HINIC_DEAULT_TXRX_MSIX_RESEND_TIMER_CFG;
-		info->pkt_rate_high = HINIC_RX_RATE_HIGH;
-		info->rx_usecs_high = qp_coalesc_timer_high;
-		info->rx_pending_limt_high = qp_pending_limit_high;
-		info->pkt_rate_low = HINIC_RX_RATE_LOW;
-		info->rx_usecs_low = qp_coalesc_timer_low;
-		info->rx_pending_limt_low = qp_pending_limit_low;
-
-		if (nic_dev->in_vm) {
-			if (qp_pending_limit_high ==
-			    HINIC_RX_PENDING_LIMIT_HIGH)
-				qp_pending_limit_high =
-					HINIC_RX_PENDING_LIMIT_HIGH_VM;
-			info->pkt_rate_low = HINIC_RX_RATE_LOW_VM;
-			info->rx_pending_limt_high =
-				qp_pending_limit_high;
-		}
-	}
+	for (i = 0; i < nic_dev->max_qps; i++)
+		update_queue_coal_param(nic_dev, id, i);
 }
 
 static int hinic_init_intr_coalesce(struct hinic_nic_dev *nic_dev)
@@ -2832,6 +2927,7 @@ static int nic_probe(struct hinic_lld_dev *lld_dev, void **uld_dev,
 	nic_dev->heart_status = true;
 	nic_dev->in_vm = !hinic_is_in_host();
 	nic_dev->is_vm_slave = is_multi_vm_slave(lld_dev->hwdev);
+	nic_dev->is_bm_slave = is_multi_bm_slave(lld_dev->hwdev);
 	nic_dev->lro_replenish_thld = lro_replenish_thld;
 	nic_dev->rx_buff_len = (u16)(rx_buff * CONVERT_UNIT);
 	page_num = (RX_BUFF_NUM_PER_PAGE * nic_dev->rx_buff_len) / PAGE_SIZE;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_multi_host_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_multi_host_mgmt.c
index 95ea46a..3067b2f 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_multi_host_mgmt.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_multi_host_mgmt.c
@@ -150,6 +150,16 @@ bool is_multi_vm_slave(void *hwdev)
 	return (hw_dev->func_mode == FUNC_MOD_MULTI_VM_SLAVE) ? true : false;
 }
 
+bool is_multi_bm_slave(void *hwdev)
+{
+	struct hinic_hwdev *hw_dev = hwdev;
+
+	if (!hwdev)
+		return false;
+
+	return (hw_dev->func_mode == FUNC_MOD_MULTI_BM_SLAVE) ? true : false;
+}
+
 int rectify_host_mode(struct hinic_hwdev *hwdev)
 {
 	u16 cur_sdi_mode;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
index 7bb6d1c..395f6d8 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
@@ -53,8 +53,6 @@
 #define HINIC_LRO_RX_TIMER_DEFAULT_PG_100GE	8
 
 #define HINIC_LOWEST_LATENCY			1
-#define HINIC_MULTI_VM_LATENCY			32
-#define HINIC_MULTI_VM_PENDING_LIMIT		4
 #define HINIC_RX_RATE_LOW			400000
 #define HINIC_RX_COAL_TIME_LOW			20
 #define HINIC_RX_PENDING_LIMIT_LOW		2
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_dev.h b/drivers/net/ethernet/huawei/hinic/hinic_nic_dev.h
index c6879c43..e466f12 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_dev.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_dev.h
@@ -235,7 +235,7 @@ struct hinic_nic_dev {
 	/* interrupt coalesce must be different in virtual machine */
 	bool in_vm;
 	bool is_vm_slave;
-
+	int is_bm_slave;
 #ifndef HAVE_NETDEV_STATS_IN_NETDEV
 	struct net_device_stats net_stats;
 #endif
From: Chiqijun <chiqijun@huawei.com>

driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Support a variable SDI master host ppf_id by reading it from a hardware status register instead of assuming a fixed value.
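The patch extracts the master host's ppf index from a per-host 4-bit field starting at bit 16 of the slave-status register. A minimal sketch of that bit-field extraction, using the macro as added by the patch (the `make_status` helper is hypothetical, added only to exercise the macro):

```c
#include <assert.h>
#include <stdint.h>

/* Per-host 4-bit ppf index fields start at bit 16, as in the patch. */
#define MULTI_HOST_PPF_GET(host_id, val) \
	(((val) >> ((host_id) * 4 + 16)) & 0xf)

/* Hypothetical helper: build a register value carrying one host's
 * ppf index, purely for exercising the extraction macro. */
static uint32_t make_status(int host_id, uint32_t ppf_idx)
{
	return (ppf_idx & 0xfu) << (host_id * 4 + 16);
}
```

The master host uses host_id 0, so `MULTI_HOST_PPF_GET(0, reg_val)` yields its ppf index regardless of which value firmware programmed, which is what replaces the hard-coded 0/6/7 assignments.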
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Luoshaokai <luoshaokai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 .../ethernet/huawei/hinic/hinic_multi_host_mgmt.c | 30 ++++++++++++----------
 1 file changed, 17 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_multi_host_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_multi_host_mgmt.c
index 3067b2f..247a0b7 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_multi_host_mgmt.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_multi_host_mgmt.c
@@ -44,6 +44,18 @@
 		(((u8)(enable) & 1U) << (host_id))
 #define SLAVE_HOST_STATUS_GET(host_id, val)	(!!((val) & (1U << (host_id))))
 
+#define MULTI_HOST_PPF_GET(host_id, val) (((val) >> ((host_id) * 4 + 16)) & 0xf)
+
+static inline u8 get_master_host_ppf_idx(struct hinic_hwdev *hwdev)
+{
+	u32 reg_val;
+
+	reg_val = hinic_hwif_read_reg(hwdev->hwif,
+				      HINIC_MULT_HOST_SLAVE_STATUS_ADDR);
+	/* master host sets host_id to 0 */
+	return MULTI_HOST_PPF_GET(0, reg_val);
+}
+
 void set_slave_host_enable(struct hinic_hwdev *hwdev, u8 host_id, bool enable)
 {
 	u32 reg_val;
@@ -272,7 +284,7 @@ int __mbox_to_host(struct hinic_hwdev *hwdev, enum hinic_mod_type mod,
 
 	if (!mbox_hwdev->mhost_mgmt) {
 		/* send to master host in default */
-		dst_host_func_idx = 0;
+		dst_host_func_idx = get_master_host_ppf_idx(hwdev);
 	} else {
 		dst_host_func_idx = IS_MASTER_HOST(hwdev) ?
 				mbox_hwdev->mhost_mgmt->shost_ppf_idx :
@@ -881,22 +893,14 @@ int hinic_multi_host_mgmt_init(struct hinic_hwdev *hwdev)
 		return -ENOMEM;
 	}
 
+	hwdev->mhost_mgmt->mhost_ppf_idx = get_master_host_ppf_idx(hwdev);
+	hwdev->mhost_mgmt->shost_ppf_idx = 0;
+	hwdev->mhost_mgmt->shost_host_idx = 2;
+
 	err = hinic_get_hw_pf_infos(hwdev, &hwdev->mhost_mgmt->pf_infos);
 	if (err)
 		goto out_free_mhost_mgmt;
 
-	/* master ppf idx fix to 0 */
-	hwdev->mhost_mgmt->mhost_ppf_idx = 0;
-	if (IS_BMGW_MASTER_HOST(hwdev) || IS_BMGW_SLAVE_HOST(hwdev)) {
-		/* fix slave host ppf 6 and host 2 in bmwg mode
-		 */
-		hwdev->mhost_mgmt->shost_ppf_idx = 6;
-		hwdev->mhost_mgmt->shost_host_idx = 2;
-	} else {
-		hwdev->mhost_mgmt->shost_ppf_idx = 7;
-		hwdev->mhost_mgmt->shost_host_idx = 2;
-	}
-
 	hinic_register_ppf_mbox_cb(hwdev, HINIC_MOD_COMM,
 				   comm_ppf_mbox_handler);
 	hinic_register_ppf_mbox_cb(hwdev, HINIC_MOD_L2NIC,
From: Chiqijun <chiqijun@huawei.com>

driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
A VF on the 1822 chip consumes 1M + 48K of BAR space, and each PCI bridge reserves 4M of I/O memory, so after hot-plugging more than 3 network cards there is insufficient BAR space. To solve this problem, reduce the doorbell BAR space size used in the 1822 SDI bare-metal and virtual machine scenarios.
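The patch decides the doorbell size from a chip-mode field in bits 26~29 of CEQ_CTRL_0: per the comment added below, normal mode keeps a 512K doorbell area while the bmgw/vmgw gateway modes use 256K. A toy version of that decode, with the register-read and PF short-circuit elided (names and the raw 512K/256K values are taken from the patch's comment; the helper itself is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define DB_SIZE_NORMAL	(512 * 1024)	/* normal mode doorbell size */
#define DB_SIZE_GW_VF	(256 * 1024)	/* bmgw/vmgw mode doorbell size */

#define CEQ_CTRL_0_CHIP_MODE_SHIFT	26
#define CEQ_CTRL_0_CHIP_MODE_MASK	0xFU

enum toy_chip_mode { MODE_NORMAL, MODE_BMGW, MODE_VMGW };

/* Decode bits 26~29 of a CEQ_CTRL_0 value into a doorbell size. */
static uint32_t toy_get_db_size(uint32_t ceq_ctrl0)
{
	enum toy_chip_mode mode =
		(ceq_ctrl0 >> CEQ_CTRL_0_CHIP_MODE_SHIFT) &
		CEQ_CTRL_0_CHIP_MODE_MASK;

	switch (mode) {
	case MODE_BMGW:
	case MODE_VMGW:
		return DB_SIZE_GW_VF;	/* gateway VF: halved doorbell BAR */
	default:
		return DB_SIZE_NORMAL;
	}
}
```

Halving the per-VF doorbell area is what lets more VFs fit under the 4M window each PCI bridge reserves.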
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Luoshaokai <luoshaokai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/huawei/hinic/hinic_csr.h      |  4 ++
 drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h  |  7 ++
 drivers/net/ethernet/huawei/hinic/hinic_hwdev.h    |  6 --
 drivers/net/ethernet/huawei/hinic/hinic_hwif.c     | 75 +++++++++++++++++++---
 drivers/net/ethernet/huawei/hinic/hinic_hwif.h     |  2 +
 drivers/net/ethernet/huawei/hinic/hinic_lld.c      | 19 ++++--
 drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h |  2 +
 7 files changed, 93 insertions(+), 22 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_csr.h b/drivers/net/ethernet/huawei/hinic/hinic_csr.h
index 85c3221..76025e5 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_csr.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_csr.h
@@ -125,6 +125,10 @@
 #define HINIC_CEQ_CONS_IDX_0_ADDR_BASE		0x1008
 #define HINIC_CEQ_CONS_IDX_1_ADDR_BASE		0x100C
 
+/* For multi-host mgmt
+ * CEQ_CTRL_0_ADDR: bit26~29: uP write vf mode is normal(0x0), bmgw(0x1),
+ * vmgw(0x2)
+ */
 #define HINIC_CSR_CEQ_CTRL_0_ADDR(idx) \
 			(HINIC_CEQ_CTRL_0_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
 
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h
index 272a3d9..563188ab 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h
@@ -393,6 +393,12 @@ struct acl_service_cap {
 	u32 scqc_sz; /* 64B */
 };
 
+enum hinic_chip_mode {
+	CHIP_MODE_NORMAL,
+	CHIP_MODE_BMGW,
+	CHIP_MODE_VMGW,
+};
+
 bool hinic_support_nic(void *hwdev, struct nic_service_cap *cap);
 bool hinic_support_roce(void *hwdev, struct rdma_service_cap *cap);
 bool hinic_support_fcoe(void *hwdev, struct fcoe_service_cap *cap);
@@ -553,4 +559,5 @@ struct hinic_func_nic_state {
 u16 hinic_global_func_id_hw(void *hwdev);
 bool hinic_func_for_pt(void *hwdev);
 bool hinic_func_for_hwpt(void *hwdev);
+u32 hinic_get_db_size(void *cfg_reg_base, enum hinic_chip_mode *chip_mode);
 #endif
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h
index 89a761c..24391d0 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h
@@ -240,12 +240,6 @@ struct hinic_heartbeat_enhanced {
 
 #define HINIC_BOARD_TYPE_MULTI_HOST_ETH_25GE	12
 
-enum hinic_chip_mode {
-	CHIP_MODE_NORMAL,
-	CHIP_MODE_BMGW,
-	CHIP_MODE_VMGW,
-};
-
 /* new version of roce qp not limited by power of 2 */
 #define HINIC_CMD_VER_ROCE_QP		1
 /* new version for add function id in multi-host */
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwif.c b/drivers/net/ethernet/huawei/hinic/hinic_hwif.c
index d7066951f..1fc0fa5 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hwif.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hwif.c
@@ -283,14 +283,19 @@ static void set_mpf(struct hinic_hwif *hwif)
 	hinic_hwif_write_reg(hwif, addr, val);
 }
 
-static void init_db_area_idx(struct hinic_free_db_area *free_db_area)
+static void init_db_area_idx(struct hinic_hwif *hwif)
 {
+	struct hinic_free_db_area *free_db_area;
+	u32 db_max_areas;
 	u32 i;
 
-	for (i = 0; i < HINIC_DB_MAX_AREAS; i++)
+	free_db_area = &hwif->free_db_area;
+	db_max_areas = hwif->db_size / HINIC_DB_PAGE_SIZE;
+
+	for (i = 0; i < db_max_areas; i++)
 		free_db_area->db_idx[i] = i;
 
-	free_db_area->num_free = HINIC_DB_MAX_AREAS;
+	free_db_area->num_free = db_max_areas;
 
 	spin_lock_init(&free_db_area->idx_lock);
 }
@@ -298,6 +303,7 @@ static void init_db_area_idx(struct hinic_free_db_area *free_db_area)
 static int get_db_idx(struct hinic_hwif *hwif, u32 *idx)
 {
 	struct hinic_free_db_area *free_db_area = &hwif->free_db_area;
+	u32 db_max_areas = hwif->db_size / HINIC_DB_PAGE_SIZE;
 	u32 pos;
 	u32 pg_idx;
 
@@ -312,14 +318,14 @@ static int get_db_idx(struct hinic_hwif *hwif, u32 *idx)
 	free_db_area->num_free--;
 
 	pos = free_db_area->alloc_pos++;
-	pos &= HINIC_DB_MAX_AREAS - 1;
+	pos &= db_max_areas - 1;
 
 	pg_idx = free_db_area->db_idx[pos];
 
 	free_db_area->db_idx[pos] = 0xFFFFFFFF;
 
 	/* pg_idx out of range */
-	if (pg_idx >= HINIC_DB_MAX_AREAS)
+	if (pg_idx >= db_max_areas)
 		goto retry;
 
 	spin_unlock(&free_db_area->idx_lock);
@@ -332,15 +338,16 @@ static int get_db_idx(struct hinic_hwif *hwif, u32 *idx)
 static void free_db_idx(struct hinic_hwif *hwif, u32 idx)
 {
 	struct hinic_free_db_area *free_db_area = &hwif->free_db_area;
+	u32 db_max_areas = hwif->db_size / HINIC_DB_PAGE_SIZE;
 	u32 pos;
 
-	if (idx >= HINIC_DB_MAX_AREAS)
+	if (idx >= db_max_areas)
 		return;
 
 	spin_lock(&free_db_area->idx_lock);
 
 	pos = free_db_area->return_pos++;
-	pos &= HINIC_DB_MAX_AREAS - 1;
+	pos &= db_max_areas - 1;
 
 	free_db_area->db_idx[pos] = idx;
 
@@ -386,7 +393,7 @@ int hinic_alloc_db_addr(void *hwdev, void __iomem **db_base,
 
 	*db_base = hwif->db_base + idx * HINIC_DB_PAGE_SIZE;
 
-	if (!dwqe_base)
+	if (!dwqe_base || hwif->chip_mode != CHIP_MODE_NORMAL)
 		return 0;
 
 	offset = ((u64)idx) << PAGE_SHIFT;
@@ -433,7 +440,9 @@ int hinic_alloc_db_phy_addr(void *hwdev, u64 *db_base, u64 *dwqe_base)
 		return -EFAULT;
 
 	*db_base = hwif->db_base_phy + idx * HINIC_DB_PAGE_SIZE;
-	*dwqe_base = *db_base + HINIC_DB_DWQE_SIZE;
+
+	if (hwif->chip_mode == CHIP_MODE_NORMAL)
+		*dwqe_base = *db_base + HINIC_DB_DWQE_SIZE;
 
 	return 0;
 }
@@ -559,7 +568,13 @@ int hinic_init_hwif(struct hinic_hwdev *hwdev, void *cfg_reg_base,
 	hwif->db_base_phy = db_base_phy;
 	hwif->db_base = db_base;
 	hwif->dwqe_mapping = dwqe_mapping;
-	init_db_area_idx(&hwif->free_db_area);
+
+	hwif->db_size = hinic_get_db_size(cfg_reg_base, &hwif->chip_mode);
+
+	sdk_info(hwdev->dev_hdl, "Doorbell size: 0x%x, chip mode: %d\n",
+		 hwif->db_size, hwif->chip_mode);
+
+	init_db_area_idx(hwif);
 
 	err = wait_hwif_ready(hwdev);
 	if (err) {
@@ -924,3 +939,43 @@ u8 hinic_ppf_idx(void *hwdev)
 	return hwif->attr.ppf_idx;
 }
 EXPORT_SYMBOL(hinic_ppf_idx);
+
+#define CEQ_CTRL_0_CHIP_MODE_SHIFT	26
+#define CEQ_CTRL_0_CHIP_MODE_MASK	0xFU
+#define CEQ_CTRL_0_GET(val, member)	\
+	(((val) >> CEQ_CTRL_0_##member##_SHIFT) & \
+	 CEQ_CTRL_0_##member##_MASK)
+
+/**
+ * hinic_get_db_size - get db size ceq ctrl: bit26~29: uP write vf mode is
+ * normal(0x0), bmgw(0x1) or vmgw(0x2) and normal mode db size is 512k,
+ * bmgw or vmgw mode db size is 256k
+ * @cfg_reg_base: pointer to cfg_reg_base
+ * @chip_mode: pointer to chip_mode
+ */
+u32 hinic_get_db_size(void *cfg_reg_base, enum hinic_chip_mode *chip_mode)
+{
+	u32 attr0, ctrl0;
+
+	attr0 = be32_to_cpu(readl((u8 __iomem *)cfg_reg_base +
+				  HINIC_CSR_FUNC_ATTR0_ADDR));
+
+	/* PF is always normal mode & db size is 512K */
+	if (HINIC_AF0_GET(attr0, FUNC_TYPE) != TYPE_VF) {
+		*chip_mode = CHIP_MODE_NORMAL;
+		return HINIC_DB_DWQE_SIZE;
+	}
+
+	ctrl0 = be32_to_cpu(readl((u8 __iomem *)cfg_reg_base +
+				  HINIC_CSR_CEQ_CTRL_0_ADDR(0)));
+
+	*chip_mode = CEQ_CTRL_0_GET(ctrl0, CHIP_MODE);
+
+	switch (*chip_mode) {
+	case CHIP_MODE_VMGW:
+	case CHIP_MODE_BMGW:
+		return HINIC_GW_VF_DB_SIZE;
+	default:
+		return HINIC_DB_DWQE_SIZE;
+	}
+}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwif.h b/drivers/net/ethernet/huawei/hinic/hinic_hwif.h
index e5ac81a..9cd2ad8 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hwif.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hwif.h
@@ -64,6 +64,8 @@ struct hinic_hwif {
 	struct hinic_func_attr attr;
 
 	void *pdev;
+	enum hinic_chip_mode chip_mode;
+	u32 db_size;
 };
 
 struct hinic_dma_addr_align {
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
index 18d29c3..1636bb4 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
@@ -43,8 +43,6 @@
 #define HINIC_PCI_DB_BAR			4
 #define HINIC_PCI_VENDOR_ID			0x19e5
-#define HINIC_DB_DWQE_SIZE 0x00080000 - #define SELF_TEST_BAR_ADDR_OFFSET 0x883c
#define HINIC_SECOND_BASE (1000) @@ -131,6 +129,7 @@ struct hinic_pcidev { struct work_struct slave_nic_work; struct workqueue_struct *slave_nic_init_workq; struct delayed_work slave_nic_init_dwork; + enum hinic_chip_mode chip_mode; bool nic_cur_enable; bool nic_des_enable; }; @@ -1836,6 +1835,7 @@ void hinic_event_process(void *adapter, struct hinic_event_info *event)
static int mapping_bar(struct pci_dev *pdev, struct hinic_pcidev *pci_adapter) { + u32 db_dwqe_size; u64 dwqe_addr;
pci_adapter->cfg_reg_base = @@ -1854,19 +1854,25 @@ static int mapping_bar(struct pci_dev *pdev, struct hinic_pcidev *pci_adapter) goto map_intr_bar_err; }
+ db_dwqe_size = hinic_get_db_size(pci_adapter->cfg_reg_base, + &pci_adapter->chip_mode); + pci_adapter->db_base_phy = pci_resource_start(pdev, HINIC_PCI_DB_BAR); pci_adapter->db_base = ioremap(pci_adapter->db_base_phy, - HINIC_DB_DWQE_SIZE); + db_dwqe_size); if (!pci_adapter->db_base) { sdk_err(&pci_adapter->pcidev->dev, "Failed to map doorbell regs\n"); goto map_db_err; }
- dwqe_addr = pci_adapter->db_base_phy + HINIC_DB_DWQE_SIZE; + if (pci_adapter->chip_mode != CHIP_MODE_NORMAL) + return 0; + + dwqe_addr = pci_adapter->db_base_phy + db_dwqe_size;
/* arm do not support call ioremap_wc() */ - pci_adapter->dwqe_mapping = __ioremap(dwqe_addr, HINIC_DB_DWQE_SIZE, + pci_adapter->dwqe_mapping = __ioremap(dwqe_addr, db_dwqe_size, __pgprot(PROT_DEVICE_nGnRnE)); if (!pci_adapter->dwqe_mapping) { sdk_err(&pci_adapter->pcidev->dev, "Failed to io_mapping_create_wc\n"); @@ -1889,7 +1895,8 @@ static int mapping_bar(struct pci_dev *pdev, struct hinic_pcidev *pci_adapter)
static void unmapping_bar(struct hinic_pcidev *pci_adapter) { - iounmap(pci_adapter->dwqe_mapping); + if (pci_adapter->chip_mode == CHIP_MODE_NORMAL) + iounmap(pci_adapter->dwqe_mapping);
iounmap(pci_adapter->db_base); iounmap(pci_adapter->intr_reg_base); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h b/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h index c24c155..8730565 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h @@ -527,6 +527,8 @@ enum hinic_pf_status {
/* total doorbell or direct wqe size is 512kB, db num: 128, dwqe: 128 */ #define HINIC_DB_DWQE_SIZE 0x00080000 +/* BMGW & VMGW VF db size 256k, have no dwqe space */ +#define HINIC_GW_VF_DB_SIZE 0x00040000
/* db/dwqe page size: 4K */ #define HINIC_DB_PAGE_SIZE 0x00001000ULL
From: Chiqijun <chiqijun@huawei.com>

driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
SDI bare metal VF supports dynamic queue
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Luoshaokai <luoshaokai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/huawei/hinic/hinic_cfg.c | 23 +++++++++++++++++++++--
 drivers/net/ethernet/huawei/hinic/hinic_cfg.h | 3 ++-
 drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h | 3 +++
 drivers/net/ethernet/huawei/hinic/hinic_lld.c | 7 +++++++
 drivers/net/ethernet/huawei/hinic/hinic_nic_io.c | 10 ++++++++++
 5 files changed, 43 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_cfg.c b/drivers/net/ethernet/huawei/hinic/hinic_cfg.c index 70fe149..341637e 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_cfg.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_cfg.c @@ -264,12 +264,15 @@ static void parse_l2nic_res_cap(struct service_cap *cap, nic_cap->max_rqs = dev_cap->nic_max_rq + 1; nic_cap->vf_max_sqs = dev_cap->nic_vf_max_sq + 1; nic_cap->vf_max_rqs = dev_cap->nic_vf_max_rq + 1; + nic_cap->max_queue_allowed = 0; + nic_cap->dynamic_qp = 0; } else { nic_cap->max_sqs = dev_cap->nic_max_sq; nic_cap->max_rqs = dev_cap->nic_max_rq; nic_cap->vf_max_sqs = 0; nic_cap->vf_max_rqs = 0; nic_cap->max_queue_allowed = dev_cap->max_queue_allowed; + nic_cap->dynamic_qp = dev_cap->ovs_dq_en; }
if (dev_cap->nic_lro_en) @@ -510,6 +513,9 @@ static void parse_ovs_res_cap(struct service_cap *cap, ovs_cap->dev_ovs_cap.max_pctxs = dev_cap->ovs_max_qpc; ovs_cap->dev_ovs_cap.max_cqs = 0;
+ if (type == TYPE_PF || type == TYPE_PPF) + ovs_cap->dev_ovs_cap.dynamic_qp_en = dev_cap->ovs_dq_en; + pr_info("Get ovs resource capbility, max_qpc: 0x%x\n", ovs_cap->dev_ovs_cap.max_pctxs); } @@ -1329,6 +1335,7 @@ static int cfg_mbx_pf_proc_vf_msg(void *hwdev, u16 vf_id, u8 cmd, void *buf_in,
/* OVS VF resources */ dev_cap->ovs_max_qpc = ovs_cap->max_pctxs; + dev_cap->ovs_dq_en = ovs_cap->dynamic_qp_en;
*out_size = sizeof(*dev_cap);
@@ -1352,8 +1359,10 @@ static int cfg_mbx_pf_proc_vf_msg(void *hwdev, u16 vf_id, u8 cmd, void *buf_in, dev_cap->nic_max_sq = dev_cap_tmp.nic_max_sq + 1; dev_cap->nic_max_rq = dev_cap_tmp.nic_max_rq + 1; dev_cap->max_queue_allowed = dev_cap_tmp.max_queue_allowed; - sdk_info(dev->dev_hdl, "func_id(%u) fixed qnum %u max_queue_allowed %u\n", - func_id, dev_cap->nic_max_sq, dev_cap->max_queue_allowed); + + sdk_info(dev->dev_hdl, "func_id(%u) %s qnum %u max_queue_allowed %u\n", + func_id, (ovs_cap->dynamic_qp_en ? "dynamic" : "fixed"), + dev_cap->nic_max_sq, dev_cap->max_queue_allowed);
return 0; } @@ -1751,6 +1760,16 @@ bool hinic_support_ft(void *hwdev) } EXPORT_SYMBOL(hinic_support_ft);
+bool hinic_support_dynamic_q(void *hwdev) +{ + struct hinic_hwdev *dev = hwdev; + + if (!hwdev) + return false; + + return dev->cfg_mgmt->svc_cap.nic_cap.dynamic_qp ? true : false; +} + bool hinic_func_for_mgmt(void *hwdev) { struct hinic_hwdev *dev = hwdev; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_cfg.h b/drivers/net/ethernet/huawei/hinic/hinic_cfg.h index 50d11cc6..f6596e9 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_cfg.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_cfg.h @@ -442,7 +442,8 @@ struct hinic_dev_cap {
/* OVS */ u32 ovs_max_qpc; - u32 rsvd6; + u8 ovs_dq_en; + u8 rsvd5[3];
/* ToE */ u32 toe_max_pctx; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h index 563188ab..a2242cc 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.h @@ -73,6 +73,7 @@ struct nic_service_cap { u8 tso_sz; /* TSO context space: n*16B */
u16 max_queue_allowed; + u16 dynamic_qp; /* support dynamic queue */ };
struct dev_roce_svc_own_cap { @@ -358,6 +359,7 @@ struct dev_ovs_svc_cap { /* PF resources */ u32 max_pctxs; /* Parent Context: max specifications 1M */ u32 max_cqs; + u8 dynamic_qp_en;
/* VF resources */ u32 vf_max_pctxs; /* Parent Context: max specifications 1M */ @@ -412,6 +414,7 @@ enum hinic_chip_mode { bool hinic_support_rdma(void *hwdev, struct rdma_service_cap *cap); bool hinic_support_ft(void *hwdev); bool hinic_func_for_mgmt(void *hwdev); +bool hinic_support_dynamic_q(void *hwdev);
int hinic_set_toe_enable(void *hwdev, bool enable); bool hinic_get_toe_enable(void *hwdev); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c index 1636bb4..f851bb2 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c @@ -1644,6 +1644,9 @@ static int __set_nic_func_state(struct hinic_pcidev *pci_adapter) }
if (enable_nic) { + if (is_multi_bm_slave(pci_adapter->hwdev)) + hinic_set_vf_dev_cap(pci_adapter->hwdev); + err = attach_uld(pci_adapter, SERVICE_T_NIC, &g_uld_info[SERVICE_T_NIC]); if (err) { @@ -2060,6 +2063,10 @@ static void hinic_set_vf_load_state(struct hinic_pcidev *pci_adapter, if (hinic_func_type(pci_adapter->hwdev) == TYPE_VF) return;
+ /* The VF on the BM slave side must be probed */ + if (is_multi_bm_slave(pci_adapter->hwdev)) + vf_load_state = false; + func_id = hinic_global_func_id_hw(pci_adapter->hwdev);
chip_node = pci_adapter->chip_node; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c index e00ae11..bd5c869 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c @@ -30,6 +30,8 @@ #include "hinic_nic_io.h" #include "hinic_nic.h" #include "hinic_ctx_def.h" +#include "hinic_wq.h" +#include "hinic_cmdq.h"
#define HINIC_DEAULT_TX_CI_PENDING_LIMIT 0 #define HINIC_DEAULT_TX_CI_COALESCING_TIME 0 @@ -778,6 +780,14 @@ int hinic_init_nic_hwdev(void *hwdev, u16 rx_buff_len) if (!hwdev) return -EINVAL;
+ if (is_multi_bm_slave(hwdev) && hinic_support_dynamic_q(hwdev)) { + err = hinic_reinit_cmdq_ctxts(dev); + if (err) { + nic_err(dev->dev_hdl, "Failed to reinit cmdq\n"); + return err; + } + } + nic_io = dev->nic_io;
err = hinic_get_base_qpn(hwdev, &global_qpn);
From: Chiqijun <chiqijun@huawei.com>

driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Obtaining the firmware statistics takes a long time, which can easily cause live migration to fail. Therefore, remove the firmware statistics from the VF; the corresponding statistics can be obtained from the PF instead.
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Luoshaokai <luoshaokai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/huawei/hinic/hinic_ethtool.c | 36 ++++++++++++++---------
 1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c b/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c index 92bad0e..f321365 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c @@ -1170,11 +1170,13 @@ static int hinic_get_sset_count(struct net_device *netdev, int sset) case ETH_SS_STATS: q_num = nic_dev->num_qps; count = ARRAY_LEN(hinic_netdev_stats) + - ARRAY_LEN(hinic_nic_dev_stats) + - ARRAY_LEN(hinic_function_stats) + + ARRAY_LEN(hinic_nic_dev_stats) + (ARRAY_LEN(hinic_tx_queue_stats) + ARRAY_LEN(hinic_rx_queue_stats)) * q_num;
+ if (!HINIC_FUNC_IS_VF(nic_dev->hwdev)) + count += ARRAY_LEN(hinic_function_stats); + if (!HINIC_FUNC_IS_VF(nic_dev->hwdev) && FUNC_SUPPORT_PORT_SETTING(nic_dev->hwdev)) count += ARRAY_LEN(hinic_port_stats); @@ -1625,15 +1627,19 @@ static void hinic_get_ethtool_stats(struct net_device *netdev, data[i] = (hinic_nic_dev_stats[j].size == sizeof(u64)) ? *(u64 *)p : *(u32 *)p; } - err = hinic_get_vport_stats(nic_dev->hwdev, &vport_stats); - if (err) - nicif_err(nic_dev, drv, netdev, - "Failed to get function stats from fw\n");
- for (j = 0; j < ARRAY_LEN(hinic_function_stats); j++, i++) { - p = (char *)(&vport_stats) + hinic_function_stats[j].offset; - data[i] = (hinic_function_stats[j].size == - sizeof(u64)) ? *(u64 *)p : *(u32 *)p; + if (!HINIC_FUNC_IS_VF(nic_dev->hwdev)) { + err = hinic_get_vport_stats(nic_dev->hwdev, &vport_stats); + if (err) + nicif_err(nic_dev, drv, netdev, + "Failed to get function stats from fw\n"); + + for (j = 0; j < ARRAY_LEN(hinic_function_stats); j++, i++) { + p = (char *)(&vport_stats) + + hinic_function_stats[j].offset; + data[i] = (hinic_function_stats[j].size == + sizeof(u64)) ? *(u64 *)p : *(u32 *)p; + } }
if (!HINIC_FUNC_IS_VF(nic_dev->hwdev) && @@ -1689,10 +1695,12 @@ static void hinic_get_strings(struct net_device *netdev, p += ETH_GSTRING_LEN; }
- for (i = 0; i < ARRAY_LEN(hinic_function_stats); i++) { - memcpy(p, hinic_function_stats[i].name, - ETH_GSTRING_LEN); - p += ETH_GSTRING_LEN; + if (!HINIC_FUNC_IS_VF(nic_dev->hwdev)) { + for (i = 0; i < ARRAY_LEN(hinic_function_stats); i++) { + memcpy(p, hinic_function_stats[i].name, + ETH_GSTRING_LEN); + p += ETH_GSTRING_LEN; + } }
if (!HINIC_FUNC_IS_VF(nic_dev->hwdev) &&
From: Chiqijun <chiqijun@huawei.com>

driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
OVS does not support the csum offload capability for tunnel packets. If the protocol stack performs csum offloading of a tunnel packet, the packet's csum will be wrong; therefore, the csum offload capability for tunnel packets needs to be disabled in SDI mode.
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Luoshaokai <luoshaokai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/huawei/hinic/hinic_hw.h | 5 +++++
 drivers/net/ethernet/huawei/hinic/hinic_hwdev.h | 3 ++-
 drivers/net/ethernet/huawei/hinic/hinic_main.c | 16 ++++++++++------
 3 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw.h b/drivers/net/ethernet/huawei/hinic/hinic_hw.h index 38c77d0..404be1f 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw.h @@ -325,6 +325,8 @@ enum hinic_func_cap { HINIC_FUNC_SUPP_CHANGE_MAC = 1 << 9, /* OVS don't support SCTP_CRC/HW_VLAN/LRO */ HINIC_FUNC_OFFLOAD_OVS_UNSUPP = 1 << 10, + /* OVS don't support encap-tso/encap-csum */ + HINIC_FUNC_SUPP_ENCAP_TSO_CSUM = 1 << 11, };
#define FUNC_SUPPORT_MGMT(hwdev) \ @@ -363,6 +365,9 @@ enum hinic_func_cap { #define FUNC_SUPPORT_LRO(hwdev) \ (!(hinic_get_func_feature_cap(hwdev) & \ HINIC_FUNC_OFFLOAD_OVS_UNSUPP)) +#define FUNC_SUPPORT_ENCAP_TSO_CSUM(hwdev) \ + (!!(hinic_get_func_feature_cap(hwdev) & \ + HINIC_FUNC_SUPP_ENCAP_TSO_CSUM))
int hinic_init_hwdev(struct hinic_init_para *para); int hinic_set_vf_dev_cap(void *hwdev); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h index 24391d0..a096f91 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h @@ -198,7 +198,8 @@ struct hinic_heartbeat_enhanced { HINIC_FUNC_SUPP_DFX_REG | \ HINIC_FUNC_SUPP_RX_MODE | \ HINIC_FUNC_SUPP_SET_VF_MAC_VLAN | \ - HINIC_FUNC_SUPP_CHANGE_MAC) + HINIC_FUNC_SUPP_CHANGE_MAC | \ + HINIC_FUNC_SUPP_ENCAP_TSO_CSUM) #define HINIC_MULTI_BM_MASTER (HINIC_FUNC_MGMT | HINIC_FUNC_PORT | \ HINIC_FUNC_SUPP_DFX_REG | \ HINIC_FUNC_SUPP_RX_MODE | \ diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c index c036b9e..83af4a8 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_main.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c @@ -2243,10 +2243,12 @@ static void netdev_feature_init(struct net_device *netdev)
netdev->vlan_features = netdev->features;
+ if (FUNC_SUPPORT_ENCAP_TSO_CSUM(nic_dev->hwdev)) { #ifdef HAVE_ENCAPSULATION_TSO - netdev->features |= NETIF_F_GSO_UDP_TUNNEL | - NETIF_F_GSO_UDP_TUNNEL_CSUM; + netdev->features |= NETIF_F_GSO_UDP_TUNNEL | + NETIF_F_GSO_UDP_TUNNEL_CSUM; #endif /* HAVE_ENCAPSULATION_TSO */ + }
if (FUNC_SUPPORT_HW_VLAN(nic_dev->hwdev)) { #if defined(NETIF_F_HW_VLAN_CTAG_TX) @@ -2300,16 +2302,18 @@ static void netdev_feature_init(struct net_device *netdev) netdev->priv_flags |= IFF_UNICAST_FLT; #endif
+ if (FUNC_SUPPORT_ENCAP_TSO_CSUM(nic_dev->hwdev)) { #ifdef HAVE_ENCAPSULATION_CSUM - netdev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM - | NETIF_F_SCTP_CRC | NETIF_F_SG + netdev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM + | NETIF_F_SCTP_CRC | NETIF_F_SG; #ifdef HAVE_ENCAPSULATION_TSO - | NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_TSO_ECN + netdev->hw_enc_features |= NETIF_F_TSO | NETIF_F_TSO6 + | NETIF_F_TSO_ECN | NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_UDP_TUNNEL; - #endif /* HAVE_ENCAPSULATION_TSO */ #endif /* HAVE_ENCAPSULATION_CSUM */ + } }
#define MOD_PARA_VALIDATE_NUM_QPS(nic_dev, num_qps, out_qps) { \
From: Chiqijun <chiqijun@huawei.com>

driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
When a virtual machine undergoes live migration while idle, the final chunk of memory to be migrated is too large and needs to be optimized. Reducing the depth of the CEQ/AEQ on the VF achieves this memory optimization.
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Luoshaokai <luoshaokai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/huawei/hinic/hinic_eqs.c | 18 ++++++++++++++++--
 drivers/net/ethernet/huawei/hinic/hinic_eqs.h | 3 +++
 2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_eqs.c index 3fee9fb..024b43f 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_eqs.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_eqs.c @@ -1223,6 +1223,7 @@ int hinic_aeqs_init(struct hinic_hwdev *hwdev, u16 num_aeqs, struct hinic_aeqs *aeqs; int err; u16 i, q_id; + u32 aeq_len;
aeqs = kzalloc(sizeof(*aeqs), GFP_KERNEL); if (!aeqs) @@ -1245,8 +1246,14 @@ int hinic_aeqs_init(struct hinic_hwdev *hwdev, u16 num_aeqs, g_aeq_len = HINIC_DEFAULT_AEQ_LEN; }
+ if (HINIC_FUNC_TYPE(hwdev) == TYPE_VF && + hwdev->hwif->chip_mode != CHIP_MODE_NORMAL) + aeq_len = HINIC_VMGW_DEFAULT_AEQ_LEN; + else + aeq_len = g_aeq_len; + for (q_id = 0; q_id < num_aeqs; q_id++) { - err = init_eq(&aeqs->aeq[q_id], hwdev, q_id, g_aeq_len, + err = init_eq(&aeqs->aeq[q_id], hwdev, q_id, aeq_len, HINIC_AEQ, &msix_entries[q_id]); if (err) { sdk_err(hwdev->dev_hdl, "Failed to init aeq %d\n", @@ -1307,6 +1314,7 @@ int hinic_ceqs_init(struct hinic_hwdev *hwdev, u16 num_ceqs, struct hinic_ceqs *ceqs; int err; u16 i, q_id; + u32 ceq_len;
ceqs = kzalloc(sizeof(*ceqs), GFP_KERNEL); if (!ceqs) @@ -1322,6 +1330,12 @@ int hinic_ceqs_init(struct hinic_hwdev *hwdev, u16 num_ceqs, g_ceq_len = HINIC_DEFAULT_CEQ_LEN; }
+ if (HINIC_FUNC_TYPE(hwdev) == TYPE_VF && + hwdev->hwif->chip_mode != CHIP_MODE_NORMAL) + ceq_len = HINIC_VMGW_DEFAULT_CEQ_LEN; + else + ceq_len = g_ceq_len; + if (!g_num_ceqe_in_tasklet) { sdk_warn(hwdev->dev_hdl, "Module Parameter g_num_ceqe_in_tasklet can not be zero, resetting to %d\n", HINIC_TASK_PROCESS_EQE_LIMIT); @@ -1329,7 +1343,7 @@ int hinic_ceqs_init(struct hinic_hwdev *hwdev, u16 num_ceqs, }
for (q_id = 0; q_id < num_ceqs; q_id++) { - err = init_eq(&ceqs->ceq[q_id], hwdev, q_id, g_ceq_len, + err = init_eq(&ceqs->ceq[q_id], hwdev, q_id, ceq_len, HINIC_CEQ, &msix_entries[q_id]); if (err) { sdk_err(hwdev->dev_hdl, "Failed to init ceq %d\n", diff --git a/drivers/net/ethernet/huawei/hinic/hinic_eqs.h b/drivers/net/ethernet/huawei/hinic/hinic_eqs.h index 5813c4b..84840df 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_eqs.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_eqs.h @@ -35,6 +35,9 @@ #define HINIC_DEFAULT_AEQ_LEN 0x10000 #define HINIC_DEFAULT_CEQ_LEN 0x10000
+#define HINIC_VMGW_DEFAULT_AEQ_LEN 128 +#define HINIC_VMGW_DEFAULT_CEQ_LEN 1024 + #define HINIC_MIN_AEQ_LEN 64 #define HINIC_MAX_AEQ_LEN (512 * 1024) #define HINIC_MIN_CEQ_LEN 64
From: Chiqijun <chiqijun@huawei.com>

driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Use the number of queues set by the VF driver to determine whether data migration is required during live migration, and clear the queue count after the driver is reloaded to prevent accidental migration of stale data.
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Luoshaokai <luoshaokai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/huawei/hinic/hinic_eqs.c | 12 ++++++++++--
 drivers/net/ethernet/huawei/hinic/hinic_eqs.h | 2 --
 drivers/net/ethernet/huawei/hinic/hinic_hw.h | 2 ++
 drivers/net/ethernet/huawei/hinic/hinic_lld.c | 1 +
 drivers/net/ethernet/huawei/hinic/hinic_nic_io.c | 2 ++
 5 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_eqs.c index 024b43f..b8ea8ed 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_eqs.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_eqs.c @@ -229,9 +229,17 @@ static irqreturn_t aeq_interrupt(int irq, void *data); static irqreturn_t ceq_interrupt(int irq, void *data);
-void hinic_qps_num_set(struct hinic_hwdev *hwdev, u32 num_qps) +/** + * hinic_qps_num_set - set the number of queues that are actually opened, + * and instructs the migration driver to migrate specified queues + * during VF live migration. + * + * @hwdev: the pointer to hw device + * @num_qps: number of queue + */ +void hinic_qps_num_set(void *hwdev, u32 num_qps) { - struct hinic_hwif *hwif = hwdev->hwif; + struct hinic_hwif *hwif = ((struct hinic_hwdev *)hwdev)->hwif; u32 addr, val, ctrl;
addr = HINIC_CSR_AEQ_CTRL_0_ADDR(0); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_eqs.h b/drivers/net/ethernet/huawei/hinic/hinic_eqs.h index 84840df..5035f90 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_eqs.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_eqs.h @@ -159,8 +159,6 @@ enum hinic_msg_pipe_state {
void hinic_func_own_bit_set(struct hinic_hwdev *hwdev, u32 cfg);
-void hinic_qps_num_set(struct hinic_hwdev *hwdev, u32 num_qps); - int hinic_aeqs_init(struct hinic_hwdev *hwdev, u16 num_aeqs, struct irq_info *msix_entries);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw.h b/drivers/net/ethernet/huawei/hinic/hinic_hw.h index 404be1f..0947201 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw.h @@ -377,6 +377,8 @@ enum hinic_func_cap { void hinic_ppf_hwdev_unreg(void *hwdev); void hinic_ppf_hwdev_reg(void *hwdev, void *ppf_hwdev);
+void hinic_qps_num_set(void *hwdev, u32 num_qps); + bool hinic_is_hwdev_mod_inited(void *hwdev, enum hinic_hwdev_init_state state); enum hinic_func_mode hinic_get_func_mode(void *hwdev); u64 hinic_get_func_feature_cap(void *hwdev); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c index f851bb2..e8e446a 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c @@ -2318,6 +2318,7 @@ static int hinic_func_init(struct pci_dev *pdev, true : disable_vf_load;
hinic_set_vf_load_state(pci_adapter, vf_load_state); + hinic_qps_num_set(pci_adapter->hwdev, 0);
pci_adapter->lld_dev.pdev = pdev; pci_adapter->lld_dev.hwdev = pci_adapter->hwdev; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c index bd5c869..8d13105 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c @@ -763,6 +763,8 @@ void hinic_free_qp_ctxts(void *hwdev) if (!hwdev) return;
+ hinic_qps_num_set(hwdev, 0); + err = hinic_clean_root_ctxt(hwdev); if (err) nic_err(((struct hinic_hwdev *)hwdev)->dev_hdl,
From: Chiqijun <chiqijun@huawei.com>

driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Communication between the VF driver and the firmware depends on the AEQ channel. Interrupts may be lost during live migration, so use polling mode when interrupt retransmission is not configured, and configure AEQ interrupt retransmission preferentially.
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Luoshaokai <luoshaokai@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/net/ethernet/huawei/hinic/hinic_hw.h | 3 +-
 drivers/net/ethernet/huawei/hinic/hinic_hwdev.c | 104 ++++++++++++------------
 drivers/net/ethernet/huawei/hinic/hinic_mbox.c | 18 ++--
 drivers/net/ethernet/huawei/hinic/hinic_mbox.h | 9 ++
 4 files changed, 78 insertions(+), 56 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw.h b/drivers/net/ethernet/huawei/hinic/hinic_hw.h index 0947201..e489a00 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw.h +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw.h @@ -199,7 +199,8 @@ struct nic_interrupt_info {
int hinic_get_interrupt_cfg(void *hwdev, struct nic_interrupt_info *interrupt_info); - +int hinic_set_interrupt_cfg_direct(void *hwdev, + struct nic_interrupt_info *interrupt_info); int hinic_set_interrupt_cfg(void *hwdev, struct nic_interrupt_info interrupt_info);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c index 8d6675c..101b671 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c @@ -1572,45 +1572,27 @@ int hinic_get_interrupt_cfg(void *hwdev, } EXPORT_SYMBOL(hinic_get_interrupt_cfg);
-int hinic_set_interrupt_cfg(void *hwdev, - struct nic_interrupt_info interrupt_info) +int hinic_set_interrupt_cfg_direct(void *hwdev, + struct nic_interrupt_info *interrupt_info) { struct hinic_hwdev *nic_hwdev = hwdev; struct hinic_msix_config msix_cfg = {0}; - struct nic_interrupt_info temp_info; u16 out_size = sizeof(msix_cfg); int err;
if (!hwdev) return -EINVAL;
- temp_info.msix_index = interrupt_info.msix_index; - - err = hinic_get_interrupt_cfg(hwdev, &temp_info); - if (err) - return -EINVAL; - err = hinic_global_func_id_get(hwdev, &msix_cfg.func_id); if (err) return err;
- msix_cfg.msix_index = (u16)interrupt_info.msix_index; - msix_cfg.lli_credit_cnt = temp_info.lli_credit_limit; - msix_cfg.lli_tmier_cnt = temp_info.lli_timer_cfg; - msix_cfg.pending_cnt = temp_info.pending_limt; - msix_cfg.coalesct_timer_cnt = temp_info.coalesc_timer_cfg; - msix_cfg.resend_timer_cnt = temp_info.resend_timer_cfg; - - if (interrupt_info.lli_set) { - msix_cfg.lli_credit_cnt = interrupt_info.lli_credit_limit; - msix_cfg.lli_tmier_cnt = interrupt_info.lli_timer_cfg; - } - - if (interrupt_info.interrupt_coalesc_set) { - msix_cfg.pending_cnt = interrupt_info.pending_limt; - msix_cfg.coalesct_timer_cnt = interrupt_info.coalesc_timer_cfg; - msix_cfg.resend_timer_cnt = interrupt_info.resend_timer_cfg; - } + msix_cfg.msix_index = (u16)interrupt_info->msix_index; + msix_cfg.lli_credit_cnt = interrupt_info->lli_credit_limit; + msix_cfg.lli_tmier_cnt = interrupt_info->lli_timer_cfg; + msix_cfg.pending_cnt = interrupt_info->pending_limt; + msix_cfg.coalesct_timer_cnt = interrupt_info->coalesc_timer_cfg; + msix_cfg.resend_timer_cnt = interrupt_info->resend_timer_cfg;
err = hinic_msg_to_mgmt_sync(hwdev, HINIC_MOD_COMM, HINIC_MGMT_CMD_MSI_CTRL_REG_WR_BY_UP, @@ -1619,11 +1601,40 @@ int hinic_set_interrupt_cfg(void *hwdev, if (err || !out_size || msix_cfg.status) { sdk_err(nic_hwdev->dev_hdl, "Failed to set interrupt config, err: %d, status: 0x%x, out size: 0x%x\n", err, msix_cfg.status, out_size); - return -EINVAL; + return -EFAULT; }
return 0; } + +int hinic_set_interrupt_cfg(void *hwdev, + struct nic_interrupt_info interrupt_info) +{ + struct nic_interrupt_info temp_info; + int err; + + if (!hwdev) + return -EINVAL; + + temp_info.msix_index = interrupt_info.msix_index; + + err = hinic_get_interrupt_cfg(hwdev, &temp_info); + if (err) + return -EINVAL; + + if (!interrupt_info.lli_set) { + interrupt_info.lli_credit_limit = temp_info.lli_credit_limit; + interrupt_info.lli_timer_cfg = temp_info.lli_timer_cfg; + } + + if (!interrupt_info.interrupt_coalesc_set) { + interrupt_info.pending_limt = temp_info.pending_limt; + interrupt_info.coalesc_timer_cfg = temp_info.coalesc_timer_cfg; + interrupt_info.resend_timer_cfg = temp_info.resend_timer_cfg; + } + + return hinic_set_interrupt_cfg_direct(hwdev, &interrupt_info); +} EXPORT_SYMBOL(hinic_set_interrupt_cfg);
void hinic_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx, @@ -1650,7 +1661,7 @@ static int init_aeqs_msix_attr(struct hinic_hwdev *hwdev) struct hinic_aeqs *aeqs = hwdev->aeqs; struct nic_interrupt_info info = {0}; struct hinic_eq *eq; - u16 q_id; + int q_id; int err;
info.lli_set = 0; @@ -1659,10 +1670,10 @@ static int init_aeqs_msix_attr(struct hinic_hwdev *hwdev) info.coalesc_timer_cfg = HINIC_DEAULT_EQ_MSIX_COALESC_TIMER_CFG; info.resend_timer_cfg = HINIC_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
- for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) { + for (q_id = aeqs->num_aeqs - 1; q_id >= 0; q_id--) { eq = &aeqs->aeq[q_id]; info.msix_index = eq->eq_irq.msix_entry_idx; - err = hinic_set_interrupt_cfg(hwdev, info); + err = hinic_set_interrupt_cfg_direct(hwdev, &info); if (err) { sdk_err(hwdev->dev_hdl, "Set msix attr for aeq %d failed\n", q_id); @@ -1670,6 +1681,8 @@ static int init_aeqs_msix_attr(struct hinic_hwdev *hwdev) } }
+ hinic_set_mbox_seg_ack_mod(hwdev, HINIC_MBOX_SEND_MSG_INT); + return 0; }
@@ -2320,24 +2333,6 @@ static int __get_func_misc_info(struct hinic_hwdev *hwdev) return 0; }
-static int __init_eqs_msix_attr(struct hinic_hwdev *hwdev) -{ - int err; - - err = init_aeqs_msix_attr(hwdev); - if (err) { - sdk_err(hwdev->dev_hdl, "Failed to init aeqs msix attr\n"); - return err; - } - - err = init_ceqs_msix_attr(hwdev); - if (err) { - sdk_err(hwdev->dev_hdl, "Failed to init ceqs msix attr\n"); - return err; - } - - return 0; -}
/* initialize communication channel */ int hinic_init_comm_ch(struct hinic_hwdev *hwdev) @@ -2375,6 +2370,12 @@ int hinic_init_comm_ch(struct hinic_hwdev *hwdev) goto func_to_func_init_err; }
+ err = init_aeqs_msix_attr(hwdev); + if (err) { + sdk_err(hwdev->dev_hdl, "Failed to init aeqs msix attr\n"); + goto aeqs_msix_attr_init_err; + } + err = __get_func_misc_info(hwdev); if (err) { sdk_err(hwdev->dev_hdl, "Failed to get function msic information\n"); @@ -2406,9 +2407,11 @@ int hinic_init_comm_ch(struct hinic_hwdev *hwdev) goto ceqs_init_err; }
- err = __init_eqs_msix_attr(hwdev); - if (err) + err = init_ceqs_msix_attr(hwdev); + if (err) { + sdk_err(hwdev->dev_hdl, "Failed to init ceqs msix attr\n"); goto init_eqs_msix_err; + }
/* set default wq page_size */ hwdev->wq_page_size = HINIC_DEFAULT_WQ_PAGE_SIZE; @@ -2469,6 +2472,7 @@ int hinic_init_comm_ch(struct hinic_hwdev *hwdev) l2nic_reset_err: rectify_mode_err: get_func_info_err: +aeqs_msix_attr_init_err: func_to_func_init_err: return err;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_mbox.c b/drivers/net/ethernet/huawei/hinic/hinic_mbox.c index ce6aa36..32c27a1 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_mbox.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_mbox.c @@ -173,10 +173,6 @@ enum hinic_hwif_direction_type { HINIC_HWIF_RESPONSE = 1, };
-enum mbox_send_mod { - MBOX_SEND_MSG_INT, -}; - enum mbox_seg_type { NOT_LAST_SEG, LAST_SEG, @@ -1146,7 +1142,8 @@ static int send_mbox_to_func(struct hinic_mbox_func_to_func *func_to_func, }
 	err = send_mbox_seg(func_to_func, header, dst_func, msg_seg,
-			    seg_len, MBOX_SEND_MSG_INT, msg_info);
+			    seg_len, func_to_func->send_ack_mod,
+			    msg_info);
 	if (err) {
 		sdk_err(hwdev->dev_hdl, "Failed to send mbox seg, seq_id=0x%llx\n",
 			HINIC_MBOX_HEADER_GET(header, SEQID));
@@ -1601,6 +1598,15 @@ int hinic_vf_mbox_random_id_init(struct hinic_hwdev *hwdev)
 	return err;
 }
+void hinic_set_mbox_seg_ack_mod(struct hinic_hwdev *hwdev,
+				enum hinic_mbox_send_mod mod)
+{
+	if (!hwdev || !hwdev->func_to_func)
+		return;
+
+	hwdev->func_to_func->send_ack_mod = mod;
+}
+
 int hinic_func_to_func_init(struct hinic_hwdev *hwdev)
 {
 	struct hinic_mbox_func_to_func *func_to_func;
@@ -1646,6 +1652,8 @@ int hinic_func_to_func_init(struct hinic_hwdev *hwdev)
prepare_send_mbox(func_to_func);
+	func_to_func->send_ack_mod = HINIC_MBOX_SEND_MSG_POLL;
+
 	return 0;
 alloc_wb_status_err:
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_mbox.h b/drivers/net/ethernet/huawei/hinic/hinic_mbox.h
index cfb118a..b76b960 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_mbox.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_mbox.h
@@ -101,6 +101,11 @@ enum hinic_mbox_cb_state {
 	HINIC_PPF_TO_PF_MBOX_CB_RUNNIG,
 };
+enum hinic_mbox_send_mod {
+	HINIC_MBOX_SEND_MSG_INT,
+	HINIC_MBOX_SEND_MSG_POLL,
+};
+
 struct hinic_mbox_func_to_func {
 	struct hinic_hwdev *hwdev;
@@ -130,6 +135,7 @@ struct hinic_mbox_func_to_func {
 	u32 *vf_mbx_old_rand_id;
 	u32 *vf_mbx_rand_id;
 	bool support_vf_random;
+	enum hinic_mbox_send_mod send_ack_mod;
 };
 struct hinic_mbox_work {
@@ -229,4 +235,7 @@ int __hinic_mbox_to_vf(void *hwdev,
 int vf_to_pf_handler(void *handle, u16 vf_id, u8 cmd, void *buf_in,
 		     u16 in_size, void *buf_out, u16 *out_size);
+void hinic_set_mbox_seg_ack_mod(struct hinic_hwdev *hwdev,
+				enum hinic_mbox_send_mod mod);
+
 #endif
From: Wang ShaoBo <bobo.shaobowang@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 28338/31814
CVE: NA
-----------------------------------------------
CONFIG_JUMP_LABEL depends on CC_HAS_ASM_GOTO, which some older compilers
do not support. If CONFIG_JUMP_LABEL is disabled, the build fails as
follows:
kernel/livepatch/core.c: In function ‘klp_init_object_loaded’:
kernel/livepatch/core.c:1084:2: error: implicit declaration of function ‘jump_label_register’ [-Werror=implicit-function-declaration]
  ret = jump_label_register(patch->mod);
  ^
Fixes: 292937f547e6 ("livepatch/core: support jump_label")
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 include/linux/jump_label.h | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index c2d4a21..8028836 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -240,6 +240,11 @@ static inline int jump_label_apply_nops(struct module *mod)
 	return 0;
 }
+static inline int jump_label_register(struct module *mod)
+{
+	return 0;
+}
+
 static inline void static_key_enable(struct static_key *key)
 {
 	STATIC_KEY_CHECK_USE(key);