From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.8
commit 3ec5f54f7a0f
category: bugfix
bugzilla: NA
CVE: NA
The hns NIC driver divides the reset process into three stages: initialization, hardware resetting and software resetting. The RoCE driver obtains the reset stage through interfaces provided by the NIC driver, and commands are not sent to the IMP while the driver is in any of these stages. The root cause of this issue is the time gap between stage 1 and stage 2: if the RoCE driver sends commands to the IMP during this gap, the IMP stops working because it is not yet ready.
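To make the race concrete, here is a minimal sketch of the reset check performed in hns_roce_v2_rst_process_cmd() after this change; the hnae3 callback names come from the diff, while the wrapper function itself is purely illustrative:

        /* get_cmdq_stat() reports a reset as soon as it is scheduled, so it
         * also covers the gap between stage 1 (initialization) and stage 2
         * (hardware resetting), during which get_hw_reset_stat() would still
         * report "not resetting". Illustrative sketch, not driver code.
         */
        static bool v2_reset_in_progress(struct hnae3_handle *handle,
                                         const struct hnae3_ae_ops *ops)
        {
                bool hw_resetting = ops->get_cmdq_stat(handle);
                bool sw_resetting = ops->ae_dev_resetting(handle);

                return hw_resetting || sw_resetting;
        }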
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 07932c1e029ab..eb9654707a213 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -957,7 +957,7 @@ static int hns_roce_v2_rst_process_cmd(struct hns_roce_dev *hr_dev)
 	instance_stage = handle->rinfo.instance_state;
 	reset_stage = handle->rinfo.reset_state;
 	reset_cnt = ops->ae_dev_reset_cnt(handle);
-	hw_resetting = ops->get_hw_reset_stat(handle);
+	hw_resetting = ops->get_cmdq_stat(handle);
 	sw_resetting = ops->ae_dev_resetting(handle);

 	if (reset_cnt != hr_dev->reset_cnt)
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.9
commit 221109e64316
category: bugfix
bugzilla: NA
CVE: NA
HIP08 doesn't support modifying the maximum number of outstanding WRs in an SRQ, but the driver doesn't return an error when a user attempts it, so users may mistakenly think the resize was executed successfully. The driver needs to intercept this operation.
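For illustration, userspace now sees the failure explicitly; the snippet below is a hypothetical caller (the SRQ setup is assumed), not part of the patch:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <infiniband/verbs.h>

        static void try_resize_srq(struct ibv_srq *srq)
        {
                struct ibv_srq_attr attr = { .max_wr = 2048 };
                int ret = ibv_modify_srq(srq, &attr, IBV_SRQ_MAX_WR);

                if (ret)        /* fails with EINVAL on HIP08 after this fix */
                        fprintf(stderr, "SRQ resize rejected: %s\n",
                                strerror(ret));
        }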
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index eb9654707a213..81a72a897e7f4 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -6921,6 +6921,10 @@ static int hns_roce_v2_modify_srq(struct ib_srq *ibsrq,
 	struct hns_roce_cmd_mailbox *mailbox;
 	int ret;

+	/* Resizing SRQs is not supported yet */
+	if (srq_attr_mask & IB_SRQ_MAX_WR)
+		return -EINVAL;
+
 	if (srq_attr_mask & IB_SRQ_LIMIT) {
 		if (srq_attr->srq_limit >= srq->max) {
 			dev_err(hr_dev->dev,
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.12
commit b5df9b7a2f965b7903850d8f89846ffe0080b84b
category: bugfix
bugzilla: NA
CVE: NA
According to the IB Specification, srq_limit shouldn't be configured during SRQ creation. If a user sets srq_limit at this time, the driver should force it to zero; otherwise the result of creating the SRQ will conflict with the result of querying it.
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_srq.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index 630bf17c281c9..14724df9c8e94 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -529,6 +529,7 @@ struct ib_srq *hns_roce_create_srq(struct ib_pd *pd,
 	srq->ibsrq.ext.xrc.srq_num = srq->srqn;
 	srq_init_attr->attr.max_wr = srq->max;
 	srq_init_attr->attr.max_sge = srq->max_gs - srq->rsv_sge;
+	srq_init_attr->attr.srq_limit = 0;

 	if (pd->uobject) {
 		if (ib_copy_to_udata(udata, &srq->srqn, sizeof(__u32))) {
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.13
commit 847d19a45146
category: bugfix
bugzilla: NA
CVE: NA
Implement the get_dev_fw_str ops to support ib_get_device_fw_str().
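As a usage note (not part of the patch), the string built by this ops becomes visible through /sys/class/infiniband/<dev>/fw_ver and the standard verbs device attributes; a minimal libibverbs sketch, assuming an already opened device context:

        #include <stdio.h>
        #include <infiniband/verbs.h>

        static void print_fw_ver(struct ibv_context *ctx)
        {
                struct ibv_device_attr attr;

                if (!ibv_query_device(ctx, &attr))
                        printf("%s fw_ver: %s\n",
                               ibv_get_device_name(ctx->device), attr.fw_ver);
        }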
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_main.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index 9ed050f790a40..a604e6d931544 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -701,6 +701,19 @@ static void hns_roce_disassociate_ucontext(struct ib_ucontext *ibcontext)
 	mutex_unlock(&context->vma_list_mutex);
 }

+static void hns_roce_get_fw_ver(struct ib_device *device, char *str)
+{
+	u64 fw_ver = to_hr_dev(device)->caps.fw_ver;
+	unsigned int major, minor, sub_minor;
+
+	major = upper_32_bits(fw_ver);
+	minor = high_16_bits(lower_32_bits(fw_ver));
+	sub_minor = low_16_bits(fw_ver);
+
+	snprintf(str, IB_FW_VERSION_NAME_MAX, "%u.%u.%04u", major, minor,
+		 sub_minor);
+}
+
 static const char * const hns_roce_hw_stats_name[] = {
 	"pd_alloc",
 	"pd_dealloc",
@@ -923,6 +936,9 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
 	ib_dev->dealloc_ucontext = hns_roce_dealloc_ucontext;
 	ib_dev->mmap = hns_roce_mmap;

+	/* FW */
+	ib_dev->get_dev_fw_str = hns_roce_get_fw_ver;
+
 	/* PD */
 	ib_dev->alloc_pd = hns_roce_alloc_pd;
 	ib_dev->dealloc_pd = hns_roce_dealloc_pd;
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.13
commit 24f3f1cd5154
category: cleanup
bugzilla: NA
CVE: NA
RQ inline is not supported on UD/GSI QPs, so avoid enabling it for these QP types.
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 81a72a897e7f4..8165a9c5bb12a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -4036,8 +4036,9 @@ static void modify_qp_reset_to_init(struct ib_qp *ibqp,
 	context->rq_db_record_addr = cpu_to_le32(hr_qp->rdb.dma >> 32);
 	qpc_mask->rq_db_record_addr = 0;

-	roce_set_bit(context->byte_76_srqn_op_en, V2_QPC_BYTE_76_RQIE_S,
-		     (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) ? 1 : 0);
+	if (ibqp->qp_type != IB_QPT_UD && ibqp->qp_type != IB_QPT_GSI)
+		roce_set_bit(context->byte_76_srqn_op_en, V2_QPC_BYTE_76_RQIE_S,
+			     !!(hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE));
 	roce_set_bit(qpc_mask->byte_76_srqn_op_en, V2_QPC_BYTE_76_RQIE_S, 0);

 	roce_set_field(context->byte_80_rnr_rx_cqn, V2_QPC_BYTE_80_RX_CQN_M,
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.13
commit 9eab614338cd
category: bugfix
bugzilla: NA
CVE: NA
When querying QP, the ULPs should be informed of the max length of inline data supported by the hardware.
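For context, this is how a ULP would read the value this patch starts reporting; an illustrative userspace sketch (QP setup assumed):

        #include <stdio.h>
        #include <infiniband/verbs.h>

        static void show_max_inline(struct ibv_qp *qp)
        {
                struct ibv_qp_attr attr;
                struct ibv_qp_init_attr init_attr;

                if (!ibv_query_qp(qp, &attr, IBV_QP_CAP, &init_attr))
                        printf("max_inline_data: %u\n",
                               attr.cap.max_inline_data);
        }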
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 8165a9c5bb12a..7bc24a77b794e 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -5403,6 +5403,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	qp_attr->cur_qp_state = qp_attr->qp_state;
 	qp_attr->cap.max_recv_wr = hr_qp->rq.wqe_cnt;
 	qp_attr->cap.max_recv_sge = hr_qp->rq.max_gs - hr_qp->rq.rsv_sge;
+	qp_attr->cap.max_inline_data = hr_dev->caps.max_sq_inline;

 	if (!ibqp->uobject) {
 		qp_attr->cap.max_send_wr = hr_qp->sq.wqe_cnt;
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.14
commit e13026578b72
category: cleanup
bugzilla: NA
CVE: NA
Always rewrite the inline flag of a WQE. Since WQE buffers are reused, the flag must be set or cleared explicitly on every post; otherwise a non-inline WR could inherit a stale inline flag left over from a previous inline WR.
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 7bc24a77b794e..4c3787cb61255 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -264,6 +264,9 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, struct ib_send_wr *wr,
 	int j = 0;
 	int i;

+	roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S,
+		     !!(wr->send_flags & IB_SEND_INLINE));
+
 	if (wr->send_flags & IB_SEND_INLINE && valid_num_sge) {
 		if (le32_to_cpu(rc_sq_wqe->msg_len) >
 		    hr_dev->caps.max_sq_inline) {
@@ -284,9 +287,6 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, struct ib_send_wr *wr,
 				       wr->sg_list[i].length);
 			wqe += wr->sg_list[i].length;
 		}
-
-		roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S,
-			     1);
 	} else {
 		if (valid_num_sge <= HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE) {
 			for (i = 0; i < wr->num_sge; i++) {
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.5
commit e8a07de57ea4
category: bugfix
bugzilla: NA
CVE: NA
The npages parameter used to initialize the MTT of srq->idx_que shouldn't be the same as the SRQ buffer's, and its page_shift should be calculated from idx_buf_pg_sz.
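Both hunks below perform the same conversion; the helper here is a hypothetical illustration (names invented) of regrouping a system-page count into larger driver pages of size PAGE_SIZE << pg_sz:

        /* Illustrative only: round a system-page count up to pages of
         * (PAGE_SIZE << pg_sz) and report the matching page shift.
         */
        static void calc_mtt_pages(u32 umem_page_count, u32 pg_sz,
                                   u32 *npages, u32 *page_shift)
        {
                *npages = DIV_ROUND_UP(umem_page_count, 1 << pg_sz);
                *page_shift = PAGE_SHIFT + pg_sz;
        }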
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.h |  2 +-
 drivers/infiniband/hw/hns/hns_roce_srq.c   | 27 +++++++++++-----------
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index 11318105826a5..95b8832247f0e 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -1398,7 +1398,7 @@ struct hns_roce_query_pf_caps_d {
 };

 #define V2_QUERY_PF_CAPS_D_NUM_SRQS_S 0
-#define V2_QUERY_PF_CAPS_D_NUM_SRQS_M GENMASK(20, 0)
+#define V2_QUERY_PF_CAPS_D_NUM_SRQS_M GENMASK(19, 0)

 #define V2_QUERY_PF_CAPS_D_RQWQE_HOP_NUM_S 20
 #define V2_QUERY_PF_CAPS_D_RQWQE_HOP_NUM_M GENMASK(21, 20)
diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index 14724df9c8e94..292e0e503884e 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -222,8 +222,7 @@ static int create_user_srq(struct ib_pd *pd, struct hns_roce_srq *srq,
 {
 	struct hns_roce_dev *hr_dev = to_hr_dev(pd->device);
 	struct hns_roce_ib_create_srq ucmd;
-	u32 page_shift;
-	u32 npages;
+	struct hns_roce_buf *buf;
 	int ret;

 	if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd)))
@@ -235,11 +234,13 @@ static int create_user_srq(struct ib_pd *pd, struct hns_roce_srq *srq,
 		return PTR_ERR(srq->umem);

 	if (hr_dev->caps.srqwqe_buf_pg_sz) {
-		npages = (ib_umem_page_count(srq->umem) +
-			  (1 << hr_dev->caps.srqwqe_buf_pg_sz) - 1) /
-			  (1 << hr_dev->caps.srqwqe_buf_pg_sz);
-		page_shift = PAGE_SHIFT + hr_dev->caps.srqwqe_buf_pg_sz;
-		ret = hns_roce_mtt_init(hr_dev, npages, page_shift, &srq->mtt);
+		buf = srq->buf;
+		buf->npages = (ib_umem_page_count(srq->umem) +
+			       (1 << hr_dev->caps.srqwqe_buf_pg_sz) - 1) /
+			      (1 << hr_dev->caps.srqwqe_buf_pg_sz);
+		buf->page_shift = PAGE_SHIFT + hr_dev->caps.srqwqe_buf_pg_sz;
+		ret = hns_roce_mtt_init(hr_dev, buf->npages, buf->page_shift,
+					&srq->mtt);
 	} else
 		ret = hns_roce_mtt_init(hr_dev, ib_umem_page_count(srq->umem),
 					srq->umem->page_shift, &srq->mtt);
@@ -262,12 +263,12 @@ static int create_user_srq(struct ib_pd *pd, struct hns_roce_srq *srq,
 	}

 	if (hr_dev->caps.idx_buf_pg_sz) {
-		npages = (ib_umem_page_count(srq->idx_que.umem) +
-			  (1 << hr_dev->caps.idx_buf_pg_sz) - 1) /
-			  (1 << hr_dev->caps.idx_buf_pg_sz);
-		page_shift = PAGE_SHIFT + hr_dev->caps.idx_buf_pg_sz;
-		ret = hns_roce_mtt_init(hr_dev, npages, page_shift,
-					&srq->idx_que.mtt);
+		buf = srq->idx_que.idx_buf;
+		buf->npages = DIV_ROUND_UP(ib_umem_page_count(srq->idx_que.umem),
+					   1 << hr_dev->caps.idx_buf_pg_sz);
+		buf->page_shift = PAGE_SHIFT + hr_dev->caps.idx_buf_pg_sz;
+		ret = hns_roce_mtt_init(hr_dev, buf->npages, buf->page_shift,
+					&srq->idx_que.mtt);
 	} else {
 		ret = hns_roce_mtt_init(hr_dev,
 					ib_umem_page_count(srq->idx_que.umem),
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.12
commit 1620f09b96ec
category: bugfix
bugzilla: NA
CVE: NA
If a user posts WRs through a wr_list, the head pointer of the idx_queue isn't updated until all WQEs have been filled, so judging fullness by whether head equals tail gives a wrong result. Fix the check by also counting the requests of the current wr_list that haven't been committed yet, and move the head and tail pointers from the srq structure into the idx_queue structure. The head pointer now increases after the idx_queue has been filled with WQE indexes.
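The fullness test introduced below, shown standalone: with free-running unsigned counters, head - tail is the occupancy even across wraparound, and nreq accounts for the requests of the current wr_list that haven't advanced head yet:

        static bool srq_idx_que_overflow(u32 head, u32 tail, u32 nreq, u32 max)
        {
                u32 cur = head - tail;  /* unsigned arithmetic is wraparound-safe */

                return cur + nreq >= max - 1;
        }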
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_device.h |  5 +++--
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 19 ++++++++++++++-----
 drivers/infiniband/hw/hns/hns_roce_srq.c    |  5 +++--
 3 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 6e1ec2d77ce2a..d11b6066a5308 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -635,6 +635,8 @@ struct hns_roce_idx_que {
 	struct ib_umem *umem;
 	struct hns_roce_mtt mtt;
 	unsigned long *bitmap;
+	u32 head;
+	u32 tail;
 };

 struct hns_roce_srq {
@@ -656,8 +658,6 @@ struct hns_roce_srq {
 	struct hns_roce_mtt mtt;
 	struct hns_roce_idx_que idx_que;
 	spinlock_t lock;
-	int head;
-	int tail;
 	u16 wqe_ctr;
 	struct mutex mutex;
 };
@@ -713,6 +713,7 @@ struct hns_roce_av {
 	u8 stat_rate;
 	u8 hop_limit;
 	u32 flowlabel;
+	u16 udp_sport;
 	u8 sl;
 	u8 tclass;
 	u8 dgid[HNS_ROCE_GID_SIZE];
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 4c3787cb61255..298fba6e349c9 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -3035,7 +3035,7 @@ static void hns_roce_free_srq_wqe(struct hns_roce_srq *srq, int wqe_index)
 	bitmap_num = wqe_index / BITS_PER_LONG_LONG;
 	bit_num = wqe_index % BITS_PER_LONG_LONG;
 	srq->idx_que.bitmap[bitmap_num] |= (1ULL << bit_num);
-	srq->tail++;
+	srq->idx_que.tail++;

 	spin_unlock(&srq->lock);
 }
@@ -7003,6 +7003,15 @@ int hns_roce_v2_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr)
 	return ret;
 }

+int hns_roce_srqwq_overflow(struct hns_roce_srq *srq, int nreq)
+{
+	struct hns_roce_idx_que *idx_que = &srq->idx_que;
+	unsigned int cur;
+
+	cur = idx_que->head - idx_que->tail;
+	return cur + nreq >= srq->max - 1;
+}
+
 static int find_empty_entry(struct hns_roce_idx_que *idx_que)
 {
 	int bit_num;
@@ -7052,7 +7061,7 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,

 	spin_lock_irqsave(&srq->lock, flags);

-	ind = srq->head & (srq->max - 1);
+	ind = srq->idx_que.head & (srq->max - 1);
 	max_sge = srq->max_gs - srq->rsv_sge;
 	for (nreq = 0; wr; ++nreq, wr = wr->next) {
 		if (unlikely(wr->num_sge > max_sge)) {
@@ -7064,7 +7073,7 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
 			break;
 		}

-		if (unlikely(srq->head == srq->tail)) {
+		if (unlikely(hns_roce_srqwq_overflow(srq, nreq))) {
 			dev_err(hr_dev->dev, "srq(0x%lx) head equals tail\n",
 				srq->srqn);
 			ret = -ENOMEM;
@@ -7094,7 +7103,7 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
 	}

 	if (likely(nreq)) {
-		srq->head += nreq;
+		srq->idx_que.head += nreq;

 		/*
 		 * Make sure that descriptors are written before
@@ -7105,7 +7114,7 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
 		srq_db.byte_4 =
 			cpu_to_le32(HNS_ROCE_V2_SRQ_DB << V2_DB_BYTE_4_CMD_S |
 				    (srq->srqn & V2_DB_BYTE_4_TAG_M));
-		srq_db.parameter = cpu_to_le32(srq->head);
+		srq_db.parameter = cpu_to_le32(srq->idx_que.head);

 		hns_roce_write64(hr_dev, (__le32 *)&srq_db, srq->db_reg_l);

diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index 292e0e503884e..40ce6abdcd374 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -336,6 +336,9 @@ static int hns_roce_create_idx_que(struct ib_pd *pd, struct hns_roce_srq *srq,
 	for (i = 0; i < bitmap_num; i++)
 		idx_que->bitmap[i] = ~(0UL);

+	idx_que->head = 0;
+	idx_que->tail = 0;
+
 	return 0;
 }

@@ -352,8 +355,6 @@ static int create_kernel_srq(struct ib_pd *pd, struct hns_roce_srq *srq,
 		return -ENOMEM;

 	srq->buf = kbuf;
-	srq->head = 0;
-	srq->tail = srq->max - 1;
 	srq->wqe_ctr = 0;

 	ret = hns_roce_mtt_init(hr_dev, kbuf->npages, kbuf->page_shift,
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.12
commit bb74fe7e81c8
category: bugfix
bugzilla: NA
CVE: NA
When an error occurs, the qp_table must be cleared, regardless of whether the SRQ feature is enabled.
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index a604e6d931544..f3112683b9b24 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -1316,8 +1316,7 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
 	return 0;

 err_qp_table_free:
-	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ)
-		hns_roce_cleanup_qp_table(hr_dev);
+	hns_roce_cleanup_qp_table(hr_dev);

 err_cq_table_free:
 	hns_roce_cleanup_cq_table(hr_dev);
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.13
commit af06b628a6bd
category: bugfix
bugzilla: NA
CVE: NA
When hns-roce-hw-v2.ko is reloaded and the CMDQ is initialized, the PI and CI of the CMDQ need to be reinitialized. However, the CMDQ is not reset by the preceding rmmod, so the firmware is still waiting for software to send commands. If the PI is reinitialized first, the firmware assumes that software has posted new commands and starts processing the CMDQ, which may take a long time from the driver's point of view. If the driver then starts initializing the hardware and really sends commands, the PI and CI are in an unexpected state. Therefore write the tail (CI) register before the head (PI) register.
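The ordering constraint in miniature (register names from the diff; the sketch assumes the TX-ring branch of hns_roce_cmq_init_regs()):

        /* If HEAD (PI) were cleared while TAIL (CI) still held a stale value,
         * the firmware could interpret the mismatch as newly posted commands;
         * clearing CI first leaves PI == CI == 0 as the only observable state.
         */
        roce_write(hr_dev, ROCEE_TX_CMQ_TAIL_REG, 0);   /* CI first */
        roce_write(hr_dev, ROCEE_TX_CMQ_HEAD_REG, 0);   /* then PI */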
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 298fba6e349c9..51f2c5e26f76c 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -1039,8 +1039,10 @@ static void hns_roce_cmq_init_regs(struct hns_roce_dev *hr_dev, bool ring_type)
 			   upper_32_bits(dma));
 		roce_write(hr_dev, ROCEE_TX_CMQ_DEPTH_REG,
 			   ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S);
-		roce_write(hr_dev, ROCEE_TX_CMQ_HEAD_REG, 0);
+
+		/* Make sure to write tail first and then head */
 		roce_write(hr_dev, ROCEE_TX_CMQ_TAIL_REG, 0);
+		roce_write(hr_dev, ROCEE_TX_CMQ_HEAD_REG, 0);
 	} else {
 		roce_write(hr_dev, ROCEE_RX_CMQ_BASEADDR_L_REG, (u32)dma);
 		roce_write(hr_dev, ROCEE_RX_CMQ_BASEADDR_H_REG,
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.9
commit 172505cfa3a8
category: cleanup
bugzilla: NA
CVE: NA
Add a check for the validity of the sl configuration; the QPC only has room for a 3-bit SL, so values above MAX_SERVICE_LEVEL must be rejected.
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 15 ++++++++++++---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.h |  2 ++
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 51f2c5e26f76c..afb89565b6c45 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -4875,11 +4875,18 @@ static int hns_roce_v2_set_path(struct ib_qp *ibqp,
 		       V2_QPC_BYTE_28_FL_S, 0);
 	memcpy(context->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
 	memset(qpc_mask->dgid, 0, sizeof(grh->dgid.raw));
+
+	hr_qp->sl = rdma_ah_get_sl(&attr->ah_attr);
+	if (unlikely(hr_qp->sl > MAX_SERVICE_LEVEL)) {
+		dev_err(hr_dev->dev,
+			"failed to fill QPC, sl (%d) shouldn't be larger than %d.\n",
+			hr_qp->sl, MAX_SERVICE_LEVEL);
+		return -EINVAL;
+	}
 	roce_set_field(context->byte_28_at_fl, V2_QPC_BYTE_28_SL_M,
-		       V2_QPC_BYTE_28_SL_S, rdma_ah_get_sl(&attr->ah_attr));
+		       V2_QPC_BYTE_28_SL_S, hr_qp->sl);
 	roce_set_field(qpc_mask->byte_28_at_fl, V2_QPC_BYTE_28_SL_M,
 		       V2_QPC_BYTE_28_SL_S, 0);
-	hr_qp->sl = rdma_ah_get_sl(&attr->ah_attr);

 	return 0;
 }
@@ -5399,7 +5406,9 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	qp_attr->retry_cnt = roce_get_field(context->byte_212_lsn,
 					    V2_QPC_BYTE_212_RETRY_CNT_M,
 					    V2_QPC_BYTE_212_RETRY_CNT_S);
-	qp_attr->rnr_retry = (u8)le32_to_cpu(context->rq_rnr_timer);
+	qp_attr->rnr_retry = roce_get_field(context->byte_244_rnr_rxack,
+					    V2_QPC_BYTE_244_RNR_CNT_M,
+					    V2_QPC_BYTE_244_RNR_CNT_S);

 done:
 	qp_attr->cur_qp_state = qp_attr->qp_state;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index 95b8832247f0e..fe030f20ef394 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -2047,6 +2047,8 @@ struct hns_roce_eq_context {
 #define HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_S 0
 #define HNS_ROCE_V2_AEQE_EVENT_QUEUE_NUM_M GENMASK(23, 0)

+#define MAX_SERVICE_LEVEL 0x7
+
 struct hns_roce_wqe_atomic_seg {
 	__le64 fetchadd_swap_data;
 	__le64 cmp_data;
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.9
commit fbed9d2be292
category: bugfix
bugzilla: NA
CVE: NA
The hardware adds the AckReq flag to the BTH header according to the value of ack_req_freq, requesting an ACK from the responder for packets carrying this flag. ack_req_freq should be greater than or equal to lp_pktn_ini instead of using a fixed value.
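A worked instance of the calculation introduced below (MAX_LP_MSG_LEN as defined in the diff): with a 4096-byte path MTU, one message spans at most 64KB / 4KB = 16 packets, so lp_pktn_ini = ilog2(16) = 4, and ack_req_freq is set to the same value so that at least one AckReq is requested per message:

        #define MAX_LP_MSG_LEN 65536

        /* e.g. mtu == IB_MTU_4096: 65536 / 4096 == 16, ilog2(16) == 4 */
        u8 lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / ib_mtu_enum_to_int(mtu));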
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 42 +++++++++++++---------
 drivers/infiniband/hw/hns/hns_roce_qp.c    |  6 ++--
 2 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index afb89565b6c45..32022a0d018f2 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -1922,8 +1922,8 @@ static void calc_pg_sz(int obj_num, int obj_size, int hop_num, int ctx_bt_num,
 		       int *buf_page_size, int *bt_page_size, u32 hem_type)
 {
 	u64 obj_per_chunk;
-	int bt_chunk_sz = 1 << PAGE_SHIFT;
-	int obj_chunk_sz = 1 << PAGE_SHIFT;
+	u64 bt_chunk_sz = 1 << PAGE_SHIFT;
+	u64 obj_chunk_sz = 1 << PAGE_SHIFT;

 	*buf_page_size = 0;
 	*bt_page_size = 0;
@@ -4143,8 +4143,6 @@ static void modify_qp_reset_to_init(struct ib_qp *ibqp,
 		       V2_QPC_BYTE_168_IRRL_IDX_LSB_M,
 		       V2_QPC_BYTE_168_IRRL_IDX_LSB_S, 0);

-	roce_set_field(context->byte_172_sq_psn, V2_QPC_BYTE_172_ACK_REQ_FREQ_M,
-		       V2_QPC_BYTE_172_ACK_REQ_FREQ_S, 4);
 	roce_set_field(qpc_mask->byte_172_sq_psn, V2_QPC_BYTE_172_ACK_REQ_FREQ_M,
 		       V2_QPC_BYTE_172_ACK_REQ_FREQ_S, 0);

@@ -4377,6 +4375,8 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 	dma_addr_t dma_handle_3;
 	dma_addr_t dma_handle_2;
 	u64 wqe_sge_ba;
+	u8 lp_pktn_ini;
+	enum ib_mtu mtu;
 	u8 port_num;
 	u64 *mtts_3;
 	u64 *mtts_2;
@@ -4553,21 +4553,25 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 	roce_set_field(qpc_mask->byte_52_udpspn_dmac, V2_QPC_BYTE_52_DMAC_M,
 		       V2_QPC_BYTE_52_DMAC_S, 0);

-	/* mtu*(2^LP_PKTN_INI) should not bigger then 1 message length 64kb */
-	roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_LP_PKTN_INI_M,
-		       V2_QPC_BYTE_56_LP_PKTN_INI_S, 0);
-	roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_LP_PKTN_INI_M,
-		       V2_QPC_BYTE_56_LP_PKTN_INI_S, 0);
-
 	if (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_UD)
+		mtu = IB_MTU_4096;
+	else
+		mtu = attr->path_mtu;
+
+	if (attr_mask & IB_QP_PATH_MTU) {
 		roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
-			       V2_QPC_BYTE_24_MTU_S, IB_MTU_4096);
-	else if (attr_mask & IB_QP_PATH_MTU)
-		roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
-			       V2_QPC_BYTE_24_MTU_S, attr->path_mtu);
+			       V2_QPC_BYTE_24_MTU_S, mtu);
+		roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
+			       V2_QPC_BYTE_24_MTU_S, 0);
+	}

-	roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
-		       V2_QPC_BYTE_24_MTU_S, 0);
+#define MAX_LP_MSG_LEN 65536
+	/* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 64KB */
+	lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / ib_mtu_enum_to_int(mtu));
+	roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_LP_PKTN_INI_M,
+		       V2_QPC_BYTE_56_LP_PKTN_INI_S, lp_pktn_ini);
+	roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_LP_PKTN_INI_M,
+		       V2_QPC_BYTE_56_LP_PKTN_INI_S, 0);

 	roce_set_field(context->byte_84_rq_ci_pi,
 		       V2_QPC_BYTE_84_RQ_PRODUCER_IDX_M,
@@ -4579,6 +4583,12 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 	roce_set_field(qpc_mask->byte_84_rq_ci_pi,
 		       V2_QPC_BYTE_84_RQ_CONSUMER_IDX_M,
 		       V2_QPC_BYTE_84_RQ_CONSUMER_IDX_S, 0);
+	/* ACK_REQ_FREQ should be larger than or equal to LP_PKTN_INI */
+	roce_set_field(context->byte_172_sq_psn, V2_QPC_BYTE_172_ACK_REQ_FREQ_M,
+		       V2_QPC_BYTE_172_ACK_REQ_FREQ_S, lp_pktn_ini);
+	roce_set_field(qpc_mask->byte_172_sq_psn,
+		       V2_QPC_BYTE_172_ACK_REQ_FREQ_M,
+		       V2_QPC_BYTE_172_ACK_REQ_FREQ_S, 0);
 	roce_set_bit(qpc_mask->byte_108_rx_reqepsn,
 		     V2_QPC_BYTE_108_RX_REQ_PSN_ERR_S, 0);
 	roce_set_field(qpc_mask->byte_96_rx_reqmsn, V2_QPC_BYTE_96_RX_REQ_MSN_M,
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 9a4118b2fb78f..6f266932c9880 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -1234,8 +1234,10 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,

 	mutex_lock(&hr_qp->mutex);

-	cur_state = attr_mask & IB_QP_CUR_STATE ?
-		    attr->cur_qp_state : (enum ib_qp_state)hr_qp->state;
+	if (attr_mask & IB_QP_CUR_STATE && attr->cur_qp_state != hr_qp->state)
+		goto out;
+
+	cur_state = hr_qp->state;
 	new_state = attr_mask & IB_QP_STATE ?
 		    attr->qp_state : cur_state;
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.11
commit 0fd0175e30e4
category: bugfix
bugzilla: NA
CVE: NA
A WR's SGE may have a length of zero; such SGEs are invalid and shouldn't be filled into the extended SGE space of the WQE. Skip zero-length SGEs when setting the extended SGEs.
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 31 +++++++++++++-------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 32022a0d018f2..5f70692688f6a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -143,7 +143,6 @@ static void set_extend_sge(struct hns_roce_qp *qp, struct ib_send_wr *wr,
 	int fi_sge_num;
 	int se_sge_num;
 	int shift;
-	int i;

 	if (qp->ibqp.qp_type == IB_QPT_RC || qp->ibqp.qp_type == IB_QPT_UC)
 		num_in_wqe = HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE;
@@ -162,20 +161,32 @@ static void set_extend_sge(struct hns_roce_qp *qp, struct ib_send_wr *wr,
 			      sizeof(struct hns_roce_v2_wqe_data_seg);
 	if (extend_sge_num > fi_sge_num) {
 		se_sge_num = extend_sge_num - fi_sge_num;
-		for (i = 0; i < fi_sge_num; i++) {
-			set_data_seg_v2(dseg++, sg + i);
-			(*sge_ind)++;
+		while (fi_sge_num > 0) {
+			if (likely(sg->length)) {
+				set_data_seg_v2(dseg++, sg);
+				(*sge_ind)++;
+				fi_sge_num--;
+			}
+			sg++;
 		}
 		dseg = get_send_extend_sge(qp,
 					   (*sge_ind) & (qp->sge.sge_cnt - 1));
-		for (i = 0; i < se_sge_num; i++) {
-			set_data_seg_v2(dseg++, sg + fi_sge_num + i);
-			(*sge_ind)++;
+		while (se_sge_num > 0) {
+			if (likely(sg->length)) {
+				set_data_seg_v2(dseg++, sg + fi_sge_num);
+				(*sge_ind)++;
+				se_sge_num--;
+			}
+			sg++;
 		}
 	} else {
-		for (i = 0; i < extend_sge_num; i++) {
-			set_data_seg_v2(dseg++, sg + i);
-			(*sge_ind)++;
+		while (extend_sge_num > 0) {
+			if (likely(sg->length)) {
+				set_data_seg_v2(dseg++, sg);
+				(*sge_ind)++;
+				extend_sge_num--;
+			}
+			sg++;
 		}
 	}
 }
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.3
commit 97545b10221
category: bugfix
bugzilla: NA
CVE: NA
When a user submits more than 32 work requests to an SRQ at a time, the driver needs to find the corresponding number of free entries in the bitmap of the idx queue. However, the original lookup based on ffs() only processes 32 bits of an array element; once the number of posted SRQ WQEs exceeds 32, only the lower 32 bits are examined and the correct WQE index can no longer be obtained. Use the generic bitmap helpers, which scan the whole bitmap, instead.
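A standalone userspace illustration of the limitation (not driver code): ffs() operates on an int, so a free entry recorded above bit 31 of a 64-bit bitmap word is invisible to it:

        #include <stdio.h>
        #include <strings.h>

        int main(void)
        {
                unsigned long long word = 1ULL << 32; /* only entry #32 free */

                /* ffs() sees just the low 32 bits and prints 0 ("no bit"). */
                printf("ffs: %d\n", ffs((unsigned int)word));
                return 0;
        }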
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 34 ++++++++++++----------
 drivers/infiniband/hw/hns/hns_roce_srq.c   | 13 ++-------
 2 files changed, 21 insertions(+), 26 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 5f70692688f6a..f712a0b1b5f85 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -3039,15 +3039,10 @@ static void *get_srq_wqe(struct hns_roce_srq *srq, int n)

 static void hns_roce_free_srq_wqe(struct hns_roce_srq *srq, int wqe_index)
 {
-	u32 bitmap_num;
-	int bit_num;
-
 	/* always called with interrupts disabled. */
 	spin_lock(&srq->lock);

-	bitmap_num = wqe_index / BITS_PER_LONG_LONG;
-	bit_num = wqe_index % BITS_PER_LONG_LONG;
-	srq->idx_que.bitmap[bitmap_num] |= (1ULL << bit_num);
+	bitmap_clear(srq->idx_que.bitmap, wqe_index, 1);
 	srq->idx_que.tail++;

 	spin_unlock(&srq->lock);
@@ -7044,18 +7039,19 @@ int hns_roce_srqwq_overflow(struct hns_roce_srq *srq, int nreq)
 	return cur + nreq >= srq->max - 1;
 }

-static int find_empty_entry(struct hns_roce_idx_que *idx_que)
+static int find_empty_entry(struct hns_roce_idx_que *idx_que,
+			    unsigned long size)
 {
-	int bit_num;
-	int i;
+	int wqe_idx;

-	/* bitmap[i] is set zero if all bits are allocated */
-	for (i = 0; idx_que->bitmap[i] == 0; ++i)
-		;
-	bit_num = __ffs64(idx_que->bitmap[i]) + 1;
-	idx_que->bitmap[i] &= ~(1ULL << (bit_num - 1));
+	if (unlikely(bitmap_full(idx_que->bitmap, size)))
+		return -ENOSPC;
+
+	wqe_idx = find_first_zero_bit(idx_que->bitmap, size);
+
+	bitmap_set(idx_que->bitmap, wqe_idx, 1);

-	return i * BITS_PER_LONG_LONG + (bit_num - 1);
+	return wqe_idx;
 }

 static void fill_idx_queue(struct hns_roce_idx_que *idx_que,
@@ -7113,7 +7109,13 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
 			break;
 		}

-		wqe_idx = find_empty_entry(&srq->idx_que);
+		wqe_idx = find_empty_entry(&srq->idx_que, srq->max);
+		if (wqe_idx < 0) {
+			ret = -ENOMEM;
+			*bad_wr = wr;
+			break;
+		}
+
 		fill_idx_queue(&srq->idx_que, ind, wqe_idx);
 		wqe = get_srq_wqe(srq, wqe_idx);
 		dseg = (struct hns_roce_v2_wqe_data_seg *)wqe;
diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index 40ce6abdcd374..52a9b23c13e0b 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -312,29 +312,22 @@ static int hns_roce_create_idx_que(struct ib_pd *pd, struct hns_roce_srq *srq,
 	struct hns_roce_dev *hr_dev = to_hr_dev(pd->device);
 	struct hns_roce_idx_que *idx_que = &srq->idx_que;
 	struct hns_roce_buf *kbuf;
-	u32 bitmap_num;
-	int i;

 	idx_que->entry_sz = HNS_ROCE_IDX_QUE_ENTRY_SZ;
-	bitmap_num = HNS_ROCE_ALIGN_UP(srq->max, BITS_PER_LONG_LONG);

-	idx_que->bitmap = kcalloc(1, bitmap_num / BITS_PER_BYTE, GFP_KERNEL);
+	idx_que->bitmap = bitmap_zalloc(srq->max, GFP_KERNEL);
 	if (!idx_que->bitmap)
 		return -ENOMEM;

-	bitmap_num = bitmap_num / BITS_PER_LONG_LONG;
-
 	idx_que->buf_size = srq->max * idx_que->entry_sz;

 	kbuf = hns_roce_buf_alloc(hr_dev, idx_que->buf_size, page_shift, 0);
 	if (IS_ERR(kbuf)) {
-		kfree(idx_que->bitmap);
+		bitmap_free(idx_que->bitmap);
 		return -ENOMEM;
 	}

 	idx_que->idx_buf = kbuf;
-	for (i = 0; i < bitmap_num; i++)
-		idx_que->bitmap[i] = ~(0UL);

 	idx_que->head = 0;
 	idx_que->tail = 0;
@@ -405,7 +398,7 @@ static int create_kernel_srq(struct ib_pd *pd, struct hns_roce_srq *srq,

 err_kernel_create_idx:
 	hns_roce_buf_free(hr_dev, srq->idx_que.idx_buf);
-	kfree(srq->idx_que.bitmap);
+	bitmap_free(srq->idx_que.bitmap);

 err_kernel_srq_mtt:
 	hns_roce_mtt_cleanup(hr_dev, &srq->mtt);
From: Shunfeng Yang <yangshunfeng2@huawei.com>

mainline inclusion
from mainline-v5.14
commit 5e6370d7cc75
category: bugfix
bugzilla: NA
CVE: NA
The HEM page size for the QPC timer and CQC timer is always 4KB, and there's no need for the hns driver to calculate a different size; otherwise the ROCEE may access an invalid address.
Signed-off-by: Shunfeng Yang <yangshunfeng2@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: chunzhi hu <huchunzhi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index f712a0b1b5f85..ff8d9c2966ac8 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -2167,10 +2167,6 @@ static int hns_roce_query_pf_caps(struct hns_roce_dev *hr_dev)
 			   &caps->scc_ctx_buf_pg_sz,
 			   &caps->scc_ctx_ba_pg_sz,
 			   HEM_TYPE_SCC_CTX);
-		calc_pg_sz(caps->num_cqc_timer, caps->cqc_timer_entry_sz,
-			   caps->cqc_timer_hop_num, caps->cqc_timer_bt_num,
-			   &caps->cqc_timer_buf_pg_sz,
-			   &caps->cqc_timer_ba_pg_sz, HEM_TYPE_CQC_TIMER);
 	}

 	calc_pg_sz(caps->num_cqe_segs, caps->mtt_entry_sz, caps->cqe_hop_num,
@@ -5420,11 +5416,11 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 					    V2_QPC_BYTE_28_AT_M,
 					    V2_QPC_BYTE_28_AT_S);
 	qp_attr->retry_cnt = roce_get_field(context->byte_212_lsn,
-					    V2_QPC_BYTE_212_RETRY_CNT_M,
-					    V2_QPC_BYTE_212_RETRY_CNT_S);
+					    V2_QPC_BYTE_212_RETRY_NUM_INIT_M,
+					    V2_QPC_BYTE_212_RETRY_NUM_INIT_S);
 	qp_attr->rnr_retry = roce_get_field(context->byte_244_rnr_rxack,
-					    V2_QPC_BYTE_244_RNR_CNT_M,
-					    V2_QPC_BYTE_244_RNR_CNT_S);
+					    V2_QPC_BYTE_244_RNR_NUM_INIT_M,
+					    V2_QPC_BYTE_244_RNR_NUM_INIT_S);

 done:
 	qp_attr->cur_qp_state = qp_attr->qp_state;