From: Jack Wang <jinpu.wang@cloud.ionos.com>
stable inclusion
from stable-5.10.50
commit fcd8d6371a469aae96367f82a14870e4beef8b2f
bugzilla: 174522 https://gitee.com/openeuler/kernel/issues/I4DNFY
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
[ Upstream commit 5e91eabf66c854f16ca2e954e5c68939bc81601e ]
Currently, when rtrs calls create_qp it uses coarse numbers (bigger in general), which leads the hardware to create more resources and only wastes memory with no benefit.
For max_send_wr, we don't really always need max_qp_wr size when creating the QP; reduce it to cq_size.
For max_recv_wr, cq_size is enough.
With this patch, when sess_queue_depth=128, per-session (2 paths) memory consumption is reduced from 188 MB to 65 MB.
When always_invalidate is enabled, we need to send more WRs, so treat it specially.
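For illustration, the new sizing arithmetic as a standalone userspace sketch (not the driver code itself): queue_depth=128 matches the measurement above, while max_qp_wr=32768 is an assumed device limit; the real code reads it from ib_dev->attrs.max_qp_wr.

#include <stdio.h>

#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
        int queue_depth = 128;        /* sess_queue_depth from the example above */
        int max_qp_wr = 32768;        /* assumed device limit */
        int wr_limit = max_qp_wr / 3; /* div 3 for mlx5, as in the patch */
        int always_invalidate = 1;
        int max_send_wr, max_recv_wr;

        if (always_invalidate)
                /* linv + rinv + mr + imm per request, plus one extra WR */
                max_send_wr = min_t(int, wr_limit, queue_depth * (1 + 4) + 1);
        else
                max_send_wr = min_t(int, wr_limit, queue_depth * (1 + 2) + 1);

        max_recv_wr = queue_depth + 1;

        printf("max_send_wr=%d max_recv_wr=%d cq_size=%d\n",
               max_send_wr, max_recv_wr, max_send_wr + max_recv_wr);
        return 0;
}

With these inputs the send queue is capped at 641 WRs with always_invalidate (385 without) and the receive queue at 129, whereas the old code sized both the send and receive queues to wr_limit (10922 here), which is where the savings come from.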
Fixes: 9cb837480424e ("RDMA/rtrs: server: main functionality")
Link: https://lore.kernel.org/r/20210614090337.29557-2-jinpu.wang@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
---
 drivers/infiniband/ulp/rtrs/rtrs-srv.c | 38 +++++++++++++++++---------
 1 file changed, 25 insertions(+), 13 deletions(-)
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
index e1041023d143..b033bfa9f383 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
@@ -1614,7 +1614,7 @@ static int create_con(struct rtrs_srv_sess *sess,
 	struct rtrs_srv *srv = sess->srv;
 	struct rtrs_sess *s = &sess->s;
 	struct rtrs_srv_con *con;
-	u32 cq_size, wr_queue_size;
+	u32 cq_size, max_send_wr, max_recv_wr, wr_limit;
 	int err, cq_vector;
 
 	con = kzalloc(sizeof(*con), GFP_KERNEL);
@@ -1635,30 +1635,42 @@ static int create_con(struct rtrs_srv_sess *sess,
 		 * All receive and all send (each requiring invalidate)
 		 * + 2 for drain and heartbeat
 		 */
-		wr_queue_size = SERVICE_CON_QUEUE_DEPTH * 3 + 2;
-		cq_size = wr_queue_size;
+		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
+		max_recv_wr = SERVICE_CON_QUEUE_DEPTH + 2;
+		cq_size = max_send_wr + max_recv_wr;
 	} else {
-		/*
-		 * If we have all receive requests posted and
-		 * all write requests posted and each read request
-		 * requires an invalidate request + drain
-		 * and qp gets into error state.
-		 */
-		cq_size = srv->queue_depth * 3 + 1;
 		/*
 		 * In theory we might have queue_depth * 32
 		 * outstanding requests if an unsafe global key is used
 		 * and we have queue_depth read requests each consisting
 		 * of 32 different addresses. div 3 for mlx5.
 		 */
-		wr_queue_size = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
+		wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
+		/* when always_invlaidate enalbed, we need linv+rinv+mr+imm */
+		if (always_invalidate)
+			max_send_wr =
+				min_t(int, wr_limit,
+				      srv->queue_depth * (1 + 4) + 1);
+		else
+			max_send_wr =
+				min_t(int, wr_limit,
+				      srv->queue_depth * (1 + 2) + 1);
+
+		max_recv_wr = srv->queue_depth + 1;
+		/*
+		 * If we have all receive requests posted and
+		 * all write requests posted and each read request
+		 * requires an invalidate request + drain
+		 * and qp gets into error state.
+		 */
+		cq_size = max_send_wr + max_recv_wr;
 	}
-	atomic_set(&con->sq_wr_avail, wr_queue_size);
+	atomic_set(&con->sq_wr_avail, max_send_wr);
 	cq_vector = rtrs_srv_get_next_cq_vector(sess);
 
 	/* TODO: SOFTIRQ can be faster, but be careful with softirq context */
 	err = rtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_size,
-				 wr_queue_size, wr_queue_size,
+				 max_send_wr, max_recv_wr,
 				 IB_POLL_WORKQUEUE);
 	if (err) {
 		rtrs_err(s, "rtrs_cq_qp_create(), err: %d\n", err);