From: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
stable inclusion
from stable-v5.10.84
commit a8d18fb4d11bdac881931b9ca305d3ce5daa0eea
bugzilla: 186030 https://gitee.com/openeuler/kernel/issues/I4QV2F
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 5961060692f8b17cd2080620a3d27b95d2ae05ca upstream.
When the TLS cipher suite uses CCM mode, including AES CCM and SM4 CCM, the first byte of the B0 block is the flags byte, and the real IV starts from the second byte. The XOR operation of the IV and rec_seq should therefore skip this byte, that is, it must be applied at iv_offset.
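
As an illustration of why the offset matters, below is a minimal, stand-alone sketch of the nonce layout described above. It is not the kernel code: the flags-byte value, the 4-byte salt / 8-byte IV split, and the helper names (xor_iv_with_seq_sketch, build_ccm_nonce_sketch) are assumptions made for the example, loosely following the TLS 1.3 style nonce construction.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of the assumed CCM nonce layout:
 *
 *   iv_data[0]       B0 flags byte (CCM only), so iv_offset == 1
 *   iv_data[1..4]    4-byte salt
 *   iv_data[5..12]   8-byte per-record IV
 *
 * The record sequence number must be XORed into the salt+IV region,
 * i.e. starting at iv_data + iv_offset, not at iv_data itself.
 */
static void xor_iv_with_seq_sketch(uint8_t *iv, const uint8_t *rec_seq)
{
	/* XOR the 8-byte sequence number into the IV after the 4-byte salt. */
	for (size_t i = 0; i < 8; i++)
		iv[i + 4] ^= rec_seq[i];
}

static void build_ccm_nonce_sketch(uint8_t iv_data[13],
				   const uint8_t salt_iv[12],
				   const uint8_t rec_seq[8])
{
	const size_t iv_offset = 1;

	iv_data[0] = 0x02;                        /* assumed B0 flags byte */
	memcpy(iv_data + iv_offset, salt_iv, 12); /* salt + IV after the flags byte */

	/* The fix: pass iv_data + iv_offset so the flags byte is skipped. */
	xor_iv_with_seq_sketch(iv_data + iv_offset, rec_seq);
}

Under this assumed layout, passing iv_data itself would shift the XOR one byte too early: it would touch the last salt byte and leave the final IV byte unmasked, producing a wrong nonce for every record.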
Fixes: f295b3ae9f59 ("net/tls: Add support of AES128-CCM based ciphers")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Cc: Vakul Garg <vakul.garg@nxp.com>
Cc: stable@vger.kernel.org # v5.2+
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 net/tls/tls_sw.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 122d5daed8b6..8cd011ea9fbb 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -515,7 +515,7 @@ static int tls_do_encryption(struct sock *sk,
 	memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv,
 	       prot->iv_size + prot->salt_size);
 
-	xor_iv_with_seq(prot->version, rec->iv_data, tls_ctx->tx.rec_seq);
+	xor_iv_with_seq(prot->version, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq);
 
 	sge->offset += prot->prepend_size;
 	sge->length -= prot->prepend_size;
@@ -1487,7 +1487,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
 	else
 		memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
 
-	xor_iv_with_seq(prot->version, iv, tls_ctx->rx.rec_seq);
+	xor_iv_with_seq(prot->version, iv + iv_offset, tls_ctx->rx.rec_seq);
 
 	/* Prepare AAD */
 	tls_make_aad(aad, rxm->full_len - prot->overhead_size +