From: Eiichi Tsukata <eiichi.tsukata@nutanix.com>
stable inclusion
from linux-4.19.213
commit c57fdeff69b152185fafabd37e6bfecfce51efda
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4OJ3D
CVE: NA
--------------------------------
commit a2d859e3fc97e79d907761550dbc03ff1b36479c upstream.
sctp_make_strreset_req() makes repeated calls to sctp_addto_chunk(), which automatically accounts for padding on each call. inreq and outreq are already 4-byte aligned, but the payload is not, and doing SCTP_PAD4(a + b) (which _sctp_make_chunk() did implicitly here) is different from SCTP_PAD4(a) + SCTP_PAD4(b) and not enough. This allowed an attempt to use more buffer than was allocated, triggering a BUG_ON.
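For illustration only (not part of the patch), a minimal standalone sketch of the arithmetic, assuming the kernel's usual round-up-to-a-multiple-of-4 definition of SCTP_PAD4 and using made-up sizes in place of the real request and stream-list lengths:

/* Standalone sketch, not kernel code: shows why per-part padding needs
 * more room than padding the sum. SCTP_PAD4 follows the kernel's
 * round-up-to-a-multiple-of-4 definition. Sizes are illustrative.
 */
#include <stdio.h>

#define SCTP_PAD4(s) (((s) + 3) & ~3)

int main(void)
{
	unsigned int outlen = 16 + 2;	/* header + 2-byte stream list: 18 */
	unsigned int inlen  = 16 + 2;	/* same on the inbound side:    18 */

	/* what _sctp_make_chunk() implicitly reserved */
	printf("SCTP_PAD4(a + b)            = %u\n",
	       SCTP_PAD4(outlen + inlen));		/* 36 */

	/* what the repeated, individually padded sctp_addto_chunk()
	 * calls actually consume, and what the fix now allocates */
	printf("SCTP_PAD4(a) + SCTP_PAD4(b) = %u\n",
	       SCTP_PAD4(outlen) + SCTP_PAD4(inlen));	/* 40 */
	return 0;
}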
Cc: Vlad Yasevich <vyasevich@gmail.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Fixes: cc16f00f6529 ("sctp: add support for generating stream reconf ssn reset request chunk")
Reported-by: Eiichi Tsukata <eiichi.tsukata@nutanix.com>
Signed-off-by: Eiichi Tsukata <eiichi.tsukata@nutanix.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Link: https://lore.kernel.org/r/b97c1f8b0c7ff79ac4ed206fc2c49d3612e0850c.163415684...
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 net/sctp/sm_make_chunk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
index 0789109c2d093..35e1fb708f4a7 100644
--- a/net/sctp/sm_make_chunk.c
+++ b/net/sctp/sm_make_chunk.c
@@ -3673,7 +3673,7 @@ struct sctp_chunk *sctp_make_strreset_req(
 	outlen = (sizeof(outreq) + stream_len) * out;
 	inlen = (sizeof(inreq) + stream_len) * in;
 
-	retval = sctp_make_reconf(asoc, outlen + inlen);
+	retval = sctp_make_reconf(asoc, SCTP_PAD4(outlen) + SCTP_PAD4(inlen));
 	if (!retval)
 		return NULL;
From: Jesper Dangaard Brouer <brouer@redhat.com>
stable inclusion
from linux-v4.19.185
commit 8c1a77ae15ce70a72f26f4bb83c50f769011220c
category: bugfix
bugzilla: NA
CVE: CVE-2021-0941
--------------------------------
commit 6306c1189e77a513bf02720450bb43bd4ba5d8ae upstream.
Multiple BPF helpers that can manipulate/increase the size of the SKB use __bpf_skb_max_len() as the max length. This function limits the size against the current net_device MTU (skb->dev->mtu).
When a BPF program grows the packet size, it should not be limited by the MTU. The MTU is a transmit limitation, and software receiving this packet should be allowed to increase the size. Furthermore, the current MTU check in __bpf_skb_max_len uses the MTU of the ingress/current net_device, which in the case of redirects is the wrong net_device.
This patch keeps a sanity max limit of SKB_MAX_ALLOC (16KiB); the real limit is enforced elsewhere in the system. Jesper's testing[1] showed it was not possible to exceed 8KiB when expanding the SKB size via a BPF helper. The limiting factor is the define KMALLOC_MAX_CACHE_SIZE, which is 8192 for the SLUB allocator (CONFIG_SLUB) when PAGE_SIZE is 4096. This define is in effect because the code is called from softirq context; see __gfp_pfmemalloc_flags() and __do_kmalloc_node(). Jakub's testing showed that frames above 16KiB can cause NICs to reset (but not crash). Keep the sanity limit at this level, as the memory layer can differ based on kernel config.
[1] https://github.com/xdp-project/bpf-examples/tree/master/MTU-tests
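As a point of reference only (not part of this patch), here is a minimal tc classifier sketch that grows the packet tail with bpf_skb_change_tail(); the program name grow_tail and the 2048-byte growth are illustrative. With the old __bpf_skb_max_len() check the helper failed once the new length exceeded the ingress device's MTU + hard_header_len; after this change only the SKB_MAX_ALLOC sanity cap applies.

/* Illustrative BPF_PROG_TYPE_SCHED_CLS program, not part of the patch.
 * Grows the packet tail; previously rejected beyond the ingress MTU,
 * now bounded only by the BPF_SKB_MAX_LEN (SKB_MAX_ALLOC) sanity cap.
 */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("classifier")
int grow_tail(struct __sk_buff *skb)
{
	__u32 new_len = skb->len + 2048;	/* may exceed the device MTU */

	if (bpf_skb_change_tail(skb, new_len, 0))
		return TC_ACT_SHOT;	/* still fails above the sanity cap */

	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";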
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/161287788936.790810.2937823995775097177.stgit@fi...
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 net/core/filter.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/net/core/filter.c b/net/core/filter.c
index 54fa2b827096d..5f24a6b828024 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2832,18 +2832,14 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 len_diff)
 	return 0;
 }
 
-static u32 __bpf_skb_max_len(const struct sk_buff *skb)
-{
-	return skb->dev ? skb->dev->mtu + skb->dev->hard_header_len :
-			  SKB_MAX_ALLOC;
-}
+#define BPF_SKB_MAX_LEN SKB_MAX_ALLOC
 
 static int bpf_skb_adjust_net(struct sk_buff *skb, s32 len_diff)
 {
 	bool trans_same = skb->transport_header == skb->network_header;
 	u32 len_cur, len_diff_abs = abs(len_diff);
 	u32 len_min = bpf_skb_net_base_len(skb);
-	u32 len_max = __bpf_skb_max_len(skb);
+	u32 len_max = BPF_SKB_MAX_LEN;
 	__be16 proto = skb_protocol(skb, true);
 	bool shrink = len_diff < 0;
 	int ret;
@@ -2922,7 +2918,7 @@ static int bpf_skb_trim_rcsum(struct sk_buff *skb, unsigned int new_len)
 static inline int __bpf_skb_change_tail(struct sk_buff *skb, u32 new_len,
 					u64 flags)
 {
-	u32 max_len = __bpf_skb_max_len(skb);
+	u32 max_len = BPF_SKB_MAX_LEN;
 	u32 min_len = __bpf_skb_min_len(skb);
 	int ret;
 
@@ -2998,7 +2994,7 @@ static const struct bpf_func_proto sk_skb_change_tail_proto = {
 static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
 					u64 flags)
 {
-	u32 max_len = __bpf_skb_max_len(skb);
+	u32 max_len = BPF_SKB_MAX_LEN;
 	u32 new_len = skb->len + head_room;
 	int ret;