
From: Eric Dumazet <edumazet@google.com>

mainline inclusion
from mainline-v6.9-rc5
commit 0f022d32c3eca477fbf79a205243a6123ed0fe11
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I9L5IO
CVE: CVE-2024-27010

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...

--------------------------------

When the mirred action is used on a classful egress qdisc and a packet is
mirrored or redirected to self we hit a qdisc lock deadlock.
See trace below.

[..... other info removed for brevity....]
[   82.890906]
[   82.890906] ============================================
[   82.890906] WARNING: possible recursive locking detected
[   82.890906] 6.8.0-05205-g77fadd89fe2d-dirty #213 Tainted: G        W
[   82.890906] --------------------------------------------
[   82.890906] ping/418 is trying to acquire lock:
[   82.890906] ffff888006994110 (&sch->q.lock){+.-.}-{3:3}, at: __dev_queue_xmit+0x1778/0x3550
[   82.890906]
[   82.890906] but task is already holding lock:
[   82.890906] ffff888006994110 (&sch->q.lock){+.-.}-{3:3}, at: __dev_queue_xmit+0x1778/0x3550
[   82.890906]
[   82.890906] other info that might help us debug this:
[   82.890906]  Possible unsafe locking scenario:
[   82.890906]
[   82.890906]        CPU0
[   82.890906]        ----
[   82.890906]   lock(&sch->q.lock);
[   82.890906]   lock(&sch->q.lock);
[   82.890906]
[   82.890906]  *** DEADLOCK ***
[   82.890906]
[..... other info removed for brevity....]

Example setup (eth0->eth0) to recreate:
tc qdisc add dev eth0 root handle 1: htb default 30
tc filter add dev eth0 handle 1: protocol ip prio 2 matchall \
     action mirred egress redirect dev eth0

Another example (eth0->eth1->eth0) to recreate:
tc qdisc add dev eth0 root handle 1: htb default 30
tc filter add dev eth0 handle 1: protocol ip prio 2 matchall \
     action mirred egress redirect dev eth1

tc qdisc add dev eth1 root handle 1: htb default 30
tc filter add dev eth1 handle 1: protocol ip prio 2 matchall \
     action mirred egress redirect dev eth0

We fix this by adding an owner field (CPU id) to struct Qdisc, set after
the root qdisc is entered. When the softirq enters it a second time, if
the qdisc owner is the same CPU, the packet is dropped to break the loop.

Reported-by: Mingshuai Ren <renmingshuai@huawei.com>
Closes: https://lore.kernel.org/netdev/20240314111713.5979-1-renmingshuai@huawei.com...
Fixes: 3bcb846ca4cf ("net: get rid of spin_trylock() in net_tx_action()")
Fixes: e578d9c02587 ("net: sched: use counter to break reclassify loops")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Victor Nogueira <victor@mojatatu.com>
Reviewed-by: Pedro Tammela <pctammela@mojatatu.com>
Tested-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Link: https://lore.kernel.org/r/20240415210728.36949-1-victor@mojatatu.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Conflicts:
	include/net/sch_generic.h
	net/core/dev.c
	net/sched/sch_generic.c
[Conflicts are caused by commits 97604c65bcda ("net: sched: remove one
pair of atomic operations"), 70713dddf3d2 ("net_sched: introduce
tracepoint trace_qdisc_enqueue()") and d62607c3fe45 ("net: rename
reference+tracking helpers"), which are not merged.]
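As context for the owner check, the following stand-alone user-space
sketch models the recursion break this patch applies: an owner id is
recorded while the queue is being filled, and a re-entrant enqueue from
the same owner is dropped instead of deadlocking on the queue lock.
fake_qdisc, enqueue_pkt and the single-level re-injection are invented
for illustration only; they are not kernel APIs.

/*
 * Stand-alone user-space model of the recursion break (not kernel code).
 */
#include <stdio.h>

struct fake_qdisc {
	int owner;	/* id of the context currently enqueuing, -1 if none */
	int qlen;	/* number of packets held by the queue */
};

/* Returns 0 when queued, -1 when dropped to break a self-redirect loop. */
static int enqueue_pkt(struct fake_qdisc *q, int cpu_id, int depth)
{
	if (q->owner == cpu_id)
		return -1;	/* same context re-entered this queue: drop */

	q->owner = cpu_id;	/* mirrors WRITE_ONCE(q->owner, smp_processor_id()) */
	q->qlen++;

	/* Pretend a mirred-to-self filter re-injects the packet once. */
	if (depth == 0 && enqueue_pkt(q, cpu_id, depth + 1) < 0)
		printf("re-injected packet dropped, loop broken\n");

	q->owner = -1;		/* mirrors WRITE_ONCE(q->owner, -1) */
	return 0;
}

int main(void)
{
	struct fake_qdisc q = { .owner = -1, .qlen = 0 };

	enqueue_pkt(&q, 0, 0);		/* "CPU 0" sends one packet */
	printf("packets actually queued: %d\n", q.qlen);	/* prints 1 */
	return 0;
}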
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
---
 include/net/sch_generic.h | 1 +
 net/core/dev.c            | 6 ++++++
 net/sched/sch_generic.c   | 1 +
 3 files changed, 8 insertions(+)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 8271b10bc30e..dee304f2eeaf 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -104,6 +104,7 @@ struct Qdisc {
 	struct gnet_stats_basic_packed bstats;
 	seqcount_t		running;
 	struct gnet_stats_queue	qstats;
+	int			owner;
 	unsigned long		state;
 	struct Qdisc		*next_sched;
 	struct sk_buff_head	skb_bad_txq;
diff --git a/net/core/dev.c b/net/core/dev.c
index 0109bfccadb8..1cac2557f1b9 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3785,6 +3785,10 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 		return rc;
 	}
 
+	if (unlikely(READ_ONCE(q->owner) == smp_processor_id())) {
+		kfree_skb(skb);
+		return NET_XMIT_DROP;
+	}
 	/*
 	 * Heuristic to force contended enqueues to serialize on a
 	 * separate lock before trying to get qdisc main lock.
@@ -3820,7 +3824,9 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 		qdisc_run_end(q);
 		rc = NET_XMIT_SUCCESS;
 	} else {
+		WRITE_ONCE(q->owner, smp_processor_id());
 		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
+		WRITE_ONCE(q->owner, -1);
 		if (qdisc_run_begin(q)) {
 			if (unlikely(contended)) {
 				spin_unlock(&q->busylock);
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index ecdd9e83f2f4..a3e9dd348500 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -899,6 +899,7 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
 	sch->dequeue = ops->dequeue;
 	sch->dev_queue = dev_queue;
 	sch->empty = true;
+	sch->owner = -1;
 	dev_hold(dev);
 
 	refcount_set(&sch->refcnt, 1);
-- 
2.34.1