On 2021/3/16 7:41, David Miller wrote:
From: Yunsheng Lin <linyunsheng@huawei.com>
Date: Mon, 15 Mar 2021 17:30:10 +0800
Currently qdisc_lock(q) is taken before enqueuing to and dequeuing from a lockless qdisc's skb_bad_txq/gso_skb queues; qdisc->seqlock is also taken, which provides the same protection as qdisc_lock(q).
This patch removes the unnecessary qdisc_lock(q) protection for a lockless qdisc's skb_bad_txq/gso_skb queues.
Also, dev_reset_queue() takes qdisc->seqlock for a lockless qdisc in addition to qdisc_lock(q) when doing a qdisc reset, and some_qdisc_is_busy() takes both qdisc->seqlock and qdisc_lock(q) when checking qdisc status. It is unnecessary to take both locks when the fast path only takes one, so this patch also changes these to take only qdisc_lock(q) for a locked qdisc, and only qdisc->seqlock for a lockless qdisc.
Since qdisc->seqlock is already taken for a lockless qdisc when calling qdisc_is_running() in some_qdisc_is_busy(), use qdisc->running to decide whether the lockless qdisc is running.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
What about other things protected by this lock, such as statistics and qlen?
This change looks too risky to me.
OK, if that is the case, maybe we should just remove qdisc->seqlock and use qdisc_lock(q) for lockless qdiscs too, so that we do not need to worry about the other things protected by qdisc_lock(q) for a lockless qdisc.
At least on the fast path, taking two locks for a lockless qdisc hurts performance when handling requeued skbs, especially if the lockless qdisc supports TCQ_F_CAN_BYPASS.