On 2021/6/1 12:51, Jakub Kicinski wrote:
On Mon, 31 May 2021 20:40:01 +0800 Yunsheng Lin wrote:
On 2021/5/31 9:10, Yunsheng Lin wrote:
On 2021/5/31 8:40, Yunsheng Lin wrote:
On 2021/5/31 4:21, Jakub Kicinski wrote:
[...]
CPU1                                      CPU2
qdisc_run_begin(q)                        .
.                                         enqueue skb1
dequeue skb1                              .
.                                         .
netdevice stopped and MISSED is clear     .
.                                         nolock_qdisc_is_empty() return true
requeue skb                               .
.                                         .
.                                         .
qdisc_run_end(q)                          .
.                                         qdisc_run_begin(q)
.                                         transmit skb2 directly
.                                         transmit the requeued skb1
The above sequence diagram seems more accurate; it is basically about how to avoid transmitting a packet directly, bypassing the requeued packet.
I see, thanks! That explains the need. Perhaps we can rephrase the comment? Maybe:
/* Retest nolock_qdisc_is_empty() within the protection
* of q->seqlock to protect from racing with requeuing.
*/
Yes, if we still decide to preserve the nolock_qdisc_is_empty() rechecking under q->seqlock.
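For reference, the nolock_qdisc_is_empty() being retested is just a lockless peek at the qdisc state. A minimal sketch, assuming the v1 form based only on the MISSED bit (the exact check in the patchset may differ):

/* Sketch, assuming the v1 form: the bypass-side emptiness test is a
 * single lockless read of the qdisc state, true only when no other
 * CPU has signalled a missed run.
 */
static inline bool nolock_qdisc_is_empty(const struct Qdisc *qdisc)
{
	return !test_bit(__QDISC_STATE_MISSED, &qdisc->state);
}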
I did some interesting testing to show how adjusting a small amount of code can cause a noticeable performance degradation.
- I used the below patch to remove the nolock_qdisc_is_empty() retesting under q->seqlock:
@@ -3763,17 +3763,6 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 	if (q->flags & TCQ_F_NOLOCK) {
 		if (q->flags & TCQ_F_CAN_BYPASS && nolock_qdisc_is_empty(q) &&
 		    qdisc_run_begin(q)) {
-			/* Retest nolock_qdisc_is_empty() within the protection
-			 * of q->seqlock to ensure qdisc is indeed empty.
-			 */
-			if (unlikely(!nolock_qdisc_is_empty(q))) {
-				rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
-				__qdisc_run(q);
-				qdisc_run_end(q);
-
-				goto no_lock_out;
-			}
-
 			qdisc_bstats_cpu_update(q, skb);
 			if (sch_direct_xmit(skb, q, dev, txq, NULL, true) &&
 			    !nolock_qdisc_is_empty(q))
@@ -3786,7 +3775,6 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
 		qdisc_run(q);
 
-no_lock_out:
 		if (unlikely(to_free))
 			kfree_skb_list(to_free);
 		return rc;
which gave the below performance improvement:
threads    v1          v1 + above patch    delta
1          3.21Mpps    3.20Mpps            -0.3%
2          5.56Mpps    5.94Mpps            +4.9%
4          5.58Mpps    5.60Mpps            +0.3%
8          2.76Mpps    2.77Mpps            +0.3%
16         2.23Mpps    2.23Mpps            +0.0%
v1 = this patchset.
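For clarity, with the retest removed the bypass fast path reduces to roughly the following, reconstructed from the diff context above (a sketch, not the exact tree state):

	if (q->flags & TCQ_F_NOLOCK) {
		if (q->flags & TCQ_F_CAN_BYPASS && nolock_qdisc_is_empty(q) &&
		    qdisc_run_begin(q)) {
			/* q->seqlock is held here: transmit directly; a
			 * concurrent enqueue sets MISSED, which the
			 * nolock_qdisc_is_empty() retest after
			 * sch_direct_xmit() turns into __qdisc_run().
			 */
			qdisc_bstats_cpu_update(q, skb);
			if (sch_direct_xmit(skb, q, dev, txq, NULL, true) &&
			    !nolock_qdisc_is_empty(q))
				__qdisc_run(q);

			qdisc_run_end(q);
			return NET_XMIT_SUCCESS;
		}

		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
		qdisc_run(q);

		if (unlikely(to_free))
			kfree_skb_list(to_free);
		return rc;
	}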
- After the above testing, it seemed worthwhile to remove the nolock_qdisc_is_empty() retesting under q->seqlock, so I used the below patch to make sure nolock_qdisc_is_empty() always returns false for the netdev-queue-stopped case.
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -38,6 +38,15 @@ EXPORT_SYMBOL(default_qdisc_ops);
 static void qdisc_maybe_clear_missed(struct Qdisc *q,
 				     const struct netdev_queue *txq)
 {
+	set_bit(__QDISC_STATE_DRAINING, &q->state);
+
+	/* Make sure DRAINING is set before clearing MISSED
+	 * to make sure nolock_qdisc_is_empty() always returns
+	 * false, avoiding transmitting a packet directly
+	 * bypassing the requeued packet.
+	 */
+	smp_mb__after_atomic();
+
 	clear_bit(__QDISC_STATE_MISSED, &q->state);
 
 	/* Make sure the below netif_xmit_frozen_or_stopped()
@@ -52,8 +61,6 @@ static void qdisc_maybe_clear_missed(struct Qdisc *q,
 	 */
 	if (!netif_xmit_frozen_or_stopped(txq))
 		set_bit(__QDISC_STATE_MISSED, &q->state);
-	else
-		set_bit(__QDISC_STATE_DRAINING, &q->state);
 }
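The ordering requirement here is the usual publish-before-retract pairing; a sketch of the two sides (the reader form, sampling both bits in one read, is an assumption about how nolock_qdisc_is_empty() would be extended):

	/* Writer side, as in qdisc_maybe_clear_missed() above: publish
	 * DRAINING before MISSED can disappear.
	 */
	set_bit(__QDISC_STATE_DRAINING, &q->state);
	smp_mb__after_atomic();	/* order DRAINING store before MISSED clear */
	clear_bit(__QDISC_STATE_MISSED, &q->state);

	/* Reader side (assumed form of nolock_qdisc_is_empty()): both
	 * bits are sampled in one READ_ONCE(), so at any point during the
	 * writer's transition at least one bit reads as set and the
	 * bypass is refused.
	 */
	empty = !(READ_ONCE(q->state) & (BIT(__QDISC_STATE_MISSED) |
					 BIT(__QDISC_STATE_DRAINING)));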
But this would not be enough, because we may also clear MISSED in pfifo_fast_dequeue()?
For the MISSED clearing in pfifo_fast_dequeue(), it looks like the data race described in RFC v3 too:
CPU1                     CPU2                 CPU3
qdisc_run_begin(q)       .                    .
.                        MISSED is set        .
MISSED is cleared        .                    .
q->dequeue()             .                    .
.                        enqueue skb1         .
check MISSED # true      .                    .
qdisc_run_end(q)         .                    .
.                        .                    qdisc_run_begin(q) # true
.                        MISSED is set        send skb2 directly
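For context, the MISSED clearing in question is the clear-and-retry done in the dequeue path; roughly (a sketch based on the earlier packet-stuck fix, not the exact tree state):

	/* Sketch of the clear-and-retry pattern in pfifo_fast_dequeue(). */
	if (need_retry &&
	    test_bit(__QDISC_STATE_MISSED, &qdisc->state)) {
		/* Clearing MISSED here, rather than in qdisc_run_begin(),
		 * keeps the second trylock and the __netif_schedule() in
		 * qdisc_run_end() off the fast path.
		 */
		clear_bit(__QDISC_STATE_MISSED, &qdisc->state);

		/* Make sure the retried dequeue happens after clearing
		 * MISSED.
		 */
		smp_mb__after_atomic();

		need_retry = false;
		goto retry;
	}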
Combining the above two patches gave the below performance data:
threads    v1          v1 + above two patches    delta
1          3.21Mpps    3.20Mpps                  -0.3%
2          5.56Mpps    5.94Mpps                  +4.9%
4          5.58Mpps    5.02Mpps                  -10%
8          2.76Mpps    2.77Mpps                  +0.3%
16         2.23Mpps    2.23Mpps                  +0.0%
So the adjustment in qdisc_maybe_clear_missed() seems to have caused about a 10% performance degradation for the 4-thread case.
And the CPU top-down perf data suggested that icache misses and bad speculation are the main factors in that performance difference.
I tried to control those factors by removing the inline mark from the functions in sch_generic.c and adding likely/unlikely annotations to the netif_xmit_frozen_or_stopped() checks there. That gave a noticeable performance improvement for the 1- and 2-thread cases (and some improvement in the IP forwarding test too), but not for the 4-thread case.
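The annotation change is of this shape (hypothetical placement; assuming the frozen/stopped case is the cold path):

	/* Hypothetical shape of the annotation: the txq is assumed to be
	 * rarely frozen or stopped, so hint the compiler that the
	 * set_bit() path is the hot one.
	 */
	if (likely(!netif_xmit_frozen_or_stopped(txq)))
		set_bit(__QDISC_STATE_MISSED, &q->state);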
So it seems we need to accept the performance degradation for the 4-thread case? Or any ideas?
No ideas. Are the threads pinned to CPUs in some particular way?
pktgen already seems to run one thread per CPU, so I do not need to do the pinning myself; for the 4-thread case, it runs on CPUs 0-3.
It seems more related to the specific CPU implementation.