[sched/fair] 5f94d1b650: stress-ng.sock.ops_per_sec -25.2% regression
by kernel test robot, 05 May '21
Greetings,
FYI, we noticed a -25.2% regression of stress-ng.sock.ops_per_sec due to commit:
commit: 5f94d1b650d14e01f383e3d02e40c13a9ccecaf4 ("[PATCH] sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt")
url: https://github.com/0day-ci/linux/commits/Barry-Song/sched-fair-don-t-use-wa…
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 2ea46c6fc9452ac100ad907b051d797225847e33
in testcase: stress-ng
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with following parameters:
nr_threads: 10%
disk: 1HDD
testtime: 60s
fs: ext4
class: os
test: sock
cpufreq_governor: performance
ucode: 0x5003006
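The patch under test changes where the scheduler places a task that receives a sync wake-up. As background, the sketch below is a minimal user-space model of the behaviour the patch title describes, not the kernel diff itself; every name in it (struct wakeup, pick_target_cpu, the from_interrupt flag) is illustrative. A sync wake-up from a task hints that the waker is about to sleep, so the wakee may cheaply run on the waker's CPU; when the wake-up is issued from interrupt context, the interrupted task is not really a waker and keeps running, so the patch declines that shortcut and keeps the wakee on its previous CPU.
#include <stdbool.h>
#include <stdio.h>
/* Illustrative model of the placement heuristic named in the patch
 * title; field and function names are hypothetical, not the kernel's. */
struct wakeup {
	bool sync;           /* WF_SYNC-style hint: "waker will sleep soon" */
	bool from_interrupt; /* wake-up issued from hard/soft IRQ context   */
	int waker_cpu;       /* CPU the wake-up is running on               */
	int wakee_prev_cpu;  /* CPU the wakee last ran on                   */
};
static int pick_target_cpu(const struct wakeup *w)
{
	/* Only trust the sync hint when a real task is the waker. */
	if (w->sync && !w->from_interrupt)
		return w->waker_cpu;
	return w->wakee_prev_cpu; /* keep the cache-warm previous CPU */
}
int main(void)
{
	struct wakeup irq_wake  = { true, true,  3, 7 };
	struct wakeup task_wake = { true, false, 3, 7 };
	printf("sync wake from interrupt -> cpu %d\n", pick_target_cpu(&irq_wake));  /* 7 */
	printf("sync wake from a task    -> cpu %d\n", pick_target_cpu(&task_wake)); /* 3 */
	return 0;
}
One plausible reading of the large node-load-miss and node-store-miss increases reported below is this trade-off going the wrong way for these benchmarks: wakees land on remote, cache-cold CPUs more often.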
In addition to that, the commit also has significant impact on the following tests:
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | apachebench: apachebench.requests_per_second -5.6% regression |
| test machine | 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
| test parameters | cluster=cs-localhost |
| | concurrency=8000 |
| | cpufreq_governor=performance |
| | runtime=300s |
| | ucode=0x5003006 |
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | netperf: netperf.Throughput_Mbps -52.4% regression |
| test machine | 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cluster=cs-localhost |
| | cpufreq_governor=performance |
| | ip=ipv4 |
| | nr_threads=1 |
| | runtime=300s |
| | test=UDP_STREAM |
| | ucode=0x5003006 |
+------------------+-------------------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
bin/lkp run generated-yaml-file # run the test and collect results
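For a rough manual cross-check without the full lkp harness, the sock workload can also be approximated with stress-ng directly (a sketch only: the lkp job additionally pins the performance cpufreq governor, the ext4 test disk, and the microcode, so absolute numbers will differ):
stress-ng --sock 9 --timeout 60s --metrics-brief # 9 workers, ~10% of the 96 CPU threads
The --metrics-brief output includes a per-stressor ops/s figure that roughly corresponds to stress-ng.sock.ops_per_sec below.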
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
os/gcc-9/performance/1HDD/ext4/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp5/sock/stress-ng/60s/0x5003006
commit:
2ea46c6fc9 ("cpumask/hotplug: Fix cpu_dying() state tracking")
5f94d1b650 ("sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt")
2ea46c6fc9452ac1 5f94d1b650d14e01f383e3d02e4
---------------- ---------------------------
%stddev %change %stddev
\ | \
668165 -25.2% 499709 ± 7% stress-ng.sock.ops
11135 -25.2% 8328 ± 7% stress-ng.sock.ops_per_sec
2633 ± 5% -18.0% 2159 ± 8% stress-ng.time.involuntary_context_switches
11225 ± 2% -6.1% 10545 stress-ng.time.minor_page_faults
1283 -0.9% 1272 stress-ng.time.percent_of_cpu_this_job_got
32.76 -21.3% 25.78 ± 6% stress-ng.time.user_time
39827551 -9.3% 36139210 ± 4% stress-ng.time.voluntary_context_switches
0.53 -0.1 0.44 ± 4% mpstat.cpu.all.usr%
1218481 -8.8% 1111783 ± 4% vmstat.system.cs
36137621 +30.6% 47196861 ± 12% cpuidle.C1.time
9079420 +19.3% 10827569 ± 8% cpuidle.C1.usage
73682855 -15.0% 62612480 ± 6% cpuidle.POLL.time
30399709 -17.5% 25089524 ± 9% cpuidle.POLL.usage
20602586 ± 4% -23.7% 15709836 ± 12% numa-numastat.node0.local_node
20665440 ± 4% -23.8% 15744289 ± 11% numa-numastat.node0.numa_hit
21017060 ± 3% -25.6% 15638989 ± 12% numa-numastat.node1.local_node
21040859 ± 3% -25.4% 15691162 ± 12% numa-numastat.node1.numa_hit
11110566 -22.4% 8621534 ± 10% numa-vmstat.node0.numa_hit
11047694 -22.3% 8586688 ± 10% numa-vmstat.node0.numa_local
10652347 -23.7% 8131841 ± 8% numa-vmstat.node1.numa_hit
10462586 -24.4% 7914174 ± 9% numa-vmstat.node1.numa_local
6200 ± 2% -11.7% 5472 ± 4% slabinfo.dmaengine-unmap-16.active_objs
6204 ± 2% -11.7% 5478 ± 4% slabinfo.dmaengine-unmap-16.num_objs
8326 ± 2% -24.8% 6257 ± 8% slabinfo.sock_inode_cache.active_objs
8326 ± 2% -24.8% 6257 ± 8% slabinfo.sock_inode_cache.num_objs
9070645 +19.3% 10820329 ± 8% turbostat.C1
0.58 +0.2 0.77 ± 12% turbostat.C1%
54.00 +4.8% 56.57 turbostat.PkgTmp
202.21 +1.4% 205.06 turbostat.PkgWatt
80.33 +17.3% 94.21 ± 3% turbostat.RAMWatt
23149 -2.7% 22523 proc-vmstat.nr_slab_reclaimable
1054 ± 25% -59.0% 432.43 ± 20% proc-vmstat.numa_hint_faults
41537249 -24.9% 31186035 ± 7% proc-vmstat.numa_hit
41450577 -25.0% 31099388 ± 7% proc-vmstat.numa_local
3.312e+08 -24.8% 2.491e+08 ± 7% proc-vmstat.pgalloc_normal
3.311e+08 -24.8% 2.489e+08 ± 7% proc-vmstat.pgfree
273665 ± 3% -38.2% 169120 ± 17% interrupts.CAL:Function_call_interrupts
3560 ± 33% -44.7% 1968 ± 35% interrupts.CPU1.CAL:Function_call_interrupts
2900 ± 37% -50.5% 1436 ± 58% interrupts.CPU13.CAL:Function_call_interrupts
488.67 ± 29% -62.2% 184.71 ± 63% interrupts.CPU24.RES:Rescheduling_interrupts
3783 ± 19% -59.9% 1518 ± 40% interrupts.CPU25.CAL:Function_call_interrupts
282.67 ± 29% -56.6% 122.57 ± 54% interrupts.CPU25.RES:Rescheduling_interrupts
3736 ± 38% -66.6% 1246 ± 34% interrupts.CPU27.CAL:Function_call_interrupts
268.50 ± 39% -71.3% 77.00 ± 49% interrupts.CPU27.RES:Rescheduling_interrupts
3493 ± 41% -52.3% 1667 ± 40% interrupts.CPU36.CAL:Function_call_interrupts
3883 ± 54% -55.7% 1719 ± 46% interrupts.CPU5.CAL:Function_call_interrupts
4026 ± 20% -55.6% 1788 ± 37% interrupts.CPU50.CAL:Function_call_interrupts
4144 ± 23% -51.2% 2022 ± 70% interrupts.CPU50.NMI:Non-maskable_interrupts
4144 ± 23% -51.2% 2022 ± 70% interrupts.CPU50.PMI:Performance_monitoring_interrupts
2631 ± 38% -46.5% 1408 ± 35% interrupts.CPU52.CAL:Function_call_interrupts
4379 ± 54% -59.0% 1796 ± 78% interrupts.CPU55.CAL:Function_call_interrupts
2710 ± 35% -58.4% 1127 ± 63% interrupts.CPU56.CAL:Function_call_interrupts
183.67 ± 21% -65.4% 63.57 ± 58% interrupts.CPU56.RES:Rescheduling_interrupts
2969 ± 37% -52.6% 1407 ± 48% interrupts.CPU57.CAL:Function_call_interrupts
3712 ± 18% -73.9% 968.57 ± 96% interrupts.CPU60.NMI:Non-maskable_interrupts
3712 ± 18% -73.9% 968.57 ± 96% interrupts.CPU60.PMI:Performance_monitoring_interrupts
2946 ± 27% -56.2% 1289 ± 52% interrupts.CPU61.CAL:Function_call_interrupts
2221 ± 35% -59.7% 895.29 ± 89% interrupts.CPU61.NMI:Non-maskable_interrupts
2221 ± 35% -59.7% 895.29 ± 89% interrupts.CPU61.PMI:Performance_monitoring_interrupts
186.50 ± 29% -52.0% 89.43 ± 76% interrupts.CPU65.RES:Rescheduling_interrupts
4913 ± 42% -73.8% 1286 ± 51% interrupts.CPU67.CAL:Function_call_interrupts
3640 ± 42% -63.3% 1336 ± 37% interrupts.CPU75.CAL:Function_call_interrupts
4197 ± 41% -74.5% 1071 ± 51% interrupts.CPU76.CAL:Function_call_interrupts
274.00 ± 46% -66.9% 90.57 ± 70% interrupts.CPU76.RES:Rescheduling_interrupts
4458 ± 33% -48.2% 2307 ± 41% interrupts.CPU78.CAL:Function_call_interrupts
174.00 ± 21% -42.9% 99.43 ± 42% interrupts.CPU91.RES:Rescheduling_interrupts
2615 ± 43% -52.8% 1235 ± 41% interrupts.CPU93.CAL:Function_call_interrupts
17598 ± 4% -23.0% 13556 ± 14% softirqs.CPU0.RCU
13220 ± 6% -20.5% 10514 ± 10% softirqs.CPU1.RCU
12111 ± 13% -19.9% 9699 ± 9% softirqs.CPU16.RCU
12269 ± 4% -18.3% 10030 ± 13% softirqs.CPU2.RCU
15935 ± 10% -15.7% 13430 ± 20% softirqs.CPU24.RCU
14418 ± 14% -26.6% 10583 ± 14% softirqs.CPU25.RCU
788851 ± 32% -55.0% 354883 ± 28% softirqs.CPU27.NET_RX
13652 ± 13% -28.7% 9734 ± 5% softirqs.CPU27.RCU
13432 ± 10% -15.7% 11328 ± 4% softirqs.CPU27.SCHED
11413 ± 12% -17.5% 9415 ± 15% softirqs.CPU4.RCU
13077 ± 13% -13.7% 11288 ± 21% softirqs.CPU49.SCHED
14547 ± 12% -22.2% 11314 ± 8% softirqs.CPU50.RCU
13654 ± 3% -11.6% 12076 ± 11% softirqs.CPU52.RCU
830025 ± 36% -63.2% 305684 ± 58% softirqs.CPU56.NET_RX
13872 ± 16% -38.1% 8591 ± 10% softirqs.CPU56.RCU
13666 ± 10% -29.7% 9607 ± 17% softirqs.CPU57.RCU
13660 ± 13% -26.0% 10105 ± 13% softirqs.CPU60.RCU
13861 ± 11% -12.2% 12172 ± 10% softirqs.CPU60.SCHED
12378 ± 9% -25.7% 9199 ± 13% softirqs.CPU61.RCU
13237 ± 12% -17.8% 10879 ± 10% softirqs.CPU63.RCU
14837 ± 20% -26.4% 10923 ± 12% softirqs.CPU65.RCU
13078 ± 10% -20.0% 10462 ± 9% softirqs.CPU68.RCU
1014501 ± 29% -51.1% 496322 ± 35% softirqs.CPU76.NET_RX
14929 ± 14% -33.3% 9964 ± 8% softirqs.CPU76.RCU
14824 ± 9% -20.2% 11834 ± 9% softirqs.CPU76.SCHED
14956 ± 15% -29.7% 10517 ± 16% softirqs.CPU79.RCU
14781 ± 18% -30.9% 10210 ± 20% softirqs.CPU80.RCU
665014 ± 50% -46.2% 357487 ± 55% softirqs.CPU83.NET_RX
12417 ± 14% -25.5% 9246 ± 9% softirqs.CPU83.RCU
12715 ± 10% -19.8% 10191 ± 10% softirqs.CPU86.RCU
54301315 -10.1% 48817727 ± 3% softirqs.NET_RX
1165787 ± 2% -14.0% 1002845 ± 2% softirqs.RCU
21.24 -3.7% 20.44 ± 2% perf-stat.i.MPKI
7.91e+09 -18.5% 6.447e+09 ± 5% perf-stat.i.branch-instructions
1.218e+08 -17.6% 1.003e+08 ± 5% perf-stat.i.branch-misses
0.96 ± 31% +24.3 25.26 ± 28% perf-stat.i.cache-miss-rate%
4298146 ± 3% +3705.0% 1.635e+08 ± 22% perf-stat.i.cache-misses
8.4e+08 -20.9% 6.648e+08 ± 7% perf-stat.i.cache-references
1262713 -9.0% 1149476 ± 4% perf-stat.i.context-switches
1.30 +20.3% 1.57 ± 5% perf-stat.i.cpi
133.25 ± 2% -5.4% 126.01 ± 2% perf-stat.i.cpu-migrations
11560 ± 3% -96.1% 449.24 ± 21% perf-stat.i.cycles-between-cache-misses
1.12e+10 -18.1% 9.179e+09 ± 5% perf-stat.i.dTLB-loads
41181 ± 15% -31.8% 28074 ± 15% perf-stat.i.dTLB-store-misses
6.345e+09 -18.1% 5.195e+09 ± 5% perf-stat.i.dTLB-stores
94303727 -16.4% 78848232 ± 5% perf-stat.i.iTLB-load-misses
3.898e+10 -18.5% 3.178e+10 ± 5% perf-stat.i.instructions
0.78 -17.1% 0.64 ± 5% perf-stat.i.ipc
0.85 ± 8% -23.3% 0.65 ± 4% perf-stat.i.metric.K/sec
274.05 -18.0% 224.70 ± 5% perf-stat.i.metric.M/sec
81.79 +15.2 96.96 perf-stat.i.node-load-miss-rate%
874824 ± 3% +7178.9% 63677834 ± 23% perf-stat.i.node-load-misses
192930 ± 4% +534.0% 1223150 ± 21% perf-stat.i.node-loads
87.79 ± 2% +9.5 97.27 perf-stat.i.node-store-miss-rate%
657426 ± 3% +1728.3% 12019579 ± 22% perf-stat.i.node-store-misses
83513 ± 17% -51.2% 40778 ± 6% perf-stat.i.node-stores
21.55 -3.0% 20.90 perf-stat.overall.MPKI
1.54 +0.0 1.56 perf-stat.overall.branch-miss-rate%
0.51 ± 3% +24.6 25.11 ± 28% perf-stat.overall.cache-miss-rate%
1.28 +22.6% 1.57 ± 5% perf-stat.overall.cpi
11588 ± 3% -97.2% 322.82 ± 26% perf-stat.overall.cycles-between-cache-misses
0.78 -18.2% 0.64 ± 5% perf-stat.overall.ipc
81.93 +16.2 98.10 perf-stat.overall.node-load-miss-rate%
88.73 ± 2% +10.9 99.64 perf-stat.overall.node-store-miss-rate%
7.782e+09 -18.5% 6.343e+09 ± 5% perf-stat.ps.branch-instructions
1.198e+08 -17.6% 98698837 ± 5% perf-stat.ps.branch-misses
4230041 ± 3% +3703.2% 1.609e+08 ± 22% perf-stat.ps.cache-misses
8.265e+08 -20.9% 6.541e+08 ± 7% perf-stat.ps.cache-references
1242259 -9.0% 1130898 ± 4% perf-stat.ps.context-switches
131.16 ± 2% -5.4% 124.02 ± 2% perf-stat.ps.cpu-migrations
1.102e+10 -18.1% 9.031e+09 ± 5% perf-stat.ps.dTLB-loads
40640 ± 15% -31.9% 27660 ± 15% perf-stat.ps.dTLB-store-misses
6.243e+09 -18.1% 5.111e+09 ± 5% perf-stat.ps.dTLB-stores
92776465 -16.4% 77575353 ± 5% perf-stat.ps.iTLB-load-misses
3.835e+10 -18.5% 3.127e+10 ± 5% perf-stat.ps.instructions
860793 ± 3% +7176.9% 62639147 ± 23% perf-stat.ps.node-load-misses
189892 ± 4% +533.7% 1203290 ± 21% perf-stat.ps.node-loads
646849 ± 3% +1727.9% 11823793 ± 22% perf-stat.ps.node-store-misses
82269 ± 17% -51.1% 40208 ± 6% perf-stat.ps.node-stores
2.442e+12 -18.9% 1.98e+12 ± 5% perf-stat.total.instructions
5.42 ± 7% -1.0 4.38 ± 9% perf-profile.calltrace.cycles-pp.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.__sys_sendto
5.29 ± 8% -1.0 4.25 ± 9% perf-profile.calltrace.cycles-pp.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
2.44 ± 8% -0.8 1.64 ± 14% perf-profile.calltrace.cycles-pp.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg
2.33 ± 8% -0.8 1.56 ± 15% perf-profile.calltrace.cycles-pp.__ip_queue_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked
0.66 ± 8% -0.3 0.39 ± 63% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
1.28 ± 7% -0.3 1.01 ± 10% perf-profile.calltrace.cycles-pp.sock_do_ioctl.sock_ioctl.do_vfs_ioctl.__x64_sys_ioctl.do_syscall_64
1.37 ± 8% -0.3 1.11 ± 10% perf-profile.calltrace.cycles-pp.lock_sock_nested.tcp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
0.89 ± 7% -0.2 0.68 ± 11% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
0.81 ± 7% -0.2 0.66 ± 7% perf-profile.calltrace.cycles-pp.tcp_rcv_space_adjust.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
0.61 ± 8% +0.2 0.82 ± 12% perf-profile.calltrace.cycles-pp.__release_sock.release_sock.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
0.45 ± 44% +0.3 0.71 ± 13% perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.__release_sock.release_sock.tcp_recvmsg.inet_recvmsg
0.34 ± 70% +0.3 0.65 ± 12% perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock.tcp_recvmsg
1.77 ± 7% -0.4 1.40 ± 9% perf-profile.children.cycles-pp.sock_ioctl
1.74 ± 5% -0.3 1.39 ± 9% perf-profile.children.cycles-pp.__dev_queue_xmit
1.15 ± 7% -0.3 0.85 ± 10% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.94 ± 8% -0.2 0.71 ± 10% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.93 ± 4% -0.2 0.76 ± 9% perf-profile.children.cycles-pp.dev_hard_start_xmit
0.61 ± 9% -0.2 0.45 ± 20% perf-profile.children.cycles-pp.update_rq_clock
0.71 ± 8% -0.2 0.55 ± 11% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.64 ± 5% -0.1 0.50 ± 8% perf-profile.children.cycles-pp.__entry_text_start
0.66 ± 5% -0.1 0.52 ± 14% perf-profile.children.cycles-pp.aa_sk_perm
0.84 ± 5% -0.1 0.71 ± 9% perf-profile.children.cycles-pp.loopback_xmit
0.57 ± 7% -0.1 0.45 ± 10% perf-profile.children.cycles-pp.tcp_mstamp_refresh
0.45 ± 6% -0.1 0.33 ± 13% perf-profile.children.cycles-pp.security_socket_sendmsg
0.45 ± 8% -0.1 0.34 ± 12% perf-profile.children.cycles-pp.sockfd_lookup_light
0.35 ± 12% -0.1 0.24 ± 11% perf-profile.children.cycles-pp.__virt_addr_valid
0.55 ± 8% -0.1 0.45 ± 11% perf-profile.children.cycles-pp.___might_sleep
0.40 ± 8% -0.1 0.30 ± 11% perf-profile.children.cycles-pp.__fput
0.41 ± 8% -0.1 0.31 ± 11% perf-profile.children.cycles-pp.task_work_run
0.41 ± 7% -0.1 0.34 ± 12% perf-profile.children.cycles-pp.__might_fault
0.26 ± 8% -0.1 0.19 ± 8% perf-profile.children.cycles-pp.sock_close
0.24 ± 9% -0.1 0.18 ± 9% perf-profile.children.cycles-pp.inet_release
0.25 ± 8% -0.1 0.19 ± 8% perf-profile.children.cycles-pp.__sock_release
0.20 ± 7% -0.1 0.14 ± 12% perf-profile.children.cycles-pp.__tcp_close
0.22 ± 8% -0.1 0.16 ± 9% perf-profile.children.cycles-pp.tcp_close
0.19 ± 11% -0.1 0.14 ± 18% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.28 ± 7% -0.1 0.22 ± 13% perf-profile.children.cycles-pp.__cond_resched
0.15 ± 9% -0.1 0.09 ± 21% perf-profile.children.cycles-pp.resched_curr
0.18 ± 9% -0.1 0.13 ± 19% perf-profile.children.cycles-pp.check_preempt_curr
0.18 ± 6% -0.0 0.13 ± 8% perf-profile.children.cycles-pp.__put_user_nocheck_4
0.14 ± 10% -0.0 0.10 ± 16% perf-profile.children.cycles-pp.__sk_mem_schedule
0.19 ± 8% -0.0 0.15 ± 10% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.16 ± 9% -0.0 0.12 ± 19% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.13 ± 9% -0.0 0.09 ± 19% perf-profile.children.cycles-pp.__sk_mem_raise_allocated
0.18 ± 8% -0.0 0.14 ± 11% perf-profile.children.cycles-pp.do_tcp_setsockopt
0.21 ± 7% -0.0 0.17 ± 10% perf-profile.children.cycles-pp.tcp_rcv_synsent_state_process
0.11 ± 4% -0.0 0.08 ± 13% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.14 ± 9% -0.0 0.11 ± 9% perf-profile.children.cycles-pp.open_related_ns
0.13 ± 6% -0.0 0.10 ± 16% perf-profile.children.cycles-pp.__sys_accept4
0.12 ± 5% -0.0 0.09 ± 10% perf-profile.children.cycles-pp.tcp_fin
0.07 ± 7% -0.0 0.04 ± 64% perf-profile.children.cycles-pp.__x64_sys_accept4
0.13 ± 9% -0.0 0.10 ± 14% perf-profile.children.cycles-pp.import_single_range
0.09 ± 9% -0.0 0.07 ± 13% perf-profile.children.cycles-pp.__ip_finish_output
0.08 -0.0 0.06 ± 13% perf-profile.children.cycles-pp.__sock_wfree
0.07 ± 6% -0.0 0.05 ± 42% perf-profile.children.cycles-pp.ksys_read
0.07 ± 6% -0.0 0.05 ± 41% perf-profile.children.cycles-pp.vfs_read
0.09 ± 4% -0.0 0.07 ± 12% perf-profile.children.cycles-pp.tcp_update_recv_tstamps
0.09 ± 14% +0.0 0.13 ± 12% perf-profile.children.cycles-pp.perf_tp_event
0.07 ± 11% +0.0 0.11 ± 13% perf-profile.children.cycles-pp.ip_copy_addrs
0.15 ± 9% +0.1 0.20 ± 10% perf-profile.children.cycles-pp.nr_iowait_cpu
0.08 ± 12% +0.1 0.17 ± 13% perf-profile.children.cycles-pp.available_idle_cpu
0.00 +0.2 0.22 ± 38% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.00 +0.2 0.23 ± 44% perf-profile.children.cycles-pp.sched_ttwu_pending
0.00 +0.4 0.37 ± 46% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
1.15 ± 7% -0.3 0.85 ± 10% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.47 ± 9% -0.2 0.32 ± 19% perf-profile.self.cycles-pp.update_rq_clock
0.62 ± 5% -0.2 0.47 ± 12% perf-profile.self.cycles-pp.__dev_queue_xmit
0.64 ± 5% -0.2 0.49 ± 8% perf-profile.self.cycles-pp.__entry_text_start
0.50 ± 8% -0.1 0.39 ± 11% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.33 ± 12% -0.1 0.22 ± 12% perf-profile.self.cycles-pp.__virt_addr_valid
0.45 ± 4% -0.1 0.35 ± 14% perf-profile.self.cycles-pp.aa_sk_perm
0.33 ± 5% -0.1 0.26 ± 14% perf-profile.self.cycles-pp.set_next_entity
0.09 ± 8% -0.1 0.04 ± 63% perf-profile.self.cycles-pp.dev_hard_start_xmit
0.15 ± 9% -0.1 0.09 ± 21% perf-profile.self.cycles-pp.resched_curr
0.27 ± 7% -0.1 0.22 ± 12% perf-profile.self.cycles-pp.__sys_sendto
0.23 ± 7% -0.1 0.18 ± 9% perf-profile.self.cycles-pp.sock_ioctl
0.21 ± 6% -0.1 0.16 ± 10% perf-profile.self.cycles-pp._copy_to_iter
0.18 ± 5% -0.0 0.13 ± 8% perf-profile.self.cycles-pp.__put_user_nocheck_4
0.15 ± 10% -0.0 0.10 ± 11% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.19 ± 11% -0.0 0.14 ± 14% perf-profile.self.cycles-pp.inet_ioctl
0.16 ± 9% -0.0 0.12 ± 20% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.19 ± 9% -0.0 0.15 ± 12% perf-profile.self.cycles-pp.__sys_recvfrom
0.14 ± 7% -0.0 0.09 ± 12% perf-profile.self.cycles-pp.lock_sock_nested
0.13 ± 9% -0.0 0.09 ± 16% perf-profile.self.cycles-pp.__sk_mem_raise_allocated
0.19 ± 5% -0.0 0.16 ± 9% perf-profile.self.cycles-pp.process_backlog
0.07 ± 8% -0.0 0.04 ± 63% perf-profile.self.cycles-pp.tcp_mstamp_refresh
0.12 ± 7% -0.0 0.09 ± 15% perf-profile.self.cycles-pp.sockfd_lookup_light
0.10 ± 4% -0.0 0.07 ± 12% perf-profile.self.cycles-pp.syscall_exit_to_user_mode_prepare
0.08 -0.0 0.05 ± 6% perf-profile.self.cycles-pp.__sock_wfree
0.14 ± 6% -0.0 0.11 ± 10% perf-profile.self.cycles-pp.__cond_resched
0.08 ± 10% -0.0 0.06 ± 10% perf-profile.self.cycles-pp.sock_do_ioctl
0.08 ± 16% +0.0 0.12 ± 13% perf-profile.self.cycles-pp.perf_tp_event
0.07 ± 11% +0.0 0.11 ± 15% perf-profile.self.cycles-pp.ip_copy_addrs
0.15 ± 9% +0.1 0.20 ± 10% perf-profile.self.cycles-pp.nr_iowait_cpu
0.03 ± 99% +0.1 0.09 ± 26% perf-profile.self.cycles-pp.__wake_up_common
0.00 +0.1 0.08 ± 28% perf-profile.self.cycles-pp.ttwu_queue_wakelist
0.08 ± 12% +0.1 0.17 ± 14% perf-profile.self.cycles-pp.available_idle_cpu
0.13 ± 11% +0.1 0.25 ± 18% perf-profile.self.cycles-pp.tcp_sendmsg
stress-ng.sock.ops_per_sec
11500 +-------------------------------------------------------------------+
|++.+++ +.+++++. + +.+++++.+++++.++++.++++ +++.++++.+++++.++|
11000 |-+ +.++++ + + +.++ |
10500 |-+ |
| |
10000 |-+ |
9500 |-+ O |
| O O O O O O O |
9000 |-+ |
8500 |O+ O |
| O O O O O O |
8000 |-+ O O O OO OO O OO O |
7500 |-+ O O |
| O |
7000 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-csl-2sp8: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
=========================================================================================
cluster/compiler/concurrency/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/testcase/ucode:
cs-localhost/gcc-9/8000/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2sp8/apachebench/0x5003006
commit:
2ea46c6fc9 ("cpumask/hotplug: Fix cpu_dying() state tracking")
5f94d1b650 ("sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt")
2ea46c6fc9452ac1 5f94d1b650d14e01f383e3d02e4
---------------- ---------------------------
%stddev %change %stddev
\ | \
35891 -5.6% 33871 apachebench.requests_per_second
28.96 +5.8% 30.64 apachebench.time.elapsed_time
28.96 +5.8% 30.64 apachebench.time.elapsed_time.max
23.00 +8.7% 25.00 ± 2% apachebench.time.percent_of_cpu_this_job_got
222.89 +6.0% 236.22 apachebench.time_per_request
385478 -5.6% 363787 apachebench.transfer_rate
1.67 ± 22% -0.5 1.17 mpstat.cpu.all.irq%
1.02 +0.1 1.12 mpstat.cpu.all.sys%
0.07 ± 50% -59.4% 0.03 ± 43% perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.06 ± 55% -50.8% 0.03 ± 42% perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
3037 ± 76% +195.7% 8981 ± 51% softirqs.CPU15.NET_RX
822.17 ±127% +1070.4% 9622 ±144% softirqs.CPU37.NET_RX
2388 ±102% +443.8% 12988 ± 58% softirqs.CPU67.NET_RX
11797 ± 4% +15.8% 13658 ± 4% softirqs.TIMER
12572413 ± 6% -16.9% 10443087 ± 5% cpuidle.C1.time
511677 ± 8% -12.7% 446730 ± 7% cpuidle.C1.usage
1.375e+09 ± 74% +110.9% 2.899e+09 cpuidle.C1E.time
4565230 ± 26% +46.7% 6695295 cpuidle.C1E.usage
97938 ± 27% -58.4% 40694 ± 39% cpuidle.POLL.usage
606.17 ±153% -98.5% 9.29 ±148% interrupts.CPU12.TLB:TLB_shootdowns
571.00 ±143% -96.7% 18.57 ±110% interrupts.CPU29.TLB:TLB_shootdowns
530.00 ±146% -91.1% 47.43 ±184% interrupts.CPU30.TLB:TLB_shootdowns
1657 ±190% -99.6% 6.86 ± 74% interrupts.CPU5.TLB:TLB_shootdowns
57.50 ± 67% -62.0% 21.86 ± 32% interrupts.CPU95.TLB:TLB_shootdowns
3736 ± 7% -32.5% 2521 ± 10% interrupts.RES:Rescheduling_interrupts
1792 ± 6% +11.4% 1996 turbostat.Bzy_MHz
501037 ± 8% -12.7% 437439 ± 7% turbostat.C1
0.41 ± 6% -0.1 0.32 ± 5% turbostat.C1%
4561770 ± 26% +46.7% 6690622 turbostat.C1E
1857351 ± 68% -98.2% 32984 ±139% turbostat.C6
105.14 +20.6% 126.78 turbostat.PkgWatt
1.397e+09 -4.4% 1.335e+09 perf-stat.i.branch-instructions
40257529 ± 15% -20.1% 32145859 ± 2% perf-stat.i.branch-misses
9.06 ± 57% +9.2 18.23 ± 5% perf-stat.i.cache-miss-rate%
3159646 ± 13% +223.7% 10228041 ± 11% perf-stat.i.cache-misses
4.25 ± 24% -31.2% 2.93 perf-stat.i.cpi
925.45 ± 40% -80.1% 184.12 ± 3% perf-stat.i.cpu-migrations
4096 ± 10% -68.6% 1284 ± 9% perf-stat.i.cycles-between-cache-misses
0.11 ± 68% -0.1 0.01 ± 3% perf-stat.i.dTLB-store-miss-rate%
9.43e+08 -3.9% 9.059e+08 perf-stat.i.dTLB-stores
6.644e+09 -4.3% 6.357e+09 perf-stat.i.instructions
12.04 ± 5% +22.6% 14.76 ± 9% perf-stat.i.major-faults
44.87 ± 2% -5.1% 42.58 perf-stat.i.metric.M/sec
42764 -4.5% 40848 perf-stat.i.minor-faults
70.15 ± 4% +15.5 85.61 perf-stat.i.node-load-miss-rate%
330592 ± 22% +926.0% 3391986 ± 14% perf-stat.i.node-load-misses
157887 ± 4% +36.3% 215209 ± 3% perf-stat.i.node-loads
56.69 ± 10% +24.5 81.14 perf-stat.i.node-store-miss-rate%
110581 ± 16% +786.3% 980040 ± 15% perf-stat.i.node-store-misses
120278 ± 7% -19.9% 96300 ± 5% perf-stat.i.node-stores
42776 -4.5% 40862 perf-stat.i.page-faults
3.14 ± 16% +8.9 12.04 ± 11% perf-stat.overall.cache-miss-rate%
3656 ± 8% -71.4% 1045 ± 14% perf-stat.overall.cycles-between-cache-misses
67.09 ± 6% +26.8 93.93 perf-stat.overall.node-load-miss-rate%
47.64 ± 10% +43.2 90.83 perf-stat.overall.node-store-miss-rate%
1.355e+09 -4.3% 1.296e+09 perf-stat.ps.branch-instructions
39024819 ± 15% -20.0% 31208731 ± 2% perf-stat.ps.branch-misses
3059351 ± 13% +224.2% 9917945 ± 11% perf-stat.ps.cache-misses
893.60 ± 40% -80.0% 178.29 ± 3% perf-stat.ps.cpu-migrations
9.144e+08 -3.9% 8.791e+08 perf-stat.ps.dTLB-stores
6.446e+09 -4.3% 6.17e+09 perf-stat.ps.instructions
11.65 ± 5% +22.9% 14.31 ± 9% perf-stat.ps.major-faults
41450 -4.4% 39628 perf-stat.ps.minor-faults
320695 ± 22% +925.8% 3289591 ± 14% perf-stat.ps.node-load-misses
152926 ± 4% +36.5% 208688 ± 4% perf-stat.ps.node-loads
107301 ± 16% +786.0% 950672 ± 15% perf-stat.ps.node-store-misses
116716 ± 7% -19.9% 93516 ± 5% perf-stat.ps.node-stores
41461 -4.4% 39642 perf-stat.ps.page-faults
2.80 ± 9% -0.6 2.19 ± 14% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
2.69 ± 10% -0.6 2.12 ± 14% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.68 ± 9% -0.4 0.33 ± 87% perf-profile.calltrace.cycles-pp.ret_from_fork
0.68 ± 9% -0.4 0.33 ± 87% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.83 ± 9% -0.1 0.69 ± 6% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.81 ± 9% -0.1 0.67 ± 6% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.49 ± 45% +0.2 0.68 ± 7% perf-profile.calltrace.cycles-pp.tcp_recvmsg_locked.tcp_recvmsg.inet6_recvmsg.sock_read_iter.new_sync_read
0.73 ± 10% +0.3 1.05 ± 6% perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
0.73 ± 10% +0.3 1.06 ± 6% perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.sock_read_iter
1.69 ± 7% +0.3 2.02 ± 6% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg
2.45 ± 6% +0.4 2.80 ± 4% perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
1.98 ± 6% +0.4 2.34 ± 6% perf-profile.calltrace.cycles-pp.copyin._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
2.46 ± 6% +0.4 2.82 ± 4% perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver
2.00 ± 6% +0.4 2.37 ± 6% perf-profile.calltrace.cycles-pp._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.sock_write_iter
0.52 ± 45% +0.4 0.88 ± 7% perf-profile.calltrace.cycles-pp._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg
0.49 ± 45% +0.4 0.86 ± 7% perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked
0.48 ± 45% +0.4 0.85 ± 8% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
1.51 ± 9% +0.4 1.90 ± 7% perf-profile.calltrace.cycles-pp.read
1.43 ± 9% +0.4 1.82 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
1.40 ± 10% +0.4 1.79 ± 7% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.41 ± 9% +0.4 1.80 ± 7% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.00 ± 10% +0.4 1.40 ± 6% perf-profile.calltrace.cycles-pp.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.sock_read_iter.new_sync_read
1.34 ± 10% +0.4 1.74 ± 8% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.11 ± 10% +0.4 1.51 ± 7% perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet_recvmsg.sock_read_iter.new_sync_read.vfs_read
1.14 ± 10% +0.4 1.54 ± 7% perf-profile.calltrace.cycles-pp.inet_recvmsg.sock_read_iter.new_sync_read.vfs_read.ksys_read
3.02 ± 7% +0.4 3.46 ± 4% perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog
2.95 ± 7% +0.4 3.40 ± 4% perf-profile.calltrace.cycles-pp.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv
3.00 ± 7% +0.4 3.44 ± 4% perf-profile.calltrace.cycles-pp.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core
0.20 ±141% +0.4 0.65 ± 12% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
3.34 ± 6% +0.4 3.79 ± 4% perf-profile.calltrace.cycles-pp.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq
3.35 ± 6% +0.4 3.80 ± 4% perf-profile.calltrace.cycles-pp.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip
3.03 ± 7% +0.4 3.48 ± 4% perf-profile.calltrace.cycles-pp.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll
3.70 ± 6% +0.5 4.15 ± 4% perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit.__tcp_transmit_skb
3.66 ± 6% +0.5 4.12 ± 4% perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit
3.13 ± 6% +0.5 3.59 ± 4% perf-profile.calltrace.cycles-pp.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action
3.23 ± 6% +0.5 3.69 ± 4% perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start
2.20 ± 7% +0.5 2.70 ± 6% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.13 ± 7% +0.5 2.63 ± 5% perf-profile.calltrace.cycles-pp.sock_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
1.79 ± 6% -0.2 1.55 ± 5% perf-profile.children.cycles-pp.__schedule
0.76 ± 10% -0.2 0.53 ± 15% perf-profile.children.cycles-pp.console_unlock
0.76 ± 10% -0.2 0.54 ± 14% perf-profile.children.cycles-pp.vprintk_emit
0.64 ± 9% -0.2 0.44 ± 14% perf-profile.children.cycles-pp.serial8250_console_write
0.61 ± 9% -0.2 0.41 ± 15% perf-profile.children.cycles-pp.uart_console_write
0.59 ± 9% -0.2 0.40 ± 14% perf-profile.children.cycles-pp.wait_for_xmitr
0.57 ± 10% -0.2 0.38 ± 15% perf-profile.children.cycles-pp.serial8250_console_putchar
0.50 ± 11% -0.2 0.34 ± 13% perf-profile.children.cycles-pp.io_serial_in
0.53 ± 16% -0.2 0.36 ± 23% perf-profile.children.cycles-pp.devkmsg_write.cold
0.53 ± 16% -0.2 0.36 ± 23% perf-profile.children.cycles-pp.devkmsg_emit
0.68 ± 9% -0.2 0.52 ± 14% perf-profile.children.cycles-pp.ret_from_fork
0.68 ± 9% -0.2 0.52 ± 14% perf-profile.children.cycles-pp.kthread
0.61 ± 11% -0.2 0.46 ± 17% perf-profile.children.cycles-pp.worker_thread
0.61 ± 11% -0.2 0.46 ± 18% perf-profile.children.cycles-pp.process_one_work
0.60 ± 11% -0.2 0.45 ± 19% perf-profile.children.cycles-pp.drm_fb_helper_damage_work
0.60 ± 11% -0.1 0.45 ± 19% perf-profile.children.cycles-pp.memcpy_toio
0.84 ± 8% -0.1 0.70 ± 6% perf-profile.children.cycles-pp.schedule_idle
1.25 ± 6% -0.1 1.12 ± 5% perf-profile.children.cycles-pp.schedule_hrtimeout_range_clock
0.18 ± 31% -0.1 0.09 ± 9% perf-profile.children.cycles-pp.rcu_idle_exit
0.59 ± 5% -0.1 0.52 ± 5% perf-profile.children.cycles-pp.unmap_page_range
0.35 ± 5% -0.1 0.28 ± 7% perf-profile.children.cycles-pp._raw_spin_lock
0.41 ± 5% -0.1 0.34 ± 7% perf-profile.children.cycles-pp.dequeue_task_fair
0.21 ± 11% -0.1 0.15 ± 13% perf-profile.children.cycles-pp.set_next_entity
0.65 ± 2% -0.1 0.58 ± 5% perf-profile.children.cycles-pp.unmap_vmas
0.13 ± 29% -0.1 0.07 ± 11% perf-profile.children.cycles-pp.rcu_eqs_exit
0.30 ± 7% -0.1 0.25 ± 6% perf-profile.children.cycles-pp.asm_sysvec_call_function
0.21 ± 14% -0.1 0.16 ± 8% perf-profile.children.cycles-pp.__libc_fork
0.20 ± 16% -0.1 0.15 ± 10% perf-profile.children.cycles-pp.__do_sys_clone
0.20 ± 16% -0.1 0.15 ± 10% perf-profile.children.cycles-pp.kernel_clone
0.18 ± 17% -0.1 0.13 ± 13% perf-profile.children.cycles-pp.dup_mmap
0.35 ± 9% -0.1 0.29 ± 8% perf-profile.children.cycles-pp.pick_next_task_fair
0.18 ± 16% -0.1 0.13 ± 14% perf-profile.children.cycles-pp.dup_mm
0.19 ± 15% -0.1 0.14 ± 9% perf-profile.children.cycles-pp.copy_process
0.12 ± 21% -0.0 0.07 ± 12% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.34 ± 7% -0.0 0.29 ± 7% perf-profile.children.cycles-pp.dequeue_entity
0.11 ± 20% -0.0 0.07 ± 13% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.16 ± 5% -0.0 0.12 ± 8% perf-profile.children.cycles-pp.__sysvec_call_function
0.23 ± 4% -0.0 0.19 ± 5% perf-profile.children.cycles-pp.sysvec_call_function
0.10 ± 12% -0.0 0.07 ± 17% perf-profile.children.cycles-pp.sk_filter_trim_cap
0.08 ± 27% -0.0 0.06 ± 13% perf-profile.children.cycles-pp._find_next_bit
0.07 ± 19% +0.0 0.10 ± 13% perf-profile.children.cycles-pp.skb_entail
0.16 ± 5% +0.0 0.19 ± 12% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.19 ± 9% +0.0 0.23 ± 7% perf-profile.children.cycles-pp.common_file_perm
0.07 ± 18% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.sk_page_frag_refill
0.07 ± 14% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.skb_page_frag_refill
0.10 ± 14% +0.1 0.15 ± 16% perf-profile.children.cycles-pp.__skb_clone
0.21 ± 13% +0.1 0.26 ± 7% perf-profile.children.cycles-pp.__inet_lookup_established
0.02 ±142% +0.1 0.07 ± 26% perf-profile.children.cycles-pp.lockref_get_not_dead
0.02 ±141% +0.1 0.07 ± 9% perf-profile.children.cycles-pp.tcp_ack_update_rtt
0.03 ±102% +0.1 0.09 ± 20% perf-profile.children.cycles-pp.cpuacct_charge
0.12 ± 18% +0.1 0.18 ± 5% perf-profile.children.cycles-pp.kfree
0.13 ± 12% +0.1 0.20 ± 9% perf-profile.children.cycles-pp.__slab_free
0.00 +0.1 0.07 ± 20% perf-profile.children.cycles-pp.llist_reverse_order
0.37 ± 8% +0.1 0.45 ± 10% perf-profile.children.cycles-pp.kmem_cache_free
0.01 ±223% +0.1 0.09 ± 16% perf-profile.children.cycles-pp.llist_add_batch
0.40 ± 8% +0.1 0.48 ± 9% perf-profile.children.cycles-pp.sk_stream_alloc_skb
0.15 ± 8% +0.1 0.23 ± 14% perf-profile.children.cycles-pp.rcu_do_batch
0.38 ± 8% +0.1 0.47 ± 8% perf-profile.children.cycles-pp.__alloc_skb
0.20 ± 6% +0.1 0.30 ± 12% perf-profile.children.cycles-pp.rcu_core
0.00 +0.1 0.12 ± 20% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.55 ± 5% +0.1 0.68 ± 9% perf-profile.children.cycles-pp.__kfree_skb
0.69 ± 4% +0.1 0.83 ± 6% perf-profile.children.cycles-pp.tcp_clean_rtx_queue
0.98 ± 5% +0.2 1.13 ± 5% perf-profile.children.cycles-pp.tcp_ack
0.77 ± 15% +0.2 0.97 ± 9% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.2 0.23 ± 18% perf-profile.children.cycles-pp.sched_ttwu_pending
1.30 ± 14% +0.3 1.59 ± 6% perf-profile.children.cycles-pp.ktime_get
2.39 ± 7% +0.3 2.69 ± 6% perf-profile.children.cycles-pp.copyin
0.69 ± 10% +0.3 1.00 ± 6% perf-profile.children.cycles-pp._copy_to_iter
0.65 ± 9% +0.3 0.96 ± 6% perf-profile.children.cycles-pp.copyout
0.00 +0.3 0.31 ± 20% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
2.40 ± 7% +0.3 2.71 ± 5% perf-profile.children.cycles-pp._copy_from_iter_full
0.98 ± 9% +0.4 1.34 ± 6% perf-profile.children.cycles-pp.__skb_datagram_iter
0.98 ± 9% +0.4 1.35 ± 6% perf-profile.children.cycles-pp.skb_copy_datagram_iter
1.54 ± 10% +0.4 1.92 ± 8% perf-profile.children.cycles-pp.read
1.14 ± 10% +0.4 1.54 ± 7% perf-profile.children.cycles-pp.inet_recvmsg
3.20 ± 7% +0.4 3.61 ± 3% perf-profile.children.cycles-pp.tcp_v4_rcv
3.25 ± 7% +0.4 3.66 ± 3% perf-profile.children.cycles-pp.ip_protocol_deliver_rcu
3.26 ± 7% +0.4 3.68 ± 3% perf-profile.children.cycles-pp.ip_local_deliver_finish
3.28 ± 7% +0.4 3.70 ± 3% perf-profile.children.cycles-pp.ip_local_deliver
3.51 ± 7% +0.4 3.93 ± 4% perf-profile.children.cycles-pp.__netif_receive_skb_one_core
3.39 ± 7% +0.4 3.82 ± 4% perf-profile.children.cycles-pp.ip_rcv
2.35 ± 7% +0.5 2.83 ± 6% perf-profile.children.cycles-pp.new_sync_read
2.69 ± 6% +0.5 3.17 ± 5% perf-profile.children.cycles-pp.vfs_read
2.79 ± 6% +0.5 3.27 ± 5% perf-profile.children.cycles-pp.ksys_read
2.26 ± 7% +0.5 2.75 ± 6% perf-profile.children.cycles-pp.sock_read_iter
1.98 ± 7% +0.5 2.47 ± 5% perf-profile.children.cycles-pp.tcp_recvmsg
1.67 ± 7% +0.5 2.17 ± 6% perf-profile.children.cycles-pp.tcp_recvmsg_locked
3.10 ± 7% +0.6 3.72 ± 6% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
9.78 ± 7% +1.0 10.78 ± 4% perf-profile.children.cycles-pp.sock_write_iter
9.71 ± 7% +1.0 10.72 ± 4% perf-profile.children.cycles-pp.sock_sendmsg
9.56 ± 7% +1.0 10.58 ± 4% perf-profile.children.cycles-pp.tcp_sendmsg
9.41 ± 7% +1.0 10.43 ± 4% perf-profile.children.cycles-pp.tcp_sendmsg_locked
0.50 ± 11% -0.2 0.34 ± 13% perf-profile.self.cycles-pp.io_serial_in
0.59 ± 11% -0.1 0.45 ± 19% perf-profile.self.cycles-pp.memcpy_toio
0.16 ± 15% -0.1 0.11 ± 16% perf-profile.self.cycles-pp.update_rq_clock
0.11 ± 14% -0.0 0.07 ± 15% perf-profile.self.cycles-pp.set_next_entity
0.32 ± 8% -0.0 0.28 ± 10% perf-profile.self.cycles-pp.__schedule
0.14 ± 11% -0.0 0.09 ± 12% perf-profile.self.cycles-pp.update_load_avg
0.29 ± 14% -0.0 0.25 ± 4% perf-profile.self.cycles-pp.native_sched_clock
0.17 ± 4% -0.0 0.15 ± 5% perf-profile.self.cycles-pp.apr_brigade_cleanup
0.10 ± 11% -0.0 0.08 ± 17% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.17 ± 8% -0.0 0.15 ± 7% perf-profile.self.cycles-pp.ap_process_request_internal
0.07 ± 9% -0.0 0.05 ± 9% perf-profile.self.cycles-pp._find_next_bit
0.16 ± 6% +0.0 0.19 ± 8% perf-profile.self.cycles-pp.common_file_perm
0.07 ± 19% +0.0 0.10 ± 10% perf-profile.self.cycles-pp.sock_def_readable
0.04 ± 71% +0.0 0.07 ± 10% perf-profile.self.cycles-pp.tcp_rcv_space_adjust
0.07 ± 11% +0.0 0.11 ± 13% perf-profile.self.cycles-pp.poll_schedule_timeout
0.16 ± 8% +0.0 0.20 ± 8% perf-profile.self.cycles-pp.__check_object_size
0.05 ± 46% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.skb_entail
0.07 ± 14% +0.0 0.11 ± 13% perf-profile.self.cycles-pp.skb_page_frag_refill
0.02 ±141% +0.0 0.06 ± 11% perf-profile.self.cycles-pp.__alloc_skb
0.07 ± 46% +0.1 0.12 ± 13% perf-profile.self.cycles-pp.__skb_clone
0.11 ± 14% +0.1 0.17 ± 14% perf-profile.self.cycles-pp.__ksize
0.12 ± 18% +0.1 0.17 ± 8% perf-profile.self.cycles-pp.kfree
0.02 ±142% +0.1 0.07 ± 26% perf-profile.self.cycles-pp.lockref_get_not_dead
0.18 ± 17% +0.1 0.24 ± 8% perf-profile.self.cycles-pp.__inet_lookup_established
0.00 +0.1 0.06 ± 14% perf-profile.self.cycles-pp.try_to_wake_up
0.03 ±102% +0.1 0.09 ± 20% perf-profile.self.cycles-pp.cpuacct_charge
0.01 ±223% +0.1 0.07 ± 15% perf-profile.self.cycles-pp.tcp_ack_update_rtt
0.20 ± 7% +0.1 0.26 ± 12% perf-profile.self.cycles-pp.kmem_cache_free
0.13 ± 12% +0.1 0.20 ± 9% perf-profile.self.cycles-pp.__slab_free
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.llist_reverse_order
0.12 ± 11% +0.1 0.20 ± 14% perf-profile.self.cycles-pp.tcp_rcv_established
0.01 ±223% +0.1 0.09 ± 16% perf-profile.self.cycles-pp.llist_add_batch
0.26 ± 14% +0.1 0.34 ± 3% perf-profile.self.cycles-pp.tcp_recvmsg_locked
0.30 ± 10% +0.1 0.41 ± 8% perf-profile.self.cycles-pp.tcp_sendmsg_locked
0.98 ± 18% +0.3 1.31 ± 8% perf-profile.self.cycles-pp.ktime_get
1.08 ± 18% +0.4 1.48 ± 9% perf-profile.self.cycles-pp.cpuidle_enter_state
1.23 ± 22% +0.5 1.77 ± 12% perf-profile.self.cycles-pp.menu_select
1.86 ± 7% +0.6 2.46 ± 7% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
***************************************************************************************************
lkp-csl-2ap3: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
cs-localhost/gcc-9/performance/ipv4/x86_64-rhel-8.3/1/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap3/UDP_STREAM/netperf/0x5003006
commit:
2ea46c6fc9 ("cpumask/hotplug: Fix cpu_dying() state tracking")
5f94d1b650 ("sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt")
2ea46c6fc9452ac1 5f94d1b650d14e01f383e3d02e4
---------------- ---------------------------
%stddev %change %stddev
\ | \
62544 -52.4% 29784 ± 15% netperf.Throughput_Mbps
62544 -52.4% 29784 ± 15% netperf.Throughput_total_Mbps
998.17 ± 5% -16.2% 836.00 ± 10% netperf.time.minor_page_faults
81.17 +9.1% 88.57 netperf.time.percent_of_cpu_this_job_got
242.61 +9.5% 265.75 netperf.time.system_time
35803969 -52.4% 17050380 ± 15% netperf.workload
0.19 ± 6% -0.0 0.14 ± 5% mpstat.cpu.all.soft%
0.04 -0.0 0.04 ± 2% mpstat.cpu.all.usr%
1568380 ±162% -88.6% 178452 ± 11% numa-numastat.node3.local_node
1640558 ±153% -84.3% 257114 numa-numastat.node3.numa_hit
1572364 ± 2% -14.6% 1342693 vmstat.memory.cache
237206 -51.8% 114337 ± 15% vmstat.system.cs
8917350 ± 19% +840.2% 83837786 ± 13% cpuidle.C1.time
1687718 ± 18% +734.3% 14080429 ± 11% cpuidle.C1.usage
72623238 -89.0% 8015009 ±119% cpuidle.POLL.time
33405975 -90.8% 3070283 ±139% cpuidle.POLL.usage
427986 ± 7% -49.9% 214579 ± 2% numa-meminfo.node3.Active
427986 ± 7% -49.9% 214579 ± 2% numa-meminfo.node3.Active(anon)
702896 ± 5% -32.6% 473640 ± 3% numa-meminfo.node3.FilePages
463902 ± 7% -49.1% 235917 ± 2% numa-meminfo.node3.Shmem
106952 ± 7% -49.8% 53679 ± 2% numa-vmstat.node3.nr_active_anon
175755 ± 5% -32.6% 118442 ± 3% numa-vmstat.node3.nr_file_pages
116007 ± 7% -49.1% 59011 ± 2% numa-vmstat.node3.nr_shmem
106952 ± 7% -49.8% 53679 ± 2% numa-vmstat.node3.nr_zone_active_anon
433337 ± 7% -49.0% 220837 ± 2% meminfo.Active
433337 ± 7% -49.0% 220837 ± 2% meminfo.Active(anon)
1452704 ± 2% -15.7% 1223905 meminfo.Cached
841896 ± 3% -26.8% 616582 meminfo.Committed_AS
45250 -9.6% 40886 ± 2% meminfo.Mapped
483539 ± 7% -47.3% 254634 ± 2% meminfo.Shmem
8213 ± 16% -27.5% 5953 ± 10% softirqs.CPU12.RCU
6993 ± 23% -30.1% 4890 ± 7% softirqs.CPU124.RCU
8490 ± 15% -33.8% 5623 ± 15% softirqs.CPU190.RCU
8545 ± 14% -29.9% 5987 ± 12% softirqs.CPU191.RCU
8207 ± 14% -31.6% 5614 ± 11% softirqs.CPU45.RCU
8880 ± 16% -32.9% 5962 ± 12% softirqs.CPU51.RCU
8895 ± 11% -34.1% 5859 ± 14% softirqs.CPU83.RCU
8708 ± 20% -35.2% 5639 ± 12% softirqs.CPU95.RCU
17915040 -52.4% 8533057 ± 15% softirqs.NET_RX
1447713 ± 7% -26.6% 1062814 ± 5% softirqs.RCU
107830 ± 7% -48.8% 55213 ± 2% proc-vmstat.nr_active_anon
362869 ± 2% -15.7% 305817 proc-vmstat.nr_file_pages
79042 -5.3% 74879 proc-vmstat.nr_inactive_anon
11592 -11.7% 10240 ± 2% proc-vmstat.nr_mapped
120578 ± 7% -47.3% 63500 ± 2% proc-vmstat.nr_shmem
107830 ± 7% -48.8% 55213 ± 2% proc-vmstat.nr_zone_active_anon
79042 -5.3% 74879 proc-vmstat.nr_zone_inactive_anon
37297024 -50.6% 18413448 ± 14% proc-vmstat.numa_hit
37037152 -51.0% 18153607 ± 14% proc-vmstat.numa_local
291363 ± 7% -50.3% 144772 ± 2% proc-vmstat.pgactivate
1.147e+09 -52.3% 5.468e+08 ± 15% proc-vmstat.pgalloc_normal
1.147e+09 -52.3% 5.468e+08 ± 15% proc-vmstat.pgfree
0.02 ± 27% -71.7% 0.01 ± 6% perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.02 ± 21% -72.3% 0.00 ± 14% perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.01 ± 24% -54.2% 0.00 ± 13% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.02 ± 26% -53.5% 0.01 ± 37% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.02 ± 47% -63.7% 0.01 ± 23% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
0.00 ±223% +2300.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
0.01 ± 11% -31.2% 0.01 ± 11% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
0.03 ± 44% -70.5% 0.01 ± 13% perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.03 ± 27% -67.1% 0.01 ± 53% perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.04 ± 89% -75.9% 0.01 ± 10% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
0.10 ± 57% -59.0% 0.04 ±104% perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.04 ± 62% -74.1% 0.01 ± 35% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
0.03 ± 52% -55.0% 0.02 ± 33% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
0.04 ± 65% +9892.1% 4.00 ± 93% perf-sched.sch_delay.max.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
0.00 ±223% +2300.0% 0.00 perf-sched.total_sch_delay.average.ms
2.53 ± 8% +97.1% 5.00 ± 2% perf-sched.total_wait_and_delay.average.ms
950222 ± 6% -50.3% 472509 perf-sched.total_wait_and_delay.count.ms
2.53 ± 8% +97.0% 4.99 ± 2% perf-sched.total_wait_time.average.ms
4.28 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
32.30 ± 8% +96.7% 63.54 ± 3% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
0.00 ± 17% +394.5% 0.01 ± 4% perf-sched.wait_and_delay.avg.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
2112 -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
556.83 ± 6% -47.8% 290.43 ± 2% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
938377 ± 6% -50.9% 460959 perf-sched.wait_and_delay.count.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
0.07 ± 16% +5743.6% 4.10 ± 90% perf-sched.wait_and_delay.max.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
129.86 ± 73% -63.7% 47.15 ±135% perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.75 ± 2% -9.7% 0.68 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
32.30 ± 8% +96.7% 63.54 ± 3% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
0.00 ± 17% +209.9% 0.01 ± 6% perf-sched.wait_time.avg.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
8.343e+08 -32.4% 5.641e+08 ± 7% perf-stat.i.branch-instructions
1.84 ± 24% +58.5 60.31 ± 27% perf-stat.i.cache-miss-rate%
4879389 ± 5% +1618.4% 83848142 ± 13% perf-stat.i.cache-misses
2.849e+08 ± 15% -44.8% 1.573e+08 ± 25% perf-stat.i.cache-references
238858 -51.8% 115114 ± 15% perf-stat.i.context-switches
3.79 ± 3% +67.9% 6.37 ± 5% perf-stat.i.cpi
1.499e+10 ± 3% +10.2% 1.652e+10 ± 2% perf-stat.i.cpu-cycles
3190 ± 8% -86.5% 430.75 ± 96% perf-stat.i.cycles-between-cache-misses
1.106e+09 -31.9% 7.533e+08 ± 7% perf-stat.i.dTLB-loads
6.313e+08 -33.1% 4.222e+08 ± 8% perf-stat.i.dTLB-stores
67.75 ± 6% -13.5 54.27 ± 7% perf-stat.i.iTLB-load-miss-rate%
6844433 -34.7% 4466144 ± 7% perf-stat.i.iTLB-load-misses
4.07e+09 -32.3% 2.757e+09 ± 7% perf-stat.i.instructions
0.27 ± 3% -38.6% 0.17 ± 8% perf-stat.i.ipc
0.08 ± 3% +10.2% 0.09 ± 2% perf-stat.i.metric.GHz
14.90 -31.8% 10.16 ± 7% perf-stat.i.metric.M/sec
88.72 ± 3% +9.7 98.42 ± 2% perf-stat.i.node-load-miss-rate%
390842 ± 54% +7043.8% 27921170 ± 13% perf-stat.i.node-load-misses
53149 ± 20% +140.2% 127659 ± 20% perf-stat.i.node-loads
84.06 ± 10% +13.9 97.91 ± 3% perf-stat.i.node-store-miss-rate%
108685 ± 5% +20162.1% 22022000 ± 13% perf-stat.i.node-store-misses
1.78 ± 23% +55.8 57.59 ± 30% perf-stat.overall.cache-miss-rate%
3.68 ± 3% +63.4% 6.02 ± 6% perf-stat.overall.cpi
3086 ± 8% -93.5% 201.38 ± 16% perf-stat.overall.cycles-between-cache-misses
67.82 ± 6% -13.3 54.47 ± 8% perf-stat.overall.iTLB-load-miss-rate%
0.27 ± 3% -38.6% 0.17 ± 7% perf-stat.overall.ipc
86.32 ± 4% +13.2 99.55 perf-stat.overall.node-load-miss-rate%
79.67 ± 11% +20.2 99.91 perf-stat.overall.node-store-miss-rate%
34248 +43.8% 49254 ± 8% perf-stat.overall.path-length
8.317e+08 -32.4% 5.625e+08 ± 7% perf-stat.ps.branch-instructions
4863939 ± 5% +1617.8% 83552434 ± 13% perf-stat.ps.cache-misses
2.839e+08 ± 15% -44.8% 1.568e+08 ± 25% perf-stat.ps.cache-references
238042 -51.8% 114736 ± 15% perf-stat.ps.context-switches
1.494e+10 ± 3% +10.2% 1.646e+10 ± 2% perf-stat.ps.cpu-cycles
1.102e+09 -31.9% 7.51e+08 ± 7% perf-stat.ps.dTLB-loads
6.293e+08 -33.1% 4.209e+08 ± 8% perf-stat.ps.dTLB-stores
6821496 -34.7% 4451472 ± 7% perf-stat.ps.iTLB-load-misses
4.057e+09 -32.3% 2.749e+09 ± 7% perf-stat.ps.instructions
389775 ± 54% +7038.0% 27822357 ± 13% perf-stat.ps.node-load-misses
53006 ± 20% +140.1% 127264 ± 20% perf-stat.ps.node-loads
108429 ± 5% +20137.9% 21943845 ± 13% perf-stat.ps.node-store-misses
1.226e+12 -32.3% 8.298e+11 ± 7% perf-stat.total.instructions
838643 ± 8% -67.5% 272512 ± 11% interrupts.CAL:Function_call_interrupts
67.33 ± 32% -60.3% 26.71 ± 36% interrupts.CPU0.RES:Rescheduling_interrupts
108.33 ± 36% +93.3% 209.43 ± 15% interrupts.CPU1.NMI:Non-maskable_interrupts
108.33 ± 36% +93.3% 209.43 ± 15% interrupts.CPU1.PMI:Performance_monitoring_interrupts
114.33 ± 46% +84.8% 211.29 ± 15% interrupts.CPU10.NMI:Non-maskable_interrupts
114.33 ± 46% +84.8% 211.29 ± 15% interrupts.CPU10.PMI:Performance_monitoring_interrupts
102.00 ± 18% +200.4% 306.43 ± 83% interrupts.CPU100.NMI:Non-maskable_interrupts
102.00 ± 18% +200.4% 306.43 ± 83% interrupts.CPU100.PMI:Performance_monitoring_interrupts
117.50 ± 22% +648.2% 879.14 ±122% interrupts.CPU101.NMI:Non-maskable_interrupts
117.50 ± 22% +648.2% 879.14 ±122% interrupts.CPU101.PMI:Performance_monitoring_interrupts
117.00 ± 24% +67.2% 195.57 ± 20% interrupts.CPU103.NMI:Non-maskable_interrupts
117.00 ± 24% +67.2% 195.57 ± 20% interrupts.CPU103.PMI:Performance_monitoring_interrupts
98.83 ± 32% +799.3% 888.86 ±184% interrupts.CPU104.NMI:Non-maskable_interrupts
98.83 ± 32% +799.3% 888.86 ±184% interrupts.CPU104.PMI:Performance_monitoring_interrupts
111.50 ± 23% +87.7% 209.29 ± 16% interrupts.CPU105.NMI:Non-maskable_interrupts
111.50 ± 23% +87.7% 209.29 ± 16% interrupts.CPU105.PMI:Performance_monitoring_interrupts
104.83 ± 34% +565.5% 697.71 ±133% interrupts.CPU112.NMI:Non-maskable_interrupts
104.83 ± 34% +565.5% 697.71 ±133% interrupts.CPU112.PMI:Performance_monitoring_interrupts
104.50 ± 36% +64.6% 172.00 ± 24% interrupts.CPU129.NMI:Non-maskable_interrupts
104.50 ± 36% +64.6% 172.00 ± 24% interrupts.CPU129.PMI:Performance_monitoring_interrupts
113.17 ± 25% +142.8% 274.71 ± 91% interrupts.CPU142.NMI:Non-maskable_interrupts
113.17 ± 25% +142.8% 274.71 ± 91% interrupts.CPU142.PMI:Performance_monitoring_interrupts
222.33 ±214% -98.2% 4.00 ± 61% interrupts.CPU145.RES:Rescheduling_interrupts
119.33 ± 32% +1189.9% 1539 ±187% interrupts.CPU16.NMI:Non-maskable_interrupts
119.33 ± 32% +1189.9% 1539 ±187% interrupts.CPU16.PMI:Performance_monitoring_interrupts
624.33 ±171% -82.9% 106.71 ± 24% interrupts.CPU163.NMI:Non-maskable_interrupts
624.33 ±171% -82.9% 106.71 ± 24% interrupts.CPU163.PMI:Performance_monitoring_interrupts
128.50 ± 18% +40.0% 179.86 ± 24% interrupts.CPU2.NMI:Non-maskable_interrupts
128.50 ± 18% +40.0% 179.86 ± 24% interrupts.CPU2.PMI:Performance_monitoring_interrupts
388.33 ±210% -97.8% 8.71 ± 62% interrupts.CPU3.RES:Rescheduling_interrupts
94.50 ± 37% +102.0% 190.86 ± 24% interrupts.CPU4.NMI:Non-maskable_interrupts
94.50 ± 37% +102.0% 190.86 ± 24% interrupts.CPU4.PMI:Performance_monitoring_interrupts
105.00 ± 32% +1495.6% 1675 ±173% interrupts.CPU5.NMI:Non-maskable_interrupts
105.00 ± 32% +1495.6% 1675 ±173% interrupts.CPU5.PMI:Performance_monitoring_interrupts
119.83 ± 35% +60.6% 192.43 ± 21% interrupts.CPU6.NMI:Non-maskable_interrupts
119.83 ± 35% +60.6% 192.43 ± 21% interrupts.CPU6.PMI:Performance_monitoring_interrupts
95.00 ± 41% +968.4% 1015 ±200% interrupts.CPU8.NMI:Non-maskable_interrupts
95.00 ± 41% +968.4% 1015 ±200% interrupts.CPU8.PMI:Performance_monitoring_interrupts
97.50 ± 48% +96.9% 192.00 ± 22% interrupts.CPU9.NMI:Non-maskable_interrupts
97.50 ± 48% +96.9% 192.00 ± 22% interrupts.CPU9.PMI:Performance_monitoring_interrupts
123.17 ± 29% +71.1% 210.71 ± 12% interrupts.CPU96.NMI:Non-maskable_interrupts
123.17 ± 29% +71.1% 210.71 ± 12% interrupts.CPU96.PMI:Performance_monitoring_interrupts
102.17 ± 17% +111.0% 215.57 ± 16% interrupts.CPU98.NMI:Non-maskable_interrupts
102.17 ± 17% +111.0% 215.57 ± 16% interrupts.CPU98.PMI:Performance_monitoring_interrupts
105.67 ± 24% +285.6% 407.43 ±118% interrupts.CPU99.NMI:Non-maskable_interrupts
105.67 ± 24% +285.6% 407.43 ±118% interrupts.CPU99.PMI:Performance_monitoring_interrupts
576.33 ± 7% -50.7% 284.14 ± 2% interrupts.IWI:IRQ_work_interrupts
30.77 -5.4 25.32 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
30.57 -5.3 25.23 ± 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.27 ± 3% -2.8 8.43 ± 21% perf-profile.calltrace.cycles-pp.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.24 ± 3% -2.8 8.42 ± 21% perf-profile.calltrace.cycles-pp.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.14 -2.4 16.71 ± 7% perf-profile.calltrace.cycles-pp.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.10 -2.4 16.70 ± 7% perf-profile.calltrace.cycles-pp.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.14 ± 11% -2.3 1.81 ± 36% perf-profile.calltrace.cycles-pp.udp_send_skb.udp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
4.06 ± 11% -2.3 1.79 ± 36% perf-profile.calltrace.cycles-pp.ip_send_skb.udp_send_skb.udp_sendmsg.sock_sendmsg.__sys_sendto
3.99 ± 11% -2.2 1.76 ± 36% perf-profile.calltrace.cycles-pp.ip_output.ip_send_skb.udp_send_skb.udp_sendmsg.sock_sendmsg
3.85 ± 12% -2.1 1.70 ± 36% perf-profile.calltrace.cycles-pp.ip_finish_output2.ip_output.ip_send_skb.udp_send_skb.udp_sendmsg
3.46 ± 13% -1.9 1.55 ± 36% perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip_finish_output2.ip_output.ip_send_skb.udp_send_skb
3.42 ± 13% -1.9 1.54 ± 36% perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output.ip_send_skb
3.38 ± 13% -1.9 1.52 ± 36% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output
3.23 ± 13% -1.8 1.48 ± 36% perf-profile.calltrace.cycles-pp.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip_finish_output2
3.14 ± 13% -1.7 1.44 ± 36% perf-profile.calltrace.cycles-pp.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip
3.11 ± 13% -1.7 1.44 ± 36% perf-profile.calltrace.cycles-pp.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq
3.01 ± 14% -1.6 1.39 ± 36% perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start
2.93 ± 14% -1.6 1.37 ± 36% perf-profile.calltrace.cycles-pp.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action
2.87 ± 14% -1.5 1.35 ± 36% perf-profile.calltrace.cycles-pp.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll
2.86 ± 14% -1.5 1.34 ± 36% perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog
2.85 ± 14% -1.5 1.34 ± 36% perf-profile.calltrace.cycles-pp.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core
2.82 ± 14% -1.5 1.33 ± 36% perf-profile.calltrace.cycles-pp.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv
2.00 ± 18% -1.4 0.58 ± 87% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable
2.68 ± 15% -1.4 1.29 ± 36% perf-profile.calltrace.cycles-pp.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver
2.65 ± 15% -1.4 1.27 ± 36% perf-profile.calltrace.cycles-pp.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
2.01 ± 18% -1.4 0.65 ± 69% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb
2.26 ± 17% -1.3 0.93 ± 36% perf-profile.calltrace.cycles-pp.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv
2.06 ± 18% -1.3 0.76 ± 69% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb
2.21 ± 17% -1.3 0.92 ± 36% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb
2.40 ± 16% -1.3 1.13 ± 38% perf-profile.calltrace.cycles-pp.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu
1.31 ± 5% -1.0 0.33 ± 87% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
1.29 ± 5% -1.0 0.33 ± 87% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp
2.14 ± 4% -1.0 1.18 ± 19% perf-profile.calltrace.cycles-pp.__skb_recv_udp.udp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
1.35 ± 5% -0.9 0.48 ± 64% perf-profile.calltrace.cycles-pp.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg.inet_recvmsg
1.71 ± 4% -0.8 0.89 ± 19% perf-profile.calltrace.cycles-pp.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg.inet_recvmsg.__sys_recvfrom
0.84 ± 7% -0.4 0.40 ± 88% perf-profile.calltrace.cycles-pp.__consume_stateless_skb.udp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
0.51 ± 44% +0.7 1.19 ± 36% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
16.97 ± 10% +3.7 20.65 ± 13% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
67.79 +5.8 73.59 ± 2% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
67.79 +5.8 73.60 ± 2% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
67.73 +5.8 73.55 ± 2% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
68.07 +5.9 74.00 ± 2% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
7.05 ± 46% +6.8 13.84 ± 14% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
31.00 -5.5 25.53 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
30.80 -5.4 25.44 ± 7% perf-profile.children.cycles-pp.do_syscall_64
4.59 ± 4% -4.5 0.07 ± 21% perf-profile.children.cycles-pp.poll_idle
11.28 ± 3% -2.8 8.43 ± 21% perf-profile.children.cycles-pp.__x64_sys_recvfrom
11.24 ± 3% -2.8 8.42 ± 21% perf-profile.children.cycles-pp.__sys_recvfrom
19.14 -2.4 16.71 ± 7% perf-profile.children.cycles-pp.__x64_sys_sendto
19.11 -2.4 16.70 ± 7% perf-profile.children.cycles-pp.__sys_sendto
4.14 ± 11% -2.3 1.81 ± 36% perf-profile.children.cycles-pp.udp_send_skb
4.06 ± 11% -2.3 1.79 ± 36% perf-profile.children.cycles-pp.ip_send_skb
4.00 ± 11% -2.2 1.76 ± 36% perf-profile.children.cycles-pp.ip_output
3.85 ± 12% -2.2 1.70 ± 36% perf-profile.children.cycles-pp.ip_finish_output2
3.48 ± 13% -1.9 1.55 ± 36% perf-profile.children.cycles-pp.__local_bh_enable_ip
3.42 ± 13% -1.9 1.54 ± 36% perf-profile.children.cycles-pp.do_softirq
3.24 ± 13% -1.8 1.48 ± 36% perf-profile.children.cycles-pp.net_rx_action
2.46 ± 2% -1.7 0.73 ± 22% perf-profile.children.cycles-pp.__schedule
4.89 ± 9% -1.7 3.18 ± 19% perf-profile.children.cycles-pp.__softirqentry_text_start
3.14 ± 13% -1.7 1.45 ± 36% perf-profile.children.cycles-pp.__napi_poll
3.11 ± 13% -1.7 1.44 ± 36% perf-profile.children.cycles-pp.process_backlog
3.01 ± 14% -1.6 1.39 ± 36% perf-profile.children.cycles-pp.__netif_receive_skb_one_core
2.93 ± 14% -1.6 1.37 ± 36% perf-profile.children.cycles-pp.ip_rcv
2.87 ± 14% -1.5 1.35 ± 36% perf-profile.children.cycles-pp.ip_local_deliver
2.86 ± 14% -1.5 1.34 ± 36% perf-profile.children.cycles-pp.ip_local_deliver_finish
2.85 ± 14% -1.5 1.34 ± 36% perf-profile.children.cycles-pp.ip_protocol_deliver_rcu
2.82 ± 14% -1.5 1.33 ± 36% perf-profile.children.cycles-pp.__udp4_lib_rcv
2.68 ± 15% -1.4 1.29 ± 36% perf-profile.children.cycles-pp.udp_unicast_rcv_skb
2.65 ± 15% -1.4 1.27 ± 36% perf-profile.children.cycles-pp.udp_queue_rcv_one_skb
2.26 ± 17% -1.3 0.93 ± 36% perf-profile.children.cycles-pp.sock_def_readable
2.22 ± 17% -1.3 0.93 ± 36% perf-profile.children.cycles-pp.__wake_up_common_lock
2.40 ± 15% -1.3 1.13 ± 38% perf-profile.children.cycles-pp.__udp_enqueue_schedule_skb
2.02 ± 17% -1.2 0.78 ± 38% perf-profile.children.cycles-pp.autoremove_wake_function
2.02 ± 18% -1.2 0.78 ± 38% perf-profile.children.cycles-pp.try_to_wake_up
2.07 ± 18% -1.2 0.91 ± 37% perf-profile.children.cycles-pp.__wake_up_common
2.14 ± 4% -1.0 1.18 ± 19% perf-profile.children.cycles-pp.__skb_recv_udp
1.12 ± 3% -0.9 0.20 ± 36% perf-profile.children.cycles-pp.schedule_idle
1.37 ± 5% -0.8 0.54 ± 20% perf-profile.children.cycles-pp.schedule
1.71 ± 4% -0.8 0.89 ± 19% perf-profile.children.cycles-pp.__skb_wait_for_more_packets
1.36 ± 4% -0.7 0.61 ± 19% perf-profile.children.cycles-pp.schedule_timeout
0.83 ± 18% -0.6 0.19 ± 34% perf-profile.children.cycles-pp.ttwu_do_activate
0.82 ± 19% -0.6 0.18 ± 36% perf-profile.children.cycles-pp.enqueue_task_fair
0.60 ± 3% -0.5 0.07 ± 9% perf-profile.children.cycles-pp.pick_next_task_fair
0.66 ± 19% -0.5 0.17 ± 39% perf-profile.children.cycles-pp.enqueue_entity
0.80 ± 17% -0.5 0.32 ± 20% perf-profile.children.cycles-pp.update_rq_clock
0.72 ± 6% -0.4 0.32 ± 21% perf-profile.children.cycles-pp.dequeue_task_fair
0.39 ± 5% -0.3 0.09 ± 20% perf-profile.children.cycles-pp.update_load_avg
0.82 ± 17% -0.3 0.52 ± 18% perf-profile.children.cycles-pp.sched_clock_cpu
0.59 ± 5% -0.3 0.29 ± 22% perf-profile.children.cycles-pp.dequeue_entity
0.72 ± 15% -0.3 0.45 ± 16% perf-profile.children.cycles-pp.sched_clock
0.69 ± 15% -0.3 0.42 ± 17% perf-profile.children.cycles-pp.native_sched_clock
0.84 ± 7% -0.3 0.57 ± 27% perf-profile.children.cycles-pp.__consume_stateless_skb
0.29 ± 6% -0.3 0.03 ± 88% perf-profile.children.cycles-pp.kmem_cache_free
0.61 ± 7% -0.2 0.40 ± 29% perf-profile.children.cycles-pp.__free_pages_ok
0.30 ± 24% -0.2 0.10 ± 37% perf-profile.children.cycles-pp.ip_route_output_flow
0.73 ± 7% -0.2 0.53 ± 9% perf-profile.children.cycles-pp._raw_spin_lock
0.29 ± 23% -0.2 0.10 ± 37% perf-profile.children.cycles-pp.ip_route_output_key_hash
0.30 ± 16% -0.2 0.12 ± 36% perf-profile.children.cycles-pp.__dev_queue_xmit
0.27 ± 23% -0.2 0.09 ± 37% perf-profile.children.cycles-pp.ip_route_output_key_hash_rcu
0.21 ± 21% -0.1 0.07 ± 51% perf-profile.children.cycles-pp.fib_table_lookup
0.64 ± 7% -0.1 0.50 ± 10% perf-profile.children.cycles-pp.read_tsc
0.21 ± 13% -0.1 0.07 ± 48% perf-profile.children.cycles-pp.dev_hard_start_xmit
0.19 ± 11% -0.1 0.06 ± 49% perf-profile.children.cycles-pp.__switch_to
0.19 ± 11% -0.1 0.07 ± 49% perf-profile.children.cycles-pp.loopback_xmit
0.40 ± 18% -0.1 0.28 ± 17% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.17 ± 10% -0.1 0.05 ± 42% perf-profile.children.cycles-pp.__switch_to_asm
0.19 ± 6% -0.1 0.08 ± 29% perf-profile.children.cycles-pp.move_addr_to_user
0.16 ± 7% -0.1 0.07 ± 12% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.13 ± 24% -0.1 0.06 ± 72% perf-profile.children.cycles-pp.__free_one_page
0.13 ± 20% -0.1 0.06 ± 46% perf-profile.children.cycles-pp.sockfd_lookup_light
0.12 ± 19% -0.1 0.05 ± 69% perf-profile.children.cycles-pp.__fget_light
0.18 ± 9% -0.1 0.11 ± 18% perf-profile.children.cycles-pp.call_cpuidle
0.18 ± 9% -0.1 0.13 ± 12% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.09 ± 11% -0.0 0.04 ± 67% perf-profile.children.cycles-pp.udp_rmem_release
0.09 ± 10% -0.0 0.05 ± 46% perf-profile.children.cycles-pp.trigger_load_balance
0.05 ± 47% +0.1 0.16 ± 21% perf-profile.children.cycles-pp.cpuacct_charge
0.00 +0.1 0.11 ± 26% perf-profile.children.cycles-pp.llist_reverse_order
0.00 +0.2 0.16 ± 13% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.00 +0.3 0.27 ± 59% perf-profile.children.cycles-pp.__smp_call_single_queue
0.00 +0.3 0.27 ± 59% perf-profile.children.cycles-pp.llist_add_batch
0.00 +0.4 0.36 ± 20% perf-profile.children.cycles-pp.sched_ttwu_pending
0.00 +0.4 0.38 ± 42% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.00 +0.5 0.55 ± 14% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
2.35 ± 25% +1.1 3.48 ± 19% perf-profile.children.cycles-pp.ktime_get
14.90 ± 4% +2.3 17.23 ± 10% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
67.79 +5.8 73.60 ± 2% perf-profile.children.cycles-pp.start_secondary
68.07 +5.9 74.00 ± 2% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
68.07 +5.9 74.00 ± 2% perf-profile.children.cycles-pp.cpu_startup_entry
68.06 +5.9 73.99 ± 2% perf-profile.children.cycles-pp.do_idle
7.12 ± 45% +6.8 13.97 ± 14% perf-profile.children.cycles-pp.menu_select
4.54 ± 4% -4.5 0.05 ± 65% perf-profile.self.cycles-pp.poll_idle
0.46 ± 20% -0.4 0.05 ± 66% perf-profile.self.cycles-pp.update_rq_clock
0.47 ± 9% -0.4 0.09 ± 18% perf-profile.self.cycles-pp.__schedule
0.65 ± 15% -0.3 0.39 ± 15% perf-profile.self.cycles-pp.native_sched_clock
0.55 ± 18% -0.2 0.32 ± 20% perf-profile.self.cycles-pp.do_idle
0.32 ± 25% -0.2 0.08 ± 30% perf-profile.self.cycles-pp.enqueue_entity
0.71 ± 7% -0.2 0.52 ± 9% perf-profile.self.cycles-pp._raw_spin_lock
0.35 ± 7% -0.2 0.18 ± 32% perf-profile.self.cycles-pp.__free_pages_ok
0.62 ± 7% -0.1 0.49 ± 10% perf-profile.self.cycles-pp.read_tsc
0.17 ± 9% -0.1 0.05 ± 70% perf-profile.self.cycles-pp.__switch_to
0.16 ± 22% -0.1 0.04 ± 89% perf-profile.self.cycles-pp.fib_table_lookup
0.16 ± 9% -0.1 0.05 ± 43% perf-profile.self.cycles-pp.__switch_to_asm
0.15 ± 8% -0.1 0.06 ± 18% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.12 ± 19% -0.1 0.05 ± 69% perf-profile.self.cycles-pp.__fget_light
0.16 ± 10% -0.1 0.11 ± 15% perf-profile.self.cycles-pp.call_cpuidle
0.12 ± 9% -0.1 0.06 ± 23% perf-profile.self.cycles-pp.__softirqentry_text_start
0.06 ± 52% +0.0 0.11 ± 30% perf-profile.self.cycles-pp.udp_queue_rcv_one_skb
0.00 +0.1 0.08 ± 24% perf-profile.self.cycles-pp.ttwu_queue_wakelist
0.02 ±142% +0.1 0.12 ± 30% perf-profile.self.cycles-pp.__wake_up_common
0.00 +0.1 0.11 ± 28% perf-profile.self.cycles-pp.sched_ttwu_pending
0.05 ± 47% +0.1 0.16 ± 21% perf-profile.self.cycles-pp.cpuacct_charge
0.00 +0.1 0.11 ± 26% perf-profile.self.cycles-pp.llist_reverse_order
0.00 +0.3 0.27 ± 59% perf-profile.self.cycles-pp.llist_add_batch
1.78 ± 35% +1.3 3.05 ± 23% perf-profile.self.cycles-pp.ktime_get
5.18 ± 60% +6.1 11.25 ± 19% perf-profile.self.cycles-pp.cpuidle_enter_state
4.81 ± 62% +6.7 11.54 ± 19% perf-profile.self.cycles-pp.menu_select
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
Thanks,
Oliver Sang
Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
by Yunsheng Lin 30 Apr '21
On 2021/4/21 17:25, Yunsheng Lin wrote:
> On 2021/4/21 16:44, Michal Kubecek wrote:
>
>>
>> I'll try running some tests also on other architectures, including arm64
>> and s390x (to catch potential endianness issues).
I tried debugging nperf on arm64 with the patch below:
diff --git a/client/main.c b/client/main.c
index 429634d..de1a3ef 100644
--- a/client/main.c
+++ b/client/main.c
@@ -63,7 +63,10 @@ static int client_init(void)
ret = client_set_usr1_handler();
if (ret < 0)
return ret;
- return ignore_signal(SIGPIPE);
+ //return ignore_signal(SIGPIPE);
+ signal(SIGPIPE, SIG_IGN);
+
+ return 0;
}
static int ctrl_send_start(struct client_config *config)
diff --git a/client/worker.c b/client/worker.c
index ac026893..d269311 100644
--- a/client/worker.c
+++ b/client/worker.c
@@ -7,7 +7,7 @@
#include "worker.h"
#include "main.h"
-#define WORKER_STACK_SIZE 16384
+#define WORKER_STACK_SIZE 131072
struct client_worker_data *workers_data;
union sockaddr_any test_addr;
It gave the following error output:
../nperf/nperf -H 127.0.0.1 -l 3 -i 1 --exact -t TCP_STREAM -M 1
server: 127.0.0.1, port 12543
iterations: 1, threads: 1, test length: 3
test: TCP_STREAM, message size: 1048576
run test begin
send begin
send done: -32
failed to receive server stats
*** Iteration 1 failed, quitting. ***
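For reference: -32 here looks like -EPIPE (EPIPE is 32 on Linux). As the tcpdump trace below shows, the server closes the connection first, so a later write hits a peer-closed socket; with SIGPIPE ignored, send() fails with EPIPE instead of the signal killing the worker. A minimal generic sketch of the two usual ways to suppress SIGPIPE on socket writes (an illustration, not nperf's actual code):
#include <signal.h>
#include <sys/socket.h>
/* Option 1: ignore SIGPIPE process-wide, as the debugging patch above
 * does; a write to a peer-closed socket then returns -1 with
 * errno == EPIPE instead of raising the signal. */
static void ignore_sigpipe(void)
{
	signal(SIGPIPE, SIG_IGN);
}
/* Option 2: suppress SIGPIPE per call with MSG_NOSIGNAL, leaving the
 * process-wide signal disposition untouched. */
static ssize_t send_nosig(int fd, const void *buf, size_t len)
{
	return send(fd, buf, len, MSG_NOSIGNAL);
}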
tcpdump shows the following output:
09:55:12.253341 IP localhost.53080 > localhost.12543: Flags [S], seq 3954442980, win 65495, options [mss 65495,sackOK,TS val 3268837738 ecr 0,nop,wscale 7], length 0
09:55:12.253363 IP localhost.12543 > localhost.53080: Flags [S.], seq 4240541653, ack 3954442981, win 65483, options [mss 65495,sackOK,TS val 3268837738 ecr 3268837738,nop,wscale 7], length 0
09:55:12.253379 IP localhost.53080 > localhost.12543: Flags [.], ack 1, win 512, options [nop,nop,TS val 3268837738 ecr 3268837738], length 0
09:55:12.253412 IP localhost.53080 > localhost.12543: Flags [P.], seq 1:29, ack 1, win 512, options [nop,nop,TS val 3268837738 ecr 3268837738], length 28
09:55:12.253863 IP localhost.12543 > localhost.53080: Flags [P.], seq 1:17, ack 29, win 512, options [nop,nop,TS val 3268837739 ecr 3268837738], length 16
09:55:12.253891 IP localhost.53080 > localhost.12543: Flags [.], ack 17, win 512, options [nop,nop,TS val 3268837739 ecr 3268837739], length 0
09:55:12.254265 IP localhost.12543 > localhost.53080: Flags [F.], seq 17, ack 29, win 512, options [nop,nop,TS val 3268837739 ecr 3268837739], length 0
09:55:12.301992 IP localhost.53080 > localhost.12543: Flags [.], ack 18, win 512, options [nop,nop,TS val 3268837787 ecr 3268837739], length 0
09:55:15.254389 IP localhost.53080 > localhost.12543: Flags [F.], seq 29, ack 18, win 512, options [nop,nop,TS val 3268840739 ecr 3268837739], length 0
09:55:15.254426 IP localhost.12543 > localhost.53080: Flags [.], ack 30, win 512, options [nop,nop,TS val 3268840739 ecr 3268840739], length 0
Any idea what went wrong here?
Also, would you mind running netperf to see if there is a similar issue
on your system?
>>
>> Michal
30 Apr '21
From: Jonathan Cameron <Jonathan.Cameron(a)huawei.com>
Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Nodes Structure representing
a higher level of topology.
For example, Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share L3 cache data, but each cluster
has a local L3 tag. On the other hand, each cluster shares some
internal system bus.
+-----------------------------------+                          +---------+
|  +------+    +------+             +--------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |  data   |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          +---------+
+-----------------------------------+
That means the cost to transfer ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decisions on thread
migration.
This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
A PoC of hwloc support is at [2].
Note this patch only handles the ACPI case.
Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).
Currently the ID provided is the offset of the Processor
Hierarchy Node Structure within the PPTT. Whilst this is unique,
it is not terribly elegant, so alternative suggestions are welcome.
Note that arm64 / ACPI does not provide any means of identifying
a die level in the topology, but that may be unrelated to the cluster
level.
[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster
Signed-off-by: Jonathan Cameron <Jonathan.Cameron(a)huawei.com>
Signed-off-by: Barry Song <song.bao.hua(a)hisilicon.com>
---
-v6:
* the topology ABI documentation required by Greg is not completed yet;
a separate patch will follow for that.
Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
arch/arm64/kernel/topology.c | 2 +
drivers/acpi/pptt.c | 63 +++++++++++++++++++++++++++++++
drivers/base/arch_topology.c | 15 ++++++++
drivers/base/topology.c | 10 +++++
include/linux/acpi.h | 5 +++
include/linux/arch_topology.h | 5 +++
include/linux/topology.h | 6 +++
8 files changed, 128 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
identifier (rather than the kernel's). The actual value is
architecture and platform dependent.
+cluster_id:
+
+ the Cluster ID of cpuX. Typically it is the hardware platform's
+ identifier (rather than the kernel's). The actual value is
+ architecture and platform dependent.
+
book_id:
the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
human-readable list of CPUs sharing the same physical_package_id.
(deprecated name: "core_siblings_list")
+cluster_cpus:
+
+ internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+ human-readable list of CPUs within the same cluster.
+
die_cpus:
internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
#define topology_physical_package_id(cpu)
#define topology_die_id(cpu)
+ #define topology_cluster_id(cpu)
#define topology_core_id(cpu)
#define topology_book_id(cpu)
#define topology_drawer_id(cpu)
#define topology_sibling_cpumask(cpu)
#define topology_core_cpumask(cpu)
+ #define topology_cluster_cpumask(cpu)
#define topology_die_cpumask(cpu)
#define topology_book_cpumask(cpu)
#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
1) topology_physical_package_id: -1
2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index e08a412..d72eb8d 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
cpu_topology[cpu].thread_id = -1;
cpu_topology[cpu].core_id = topology_id;
}
+ topology_id = find_acpi_cpu_topology_cluster(cpu);
+ cpu_topology[cpu].cluster_id = topology_id;
topology_id = find_acpi_cpu_topology_package(cpu);
cpu_topology[cpu].package_id = topology_id;
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..11f8b02 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,69 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
}
/**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+ struct acpi_table_header *table;
+ acpi_status status;
+ struct acpi_pptt_processor *cpu_node, *cluster_node;
+ u32 acpi_cpu_id;
+ int retval;
+ int is_thread;
+
+ status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+ if (ACPI_FAILURE(status)) {
+ acpi_pptt_warn_missing();
+ return -ENOENT;
+ }
+
+ acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+ cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+ if (cpu_node == NULL || !cpu_node->parent) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+
+ is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+ cluster_node = fetch_pptt_node(table, cpu_node->parent);
+ if (cluster_node == NULL) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+ if (is_thread) {
+ if (!cluster_node->parent) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+ cluster_node = fetch_pptt_node(table, cluster_node->parent);
+ if (cluster_node == NULL) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+ }
+ retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+ acpi_put_table(table);
+
+ return retval;
+}
+
+/**
* find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
* @cpu: Kernel logical CPU number
*
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..ca3b8c1 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
return core_mask;
}
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+ return &cpu_topology[cpu].cluster_sibling;
+}
+
void update_siblings_masks(unsigned int cpuid)
{
struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
if (cpuid_topo->package_id != cpu_topo->package_id)
continue;
+ if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+ cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+ cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+ }
+
cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
cpumask_clear(&cpu_topo->llc_sibling);
cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
+ cpumask_clear(&cpu_topo->cluster_sibling);
+ cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
cpumask_clear(&cpu_topo->core_sibling);
cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
cpumask_clear(&cpu_topo->thread_sibling);
@@ -556,6 +569,7 @@ void __init reset_cpu_topology(void)
cpu_topo->thread_id = -1;
cpu_topo->core_id = -1;
+ cpu_topo->cluster_id = -1;
cpu_topo->package_id = -1;
cpu_topo->llc_id = -1;
@@ -571,6 +585,7 @@ void remove_cpu_topology(unsigned int cpu)
cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
for_each_cpu(sibling, topology_sibling_cpumask(cpu))
cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
for_each_cpu(sibling, topology_llc_cpumask(cpu))
cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..7157ac0 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@
define_id_show_func(die_id);
static DEVICE_ATTR_RO(die_id);
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
define_id_show_func(core_id);
static DEVICE_ATTR_RO(core_id);
@@ -61,6 +64,10 @@
static DEVICE_ATTR_RO(core_siblings);
static DEVICE_ATTR_RO(core_siblings_list);
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
define_siblings_show_func(die_cpus, die_cpumask);
static DEVICE_ATTR_RO(die_cpus);
static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@
static struct attribute *default_attrs[] = {
&dev_attr_physical_package_id.attr,
&dev_attr_die_id.attr,
+ &dev_attr_cluster_id.attr,
&dev_attr_core_id.attr,
&dev_attr_thread_siblings.attr,
&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@
&dev_attr_core_cpus_list.attr,
&dev_attr_core_siblings.attr,
&dev_attr_core_siblings_list.attr,
+ &dev_attr_cluster_cpus.attr,
+ &dev_attr_cluster_cpus_list.attr,
&dev_attr_die_cpus.attr,
&dev_attr_die_cpus_list.attr,
&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 9f43241..138b779 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1307,6 +1307,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
#ifdef CONFIG_ACPI_PPTT
int acpi_pptt_cpu_is_thread(unsigned int cpu);
int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
int find_acpi_cpu_topology_package(unsigned int cpu);
int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1319,6 +1320,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
{
return -EINVAL;
}
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+ return -EINVAL;
+}
static inline int find_acpi_cpu_topology_package(unsigned int cpu)
{
return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 0f6cd6b..987c7ea 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
struct cpu_topology {
int thread_id;
int core_id;
+ int cluster_id;
int package_id;
int llc_id;
cpumask_t thread_sibling;
cpumask_t core_sibling;
+ cpumask_t cluster_sibling;
cpumask_t llc_sibling;
};
@@ -60,13 +62,16 @@ struct cpu_topology {
extern struct cpu_topology cpu_topology[NR_CPUS];
#define topology_physical_package_id(cpu) (cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu) (cpu_topology[cpu].cluster_id)
#define topology_core_id(cpu) (cpu_topology[cpu].core_id)
#define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling)
#define topology_sibling_cpumask(cpu) (&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu) (&cpu_topology[cpu].cluster_sibling)
#define topology_llc_cpumask(cpu) (&cpu_topology[cpu].llc_sibling)
void init_cpu_topology(void);
void store_cpu_topology(unsigned int cpuid);
const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
void update_siblings_masks(unsigned int cpu);
void remove_cpu_topology(unsigned int cpuid);
void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7634cd7..80d27d7 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -186,6 +186,9 @@ static inline int cpu_to_mem(int cpu)
#ifndef topology_die_id
#define topology_die_id(cpu) ((void)(cpu), -1)
#endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu) ((void)(cpu), -1)
+#endif
#ifndef topology_core_id
#define topology_core_id(cpu) ((void)(cpu), 0)
#endif
@@ -195,6 +198,9 @@ static inline int cpu_to_mem(int cpu)
#ifndef topology_core_cpumask
#define topology_core_cpumask(cpu) cpumask_of(cpu)
#endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu) cpumask_of(cpu)
+#endif
#ifndef topology_die_cpumask
#define topology_die_cpumask(cpu) cpumask_of(cpu)
#endif
--
1.8.3.1
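For anyone wanting to poke at the new ABI from userspace: a minimal sketch (hypothetical example reader, not part of the patch) that reads the cluster attributes this adds under /sys/devices/system/cpu/cpuN/topology/:
#include <stdio.h>
/* Print the cluster attributes added by this patch for one CPU.
 * The attributes are absent on kernels without the patch, so a
 * failed open is simply skipped. */
static void print_cluster_info(int cpu)
{
	const char *attrs[] = { "cluster_id", "cluster_cpus_list" };
	char path[128], buf[256];
	int i;

	for (i = 0; i < 2; i++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/%s",
			 cpu, attrs[i]);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(buf, sizeof(buf), f))
			printf("cpu%d %s: %s", cpu, attrs[i], buf);
		fclose(f);
	}
}
int main(void)
{
	print_cluster_info(0);
	return 0;
}
On a Kunpeng 920 with the patch applied, cluster_cpus_list for cpu0 would be expected to read 0-3.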
Re: [PATCH net 0/3] net: hns3: add some fixes for -net
by patchwork-bot+netdevbpf@kernel.org 30 Apr '21
Hello:
This series was applied to netdev/net.git (refs/heads/master):
On Thu, 29 Apr 2021 16:34:49 +0800 you wrote:
> This series adds some fixes for the HNS3 ethernet driver.
>
> Jian Shen (1):
> net: hns3: add check for HNS3_NIC_STATE_INITED in
> hns3_reset_notify_up_enet()
>
> Yufeng Mo (2):
> net: hns3: fix incorrect configuration for igu_egu_hw_err
> net: hns3: initialize the message content in hclge_get_link_mode()
>
> [...]
Here is the summary with links:
- [net,1/3] net: hns3: fix incorrect configuration for igu_egu_hw_err
https://git.kernel.org/netdev/net/c/2867298dd49e
- [net,2/3] net: hns3: initialize the message content in hclge_get_link_mode()
https://git.kernel.org/netdev/net/c/568a54bdf70b
- [net,3/3] net: hns3: add check for HNS3_NIC_STATE_INITED in hns3_reset_notify_up_enet()
https://git.kernel.org/netdev/net/c/b4047aac4ec1
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
by Dietmar Eggemann 29 Apr '21
On 20/04/2021 02:18, Barry Song wrote:
[...]
> @@ -5786,11 +5786,12 @@ static void record_wakee(struct task_struct *p)
> * whatever is irrelevant, spread criteria is apparent partner count exceeds
> * socket size.
> */
> -static int wake_wide(struct task_struct *p)
> +static int wake_wide(struct task_struct *p, int cluster)
> {
> unsigned int master = current->wakee_flips;
> unsigned int slave = p->wakee_flips;
> - int factor = __this_cpu_read(sd_llc_size);
> + int factor = cluster ? __this_cpu_read(sd_cluster_size) :
> + __this_cpu_read(sd_llc_size);
I don't see that the wake_wide() change has any effect here. None of the
sched domains has SD_BALANCE_WAKE set so a wakeup (WF_TTWU) can never
end up in the slow path.
Have you seen a difference, when running your `lmbench stream` workload,
in what wake_wide() returns when you use `sd cluster size` instead of
`sd llc size` as the factor?
I guess for you, wakeups are now subdivided into faster (cluster = 4
CPUs) and fast (llc = 24 CPUs) via sis(), not into fast (sis()) and slow
(find_idlest_cpu()).
>
> if (master < slave)
> swap(master, slave);
[...]
> @@ -6745,6 +6748,12 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> int want_affine = 0;
> /* SD_flags and WF_flags share the first nibble */
> int sd_flag = wake_flags & 0xF;
> + /*
> + * if cpu and prev_cpu share LLC, consider cluster sibling rather
> + * than llc. this is typically true while tasks are bound within
> + * one numa
> + */
> + int cluster = sched_cluster_active() && cpus_share_cache(cpu, prev_cpu, 0);
So you changed from scanning cluster before LLC to scan either cluster
or LLC.
And this is based on whether `this_cpu` and `prev_cpu` are sharing LLC
or not. So you only see an effect when running the workload with
`numactl -N X ...`.
>
> if (wake_flags & WF_TTWU) {
> record_wakee(p);
> @@ -6756,7 +6765,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> new_cpu = prev_cpu;
> }
>
> - want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
> + want_affine = !wake_wide(p, cluster) && cpumask_test_cpu(cpu, p->cpus_ptr);
> }
>
> rcu_read_lock();
> @@ -6768,7 +6777,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
> cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
> if (cpu != prev_cpu)
> - new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> + new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync, cluster);
>
> sd = NULL; /* Prefer wake_affine over balance flags */
> break;
> @@ -6785,7 +6794,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
> } else if (wake_flags & WF_TTWU) { /* XXX always ? */
> /* Fast path */
> - new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
> + new_cpu = select_idle_sibling(p, prev_cpu, new_cpu, cluster);
>
> if (want_affine)
> current->recent_used_cpu = cpu;
[...]
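For context, since the quote elides it: the tail of wake_wide() that consumes this factor reads roughly as below in mainline (a standalone sketch with swap() expanded, not the patched version), so shrinking the factor from llc size (24) to cluster size (4) only lowers the flip-count threshold at which wake_wide() reports a wide wakeup:
/* 'factor' is sd_llc_size in mainline; the patch under discussion
 * swaps in sd_cluster_size when waker and wakee share the LLC. */
static int wake_wide_tail(unsigned int master, unsigned int slave,
			  unsigned int factor)
{
	if (master < slave) {
		unsigned int tmp = master;	/* kernel code uses swap() */

		master = slave;
		slave = tmp;
	}
	/* Flip counts don't indicate a 1:N waker/wakee pattern wider
	 * than the domain: prefer an affine wakeup. */
	if (slave < factor || master < slave * factor)
		return 0;
	return 1;
}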
Re: [PATCH V3 7/7] app/testpmd: remove redundant fwd streams initialization
by Li, Xiaoyun 27 Apr '21
> -----Original Message-----
> From: Huisong Li <lihuisong(a)huawei.com>
> Sent: Tuesday, April 20, 2021 17:01
> To: dev(a)dpdk.org
> Cc: Yigit, Ferruh <ferruh.yigit(a)intel.com>; Li, Xiaoyun <xiaoyun.li(a)intel.com>;
> linuxarm(a)openeuler.org; lihuisong(a)huawei.com
> Subject: [PATCH V3 7/7] app/testpmd: remove redundant fwd streams
> initialization
>
> The fwd_config_setup() is called after init_fwd_streams().
> The fwd_config_setup() will reinitialize forwarding streams.
> This patch removes init_fwd_streams() from init_config().
>
> Signed-off-by: Huisong Li <lihuisong(a)huawei.com>
> Signed-off-by: Lijun Ou <oulijun(a)huawei.com>
> ---
Agree. Seems redundant. Every fwd setup will call init_fwd_streams() again.
Acked-by: Xiaoyun Li <xiaoyun.li(a)intel.com>
27 Apr '21
> -----Original Message-----
> From: Huisong Li <lihuisong(a)huawei.com>
> Sent: Tuesday, April 20, 2021 17:01
> To: dev(a)dpdk.org
> Cc: Yigit, Ferruh <ferruh.yigit(a)intel.com>; Li, Xiaoyun <xiaoyun.li(a)intel.com>;
> linuxarm(a)openeuler.org; lihuisong(a)huawei.com
> Subject: [PATCH V3 6/7] app/testpmd: add forwarding config in start port
>
> Most operations in testpmd that need to update the forwarding streams
> call fwd_config_setup(). In some scenarios, e.g. when dev_configure is called
> again, the forwarding streams may not be updated. As a result, the actual
> forwarding streams cannot be queried by the "show config fwd" cmd.
I don't agree with this.
The fwd config should only be changed after the user changes something like nb-cores, queue numbers, or eth-peer in non-DCB mode, and those commands already handle it.
I agree that you should do fwd_config_setup at the end of cmd_config_dcb_parsed().
But doing it in start port runs fwd setup a redundant number of times; it's not really needed.
>
> The procedure is as follows:
> set nbcore 4
> port stop all
> port config 0 dcb vt off 4 pfc on
> port start all
> show config fwd
>
> Signed-off-by: Huisong Li <lihuisong(a)huawei.com>
> Signed-off-by: Lijun Ou <oulijun(a)huawei.com>
> ---
> app/test-pmd/testpmd.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> abcbdaa..f8052b6 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2678,6 +2678,12 @@ start_port(portid_t pid)
> }
> }
> }
> + /*
> + * In some scenarios, eg, dev_configure is called again, the forwarding
> + * streams may not be updated. As a result, the actual forwarding
> + * streams cannot be queried by "show config fwd" command.
> + */
> + fwd_config_setup();
>
> printf("Done\n");
> return 0;
> --
> 2.7.4
27 Apr '21
> -----Original Message-----
> From: Huisong Li <lihuisong(a)huawei.com>
> Sent: Tuesday, April 20, 2021 17:01
> To: dev(a)dpdk.org
> Cc: Yigit, Ferruh <ferruh.yigit(a)intel.com>; Li, Xiaoyun <xiaoyun.li(a)intel.com>;
> linuxarm(a)openeuler.org; lihuisong(a)huawei.com
> Subject: [PATCH V3 5/7] app/testpmd: move position of verifying DCB test
>
> Currently, the check for doing the DCB test is assigned to start_packet_forwarding(),
> which will be called when running the "start" cmd. But fwd_config_setup() is used in
> many scenarios, such as "port config all rxq".
>
> This patch moves the check from start_packet_forwarding() to
> fwd_config_setup().
>
> Fixes: 7741e4cf16c0 ("app/testpmd: VMDq and DCB updates")
>
> Signed-off-by: Huisong Li <lihuisong(a)huawei.com>
> Signed-off-by: Lijun Ou <oulijun(a)huawei.com>
> ---
> app/test-pmd/config.c | 23 +++++++++++++++++++++-- app/test-
> pmd/testpmd.c | 19 -------------------
> 2 files changed, 21 insertions(+), 21 deletions(-)
Acked-by: Xiaoyun Li <xiaoyun.li(a)intel.com>
Re: [PATCH V3 4/7] app/testpmd: add check for support of reporting DCB info
by Li, Xiaoyun 27 Apr '21
> -----Original Message-----
> From: Huisong Li <lihuisong(a)huawei.com>
> Sent: Tuesday, April 20, 2021 17:01
> To: dev(a)dpdk.org
> Cc: Yigit, Ferruh <ferruh.yigit(a)intel.com>; Li, Xiaoyun <xiaoyun.li(a)intel.com>;
> linuxarm(a)openeuler.org; lihuisong(a)huawei.com
> Subject: [PATCH V3 4/7] app/testpmd: add check for support of reporting DCB
> info
>
> Currently, '.get_dcb_info' must be supported for the port doing the DCB test, or all
> information in 'rte_eth_dcb_info' is zero. This should be prevented when the user runs
> cmd "port config 0 dcb vt off 4 pfc off".
>
> This patch adds the check for support of reporting dcb info.
>
> Signed-off-by: Huisong Li <lihuisong(a)huawei.com>
> Signed-off-by: Lijun Ou <oulijun(a)huawei.com>
> ---
> V2->V3
> - fix the abnormal print information
>
> ---
> app/test-pmd/cmdline.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
> 2.7.4
Acked-by: Xiaoyun Li <xiaoyun.li(a)intel.com>
27 Apr '21
> -----Original Message-----
> From: Huisong Li <lihuisong(a)huawei.com>
> Sent: Tuesday, April 20, 2021 17:01
> To: dev(a)dpdk.org
> Cc: Yigit, Ferruh <ferruh.yigit(a)intel.com>; Li, Xiaoyun <xiaoyun.li(a)intel.com>;
> linuxarm(a)openeuler.org; lihuisong(a)huawei.com
> Subject: [PATCH V3 3/7] app/testpmd: fix a segment fault when DCB test
>
> After DCB mode is configured, if we decrease the number of RX and TX queues,
> fwd_config_setup() will be called to setup the DCB forwarding configuration.
> And forwarding streams are updated based on new queue numbers in
> fwd_config_setup(), but the mapping between the TC and queues obtained by
> rte_eth_dev_get_dcb_info() is still old queue numbers (old queue numbers are
> greater than new queue numbers).
> In this case, a segmentation fault happens. So rte_eth_dev_configure() should be
> called again to update the mapping between the TC and queues before
> rte_eth_dev_get_dcb_info().
>
> Like:
> set nbcore 4
> port stop all
> port config 0 dcb vt off 4 pfc on
> port start all
> port stop all
> port config all rxq 8
> port config all txq 8
>
> Fixes: 900550de04a7 ("app/testpmd: add dcb support")
> Cc: stable(a)dpdk.org
>
> Signed-off-by: Huisong Li <lihuisong(a)huawei.com>
> Signed-off-by: Lijun Ou <oulijun(a)huawei.com>
> ---
> app/test-pmd/config.c | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> 03ee40c..18b197b 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -2996,7 +2996,33 @@ dcb_fwd_config_setup(void)
> uint16_t nb_rx_queue, nb_tx_queue;
> uint16_t i, j, k, sm_id = 0;
> uint16_t total_tc_num;
> + struct rte_port *port;
> uint8_t tc = 0;
> + portid_t pid;
> + int ret;
> +
> + /*
> + * The fwd_config_setup() is called when the port is
> RTE_PORT_STARTED
When the port is RTE_PORT_STARTED or RTE_PORT_STOPPED? There is an 'or' missing here.
Maybe something like the following would be better (just a reference; you can put it in a better way):
/*
* Re-configure ports to get updated mapping between tc and queue in case the queue number of the port is changed.
* Skip for started ports since configuring queue number needs to stop ports first.
*/
> + * RTE_PORT_STOPPED. When a port is RTE_PORT_STARTED,
> dev_configure
> + * cannot be called.
> + *
> + * re-configure the device after changing queue numbers of stopped
> + * ports, so that the updated mapping between tc and queue can be
> + * obtained.
> + */
> + for (pid = 0; pid < nb_fwd_ports; pid++) {
> + if (port_is_started(pid) == 1)
> + continue;
> +
> + port = &ports[pid];
> + ret = rte_eth_dev_configure(pid, nb_rxq, nb_txq,
> + &port->dev_conf);
> + if (ret < 0) {
> + printf("Failed to re-configure port %d, ret = %d.\n",
> + pid, ret);
> + return;
> + }
> + }
>
> cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
> cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
> --
> 2.7.4