
From: Peter Zijlstra <peterz@infradead.org>

mainline inclusion
from mainline-v5.10-rc1
commit 3dbde69575637658d2094ee4416c21bc22eb89fe
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I47H3V
CVE: NA

--------------------------------

commit 3dbde69575637658d2094ee4416c21bc22eb89fe upstream

Backport summary: backport to kernel 4.19.57 for ICX perf topdown support

When a group that has TopDown members fails to be scheduled, any later
TopDown groups will not return valid values.

Here is an example.

A background perf that occupies all the GP counters and the fixed
counter 1.
 $perf stat -e "{cycles,cycles,cycles,cycles,cycles,cycles,cycles,
                 cycles,cycles}:D" -a

A user monitors a TopDown group. It works well, because the fixed
counter 3 and the PERF_METRICS are available.
 $perf stat -x, --topdown -- ./workload
   retiring,bad speculation,frontend bound,backend bound,
   18.0,16.1,40.4,25.5,

Then the user tries to monitor a group that has TopDown members.
Because of the cycles event, the group fails to be scheduled.
 $perf stat -x, -e '{slots,topdown-retiring,topdown-be-bound,
                     topdown-fe-bound,topdown-bad-spec,cycles}' -- ./workload
   <not counted>,,slots,0,0.00,,
   <not counted>,,topdown-retiring,0,0.00,,
   <not counted>,,topdown-be-bound,0,0.00,,
   <not counted>,,topdown-fe-bound,0,0.00,,
   <not counted>,,topdown-bad-spec,0,0.00,,
   <not counted>,,cycles,0,0.00,,

The user tries to monitor a TopDown group again. It doesn't work anymore.
 $perf stat -x, --topdown -- ./workload
   ,,,,,

In a txn, cancel_txn() truncates the event_list for a cancelled group
and updates the number of events added in this transaction. However,
the number of TopDown events added in this transaction is not updated.
The kernel will probably fail to add new TopDown events afterwards.

Fixes: 7b2c05a15d29 ("perf/x86/intel: Generic support for hardware TopDown metrics")
Reported-by: Andi Kleen <ak@linux.intel.com>
Reported-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Kan Liang <kan.liang@linux.intel.com>
Link: https://lkml.kernel.org/r/20201005082611.GH2628@hirez.programming.kicks-ass....
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/x86/events/core.c       | 3 +++
 arch/x86/events/perf_event.h | 1 +
 2 files changed, 4 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 6e2a5ae4608bb..d5c95336ec158 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1020,6 +1020,7 @@ static int add_nr_metric_event(struct cpu_hw_events *cpuc,
 		if (cpuc->n_metric == INTEL_TD_METRIC_NUM)
 			return -EINVAL;
 		cpuc->n_metric++;
+		cpuc->n_txn_metric++;
 	}
 
 	return 0;
@@ -1941,6 +1942,7 @@ static void x86_pmu_start_txn(struct pmu *pmu, unsigned int txn_flags)
 
 	perf_pmu_disable(pmu);
 	__this_cpu_write(cpu_hw_events.n_txn, 0);
+	__this_cpu_write(cpu_hw_events.n_txn_metric, 0);
 }
 
 /*
@@ -1966,6 +1968,7 @@ static void x86_pmu_cancel_txn(struct pmu *pmu)
 	 */
 	__this_cpu_sub(cpu_hw_events.n_added, __this_cpu_read(cpu_hw_events.n_txn));
 	__this_cpu_sub(cpu_hw_events.n_events, __this_cpu_read(cpu_hw_events.n_txn));
+	__this_cpu_sub(cpu_hw_events.n_metric, __this_cpu_read(cpu_hw_events.n_txn_metric));
 	perf_pmu_enable(pmu);
 }
 
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index afc9789c176a8..db719665e1472 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -220,6 +220,7 @@ struct cpu_hw_events {
 					     they've never been enabled yet */
 	int			n_txn;    /* the # last events in the below arrays;
 					     added in the current transaction */
+	int			n_txn_metric;
 	int			assign[X86_PMC_IDX_MAX]; /* event to counter assignment */
 	u64			tags[X86_PMC_IDX_MAX];
-- 
2.25.1
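
For readers unfamiliar with the PMU transaction bookkeeping, here is a
minimal standalone sketch (not part of the patch) of why a
per-transaction counter is needed to roll back n_metric when a group is
cancelled. Only the names n_metric, n_txn_metric and INTEL_TD_METRIC_NUM
mirror the kernel code; the struct, functions and limit value below are
simplified assumptions for illustration.

/*
 * Illustration only: per-transaction rollback of the TopDown metric count.
 * Field names mirror struct cpu_hw_events; everything else is simplified.
 */
#include <stdio.h>

#define INTEL_TD_METRIC_NUM 4   /* illustrative limit on metric events */

struct cpu_hw_events_sketch {
	int n_metric;      /* TopDown metric events currently accounted */
	int n_txn_metric;  /* ... of which added in the current transaction */
};

static void start_txn(struct cpu_hw_events_sketch *c)
{
	c->n_txn_metric = 0;             /* nothing added in this txn yet */
}

static int add_metric_event(struct cpu_hw_events_sketch *c)
{
	if (c->n_metric == INTEL_TD_METRIC_NUM)
		return -1;               /* no room, like -EINVAL in the kernel */
	c->n_metric++;
	c->n_txn_metric++;               /* the increment the patch adds */
	return 0;
}

static void cancel_txn(struct cpu_hw_events_sketch *c)
{
	/* Roll back only what this transaction added. */
	c->n_metric -= c->n_txn_metric;
}

int main(void)
{
	struct cpu_hw_events_sketch c = { 0, 0 };

	start_txn(&c);
	add_metric_event(&c);            /* e.g. topdown-retiring */
	add_metric_event(&c);            /* e.g. topdown-be-bound */
	cancel_txn(&c);                  /* group could not be scheduled */

	/*
	 * Without n_txn_metric, n_metric would stay at 2 after the failed
	 * group and eventually reach INTEL_TD_METRIC_NUM, so later TopDown
	 * groups would be rejected even though no metric event is active.
	 */
	printf("n_metric after cancelled txn: %d\n", c.n_metric);
	return 0;
}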