[PATCH OLK-6.6] tools/kspect: add ifstool utility for kernel interference statistics
hulk inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/7429
--------------------------------
As real-time and high-performance workloads become more sensitive to
execution jitter, observability into kernel-induced "noise" is critical.
The CONFIG_IFS infrastructure provides this telemetry, but raw data from
cgroups is difficult to parse and visualize manually.
Introduce ifstool, a userspace utility to monitor and analyze the
interference.stat interface.
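
For reference, interference.stat exposes per-category time totals followed by
per-category histogram sections. The split-and-parse step used by ifstool can
be sketched as follows (the sample content, category names, and values are
illustrative, not captured output):

```python
import re

# Illustrative interference.stat content (values are examples only).
SAMPLE = """\
irq 123456
softirq 7890
spinlock 4567
spinlock distribution
[64 ns, 128 ns) : 143791
[128 ns, 256 ns) : 2048
"""

# Split before each "<name> distribution" header, as ifstool's parser does.
sections = re.split(r"\n(?=[a-z]+ distribution)", SAMPLE)

# First section: global per-category totals.
totals = {}
for line in sections[0].strip().split("\n"):
    m = re.match(r"^([a-z]+)\s+(\d+)$", line.strip())
    if m:
        totals[m.group(1)] = int(m.group(2))

# Remaining sections: "bucket : count" histogram pairs.
dist = {}
for line in sections[1].strip().split("\n")[1:]:
    key, val = line.split(":")
    dist[key.strip()] = int(val.strip())

print(totals)  # {'irq': 123456, 'softirq': 7890, 'spinlock': 4567}
print(dist["[64 ns, 128 ns)"])  # 143791
```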
Signed-off-by: Tengda Wu
---
tools/Makefile | 11 +-
tools/kspect/Makefile | 28 +++
tools/kspect/README | 64 ++++++
tools/kspect/ifstool | 396 ++++++++++++++++++++++++++++++++++
tools/kspect/ifstool.1 | 81 +++++++
tools/kspect/requirements.txt | 2 +
6 files changed, 578 insertions(+), 4 deletions(-)
create mode 100644 tools/kspect/Makefile
create mode 100644 tools/kspect/README
create mode 100644 tools/kspect/ifstool
create mode 100644 tools/kspect/ifstool.1
create mode 100644 tools/kspect/requirements.txt
diff --git a/tools/Makefile b/tools/Makefile
index 37e9f6804832..153817f0cc17 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -22,6 +22,7 @@ help:
@echo ' hv - tools used when in Hyper-V clients'
@echo ' iio - IIO tools'
@echo ' intel-speed-select - Intel Speed Select tool'
+ @echo ' kspect - KSPECT tools'
@echo ' kvm_stat - top-like utility for displaying kvm statistics'
@echo ' leds - LEDs tools'
@echo ' nolibc - nolibc headers testing and installation'
@@ -69,7 +70,7 @@ acpi: FORCE
cpupower: FORCE
$(call descend,power/$@)
-cgroup counter firewire hv guest bootconfig spi usb virtio mm bpf iio gpio objtool leds wmi pci firmware debugging tracing: FORCE
+cgroup counter firewire hv guest bootconfig spi usb virtio mm bpf iio gpio objtool leds wmi pci firmware debugging tracing kspect: FORCE
$(call descend,$@)
bpf/%: FORCE
@@ -120,7 +121,8 @@ all: acpi cgroup counter cpupower gpio hv firewire \
perf selftests bootconfig spi turbostat usb \
virtio mm bpf x86_energy_perf_policy \
tmon freefall iio objtool kvm_stat wmi \
- pci debugging tracing thermal thermometer thermal-engine
+ pci debugging tracing thermal thermometer thermal-engine \
+ kspect
acpi_install:
$(call descend,power/$(@:_install=),install)
@@ -128,7 +130,7 @@ acpi_install:
cpupower_install:
$(call descend,power/$(@:_install=),install)
-cgroup_install counter_install firewire_install gpio_install hv_install iio_install perf_install bootconfig_install spi_install usb_install virtio_install mm_install bpf_install objtool_install wmi_install pci_install debugging_install tracing_install:
+cgroup_install counter_install firewire_install gpio_install hv_install iio_install perf_install bootconfig_install spi_install usb_install virtio_install mm_install bpf_install objtool_install wmi_install pci_install debugging_install tracing_install kspect_install:
$(call descend,$(@:_install=),install)
selftests_install:
@@ -161,7 +163,8 @@ install: acpi_install cgroup_install counter_install cpupower_install gpio_insta
virtio_install mm_install bpf_install x86_energy_perf_policy_install \
tmon_install freefall_install objtool_install kvm_stat_install \
wmi_install pci_install debugging_install intel-speed-select_install \
- tracing_install thermometer_install thermal-engine_install
+ tracing_install thermometer_install thermal-engine_install \
+ kspect_install
acpi_clean:
$(call descend,power/acpi,clean)
diff --git a/tools/kspect/Makefile b/tools/kspect/Makefile
new file mode 100644
index 000000000000..ffb380ddc885
--- /dev/null
+++ b/tools/kspect/Makefile
@@ -0,0 +1,28 @@
+# SPDX-License-Identifier: GPL-2.0
+
+PREFIX ?= /usr/local
+BINDIR = $(PREFIX)/bin
+MANDIR = $(PREFIX)/share/man/man1
+
+MAN1 = ifstool.1
+TARGET = ifstool
+
+all: man
+
+man: $(MAN1)
+
+install-man:
+ install -d $(MANDIR)
+ install -m 0644 $(MAN1) $(MANDIR)
+
+install-tools:
+ install -d $(BINDIR)
+ install -m 0755 $(TARGET) $(BINDIR)
+
+install: install-tools install-man
+
+uninstall:
+ rm -f $(BINDIR)/$(TARGET)
+ rm -f $(MANDIR)/$(MAN1)
+
+.PHONY: all man install install-tools install-man uninstall
diff --git a/tools/kspect/README b/tools/kspect/README
new file mode 100644
index 000000000000..afb92a7f98d0
--- /dev/null
+++ b/tools/kspect/README
@@ -0,0 +1,64 @@
+IFSTOOL - Interference Statistics Analytical Utility
+
+Overview
+========
+IFSTOOL is a specialized userspace utility designed to facilitate the
+monitoring and analysis of Interference Statistics (CONFIG_IFS).
+
+The IFS infrastructure is a kernel-level framework that provides critical
+observability into execution jitter (noise) that disrupts task determinism.
+It monitors and quantifies the CPU time stolen by kernel-level activities,
+such as: interrupt handling, softirqs, and lock contention. The framework
+exposes this telemetry via the interference.stat control file within the
+cgroup hierarchy.
+
+IFSTOOL interfaces with the interference.stat file to export raw metrics
+into structured CSV data and interactive HTML-based distribution reports.
+
+Setup
+=====
+The host environment must meet the following criteria:
+
+* Kernel: Compiled with `CONFIG_IFS=y`.
+* Boot Parameters: `cgroup_ifs=1` added to the kernel command line.
+* Python Runtime: Python 3.x with `pandas` and `plotly` libraries.
+* Cgroup Hierarchy: Either v2 unified or v1 with the cpu subsystem mounted.
+
+Build
+=====
+IFSTOOL provides a Makefile to streamline the installation of the
+executable and its documentation.
+
+ $ make install
+
+Run
+===
+IFSTOOL operates via two primary functional modes: monitor and report.
+
+1. **Monitor:** Capture raw interference data from a target cgroup.
+
+ $ ifstool monitor --cgroup docker/ --interval 1 --output capture.csv
+
+2. **Report:** Transform captured CSV data into an interactive HTML dashboard.
+
+ - Provide only --base for a single-session deep dive:
+
+ $ ifstool report --base capture.csv
+
+ - Provide both --base and --curr to render a differential report,
+ ideal for validating optimizations:
+
+ $ ifstool report --base baseline.csv --curr current.csv
+
+HTML Description
+================
+The generated HTML report provides a multi-dimensional view of kernel noise:
+
+- Total Time Delta Trend: A time-series line chart illustrating the incremental
+ nanoseconds of interference per category (e.g., irq, spinlock).
+- Latency Heatmaps: A frequency-domain visualization of the kernel's internal
+ histogram.
+ - X-axis: Wall-clock time of the trace.
+ - Y-axis: Latency magnitude (logarithmic buckets from ns to s).
+ - Color Intensity: Represents the density (event count) of interference
+ within that specific latency window.
diff --git a/tools/kspect/ifstool b/tools/kspect/ifstool
new file mode 100644
index 000000000000..dcaa8a84272d
--- /dev/null
+++ b/tools/kspect/ifstool
@@ -0,0 +1,396 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# ifstool: A tool to monitor and report interference statistics (CONFIG_IFS).
+#
+# Copyright(c) 2026. Huawei Technologies Co., Ltd
+#
+# Authors:
+# Tengda Wu
+
+import os
+import time
+import re
+import csv
+import argparse
+import pandas as pd
+import plotly.graph_objects as go
+from plotly.subplots import make_subplots
+
+
+class InterferenceTool:
+ """
+ A utility class to monitor Linux cgroup interference statistics (CONFIG_IFS)
+ and generate visual comparison reports between baseline and current data.
+ """
+
+ def __init__(self):
+ # Default path for cgroup v2 interference stats
+ self.base_path = "/sys/fs/cgroup"
+
+ def parse_stat(self, content):
+ """
+ Parses the raw content of the interference.stat file.
+
+ Args:
+ content (str): Raw string content from the stat file.
+ Returns:
+ tuple: (total_times dict, distributions dict)
+ """
+
+ total_times, distributions = {}, {}
+
+ # Split content into sections: Top-level totals and various distributions
+ # Uses positive lookahead to split before a word followed by ' distribution'
+ sections = re.split(r"\n(?=[a-z]+ distribution)", content)
+
+ # Parse global total times (first section)
+ for line in sections[0].strip().split("\n"):
+ match = re.match(r"^([a-z]+)\s+(\d+)$", line.strip())
+ if match:
+ total_times[match.group(1)] = int(match.group(2))
+
+ # Parse histogram distributions (subsequent sections)
+ for section in sections[1:]:
+ lines = section.strip().split("\n")
+ # Extract header name (e.g., 'spinlock distribution')
+ header = lines[0].replace(" distribution", "").strip()
+ # Parse bucket key-value pairs (e.g., '[64 ns, 128 ns) : 143791')
+ dist_data = {
+ l.split(":")[0].strip(): int(l.split(":")[1].strip())
+ for l in lines[1:]
+ if ":" in l
+ }
+ distributions[header] = dist_data
+
+ return total_times, distributions
+
+ def monitor(self, cgroup_id, interval, duration, output_csv):
+ """
+ Periodically samples interference stats and saves results to a CSV file.
+
+ Args:
+ cgroup_id (str): The specific cgroup folder name.
+ interval (float): Seconds between samples.
+ duration (int): Total monitoring time in seconds.
+ output_csv (str): Filename to save sampled data.
+ """
+
+ candidates = [
+ os.path.join(self.base_path, cgroup_id),
+ os.path.join(self.base_path, "cpu", cgroup_id), # cgroup v1
+ ]
+ path = next((p for p in candidates if os.path.exists(p)), None)
+
+ if not path:
+ print(
+ f"\n[!] No access to cgroup: {os.path.join(self.base_path, cgroup_id)}"
+ )
+ print(" Hint: You can find the correct cid by executing:")
+ print(" cat /proc//cgroup")
+ return
+
+ path = os.path.join(path, "interference.stat")
+ if not os.path.exists(path):
+ print(f"\n[!] Interface file '{path}' not found.")
+ print(" This usually happens due to one of the following:")
+ print(" 1. Kernel is not compiled with CONFIG_IFS=y")
+ print(" 2. Kernel boot parameter 'cgroup_ifs=1' is missing")
+ print(" 3. The current cgroup is not managed by the IFS controller")
+ return
+
+ print(
+ f"[*] Starting monitor: {cgroup_id}, interval: {interval}s, duration: {duration}s"
+ )
+
+ data_list = []
+ start_time = time.time()
+
+ try:
+ while (time.time() - start_time) < duration:
+                # Use fractional seconds in the timestamp for sub-second intervals
+                ts = time.strftime("%H:%M:%S")
+                if interval < 1.0:
+                    ts += f".{int((time.time() % 1) * 100):02d}"
+
+ with open(path, "r") as f:
+ total_times, dists = self.parse_stat(f.read())
+
+ # Append total time metrics
+ for cat, val in total_times.items():
+ data_list.append([ts, cgroup_id, cat, "total_time_ns", val])
+
+ # Append bucket distribution metrics
+ for cat, dist in dists.items():
+ for b, c in dist.items():
+ data_list.append([ts, cgroup_id, cat, f"bucket_{b}", c])
+
+ time.sleep(interval)
+ except KeyboardInterrupt:
+ print("\n[!] Monitoring interrupted by user.")
+
+ # Write buffered data to CSV
+ with open(output_csv, "w", newline="") as f:
+ writer = csv.writer(f)
+ writer.writerow(
+ ["timestamp", "cgroup_id", "category", "metric_type", "value"]
+ )
+ writer.writerows(data_list)
+ print(f"[+] Data exported successfully: {output_csv}")
+
+ def bucket_key(self, bucket_str):
+ """
+ Parsing logic for histogram bucket labels to enable correct numerical sorting.
+ Example input: "bucket_[67.10 ms, 134.21 ms)"
+
+ Returns:
+ float: The value converted to nanoseconds (ns).
+ """
+ try:
+ # Regex to capture the first number and its unit (ns|us|ms|s)
+ match = re.search(r"(\d+\.?\d*)\s*(ns|us|ms|s)", bucket_str)
+ if not match:
+ return 0
+
+ value = float(match.group(1))
+ unit = match.group(2).lower()
+
+ # Conversion factors to nanoseconds
+ factors = {"ns": 1, "us": 1000, "ms": 1000000, "s": 1000000000}
+
+ return value * factors.get(unit, 1)
+ except Exception:
+ return 0
+
+ def report(self, base_csv, curr_csv, output_html):
+ """
+ Loads two CSV files and generates an interactive HTML dashboard.
+
+ Args:
+ base_csv (str): Path to baseline data.
+ curr_csv (str): Path to current data.
+ output_html (str): Path to generate the HTML report.
+ """
+
+ df_b = pd.read_csv(base_csv)
+ single_mode = curr_csv is None or base_csv == curr_csv
+ df_c = None if single_mode else pd.read_csv(curr_csv)
+
+ # Get unique union of categories present in both datasets
+ all_dfs = [df_b] if single_mode else [df_b, df_c]
+ categories = sorted(
+ list(set().union(*(df.category.unique() for df in all_dfs)))
+ )
+
+ # Color mapping to ensure consistent colors for categories across plots
+ color_sequence = [
+ "#636EFA",
+ "#EF553B",
+ "#00CC96",
+ "#AB63FA",
+ "#FFA15A",
+ "#19D3F3",
+ "#FF6692",
+ "#B6E880",
+ "#FF97FF",
+ "#FECB52",
+ ]
+ color_map = {
+ cat: color_sequence[i % len(color_sequence)]
+ for i, cat in enumerate(categories)
+ }
+
+ # Build HTML content with CSS for layout
+        html_parts = ["<html><head><title>IFS Analysis Dashboard</title>"]
+        col_width = "100%" if single_mode else "49.5%"
+        html_parts.append(
+            f"""<style>
+            body {{ font-family: sans-serif; margin: 20px; }}
+            .card {{ border: 1px solid #ddd; border-radius: 8px;
+                     padding: 12px; margin-bottom: 20px; }}
+            .row-container {{ display: flex; justify-content: space-between; }}
+            .col {{ width: {col_width}; }}
+            </style></head><body>"""
+        )
+
+        title = "Performance Analysis" if single_mode else "Performance Comparison"
+        html_parts.append(f"<h1>{title} (CONFIG_IFS)</h1>")
+        info = (
+            f"File: {base_csv}"
+            if single_mode
+            else f"Baseline: {base_csv} | Current: {curr_csv}"
+        )
+        html_parts.append(f"<p>{info}</p>")
+
+        # --- Part 1: Total Time Trends (Line Charts) ---
+        html_parts.append("<div class='card'>")
+        html_parts.append("<h2>Total Time Delta (ns) Trend</h2>")
+
+        # Subplots ensure Y-axis can be matched for direct visual comparison
+        fig_line = make_subplots(
+            rows=1,
+            cols=1 if single_mode else 2,
+            horizontal_spacing=0.05,
+            subplot_titles=("Latency",) if single_mode else ("Baseline", "Current"),
+        )
+
+        plot_configs = (
+            [("Data", df_b)] if single_mode else [("Baseline", df_b), ("Current", df_c)]
+        )
+        for i, (name, df) in enumerate(plot_configs, 1):
+            for cat in categories:
+                sub = df[
+                    (df["category"] == cat) & (df["metric_type"] == "total_time_ns")
+                ].sort_values("timestamp")
+                if not sub.empty:
+                    y_val = sub["value"].diff().fillna(0)
+                    fig_line.add_trace(
+                        go.Scatter(
+                            x=sub["timestamp"],
+                            y=y_val,
+                            name=f"{cat} ({name})",
+                            legendgroup=cat,
+                            mode="lines+markers",
+                            line=dict(color=color_map[cat], width=2),
+                            marker=dict(color=color_map[cat], size=6),
+                        ),
+                        row=1,
+                        col=i,
+                    )
+
+        # Sync Y-axes scale for baseline and current plots
+        if not single_mode:
+            fig_line.update_yaxes(matches="y", row=1, col=2)
+        fig_line.update_layout(
+            height=500, template="plotly_white", margin=dict(t=50, b=20)
+        )
+        html_parts.append(fig_line.to_html(full_html=False, include_plotlyjs="cdn"))
+        html_parts.append("</div>")
+
+        html_parts.append("<h2>Detailed Latency Distribution</h2>")
+
+        # --- Part 2: Distribution Heatmaps ---
+        for cat in categories:
+
+            def get_m_and_stats(df):
+                """Process raw metrics into a delta-count matrix for heatmap plotting."""
+                sub = df[
+                    (df["category"] == cat)
+                    & (df["metric_type"].str.startswith("bucket_"))
+                ]
+                if sub.empty:
+                    return pd.DataFrame(), {}
+
+                # Transform to matrix (Buckets vs Time)
+                p = sub.pivot(
+                    index="metric_type", columns="timestamp", values="value"
+                ).sort_index(axis=1)
+                # Sort Y-axis buckets based on numerical time value
+                p = p.reindex(sorted(p.index, key=self.bucket_key))
+                # Calculate incremental change (delta)
+                delta = p.diff(axis=1).fillna(0)
+                return delta, {
+                    "Total": int(delta.values.sum()),
+                    "Peak": int(delta.values.max()),
+                }
+
+            m_b, stats_b = get_m_and_stats(df_b)
+            m_c, stats_c = (None, None) if single_mode else get_m_and_stats(df_c)
+
+            # Determine global max for heatmap color scaling consistency
+            z_vals = [m_b.values.max() if not m_b.empty else 0]
+            if not single_mode:
+                z_vals.append(m_c.values.max() if not m_c.empty else 0)
+            global_max_z = max(z_vals + [1])
+
+            html_parts.append("<div class='card'>")
+            html_parts.append(f"<h3>Category: {cat.upper()}</h3>")
+            html_parts.append("<div class='row-container'>")
+
+            configs = (
+                [("Data", m_b, stats_b)]
+                if single_mode
+                else [("Baseline", m_b, stats_b), ("Current", m_c, stats_c)]
+            )
+            for name, m, stats in configs:
+                html_parts.append("<div class='col'>")
+                if not m.empty:
+                    # Display summary statistics
+                    html_parts.append(f"<h4>{name}</h4>")
+                    html_parts.append("<p>")
+                    for k, v in stats.items():
+                        html_parts.append(f"<b>{k}</b>: {v} ")
+                    html_parts.append("</p>")
+
+                    # Generate Heatmap
+                    fig = go.Figure(
+                        data=go.Heatmap(
+                            z=m.values,
+                            x=m.columns,
+                            y=m.index,
+                            colorscale="Viridis",
+                            zmin=0,
+                            zmax=global_max_z,
+                            colorbar=dict(title="Count", thickness=10, len=0.7),
+                        )
+                    )
+                    fig.update_layout(
+                        height=400,
+                        margin=dict(l=120, r=0, t=10, b=30),
+                        template="plotly_white",
+                    )
+                    html_parts.append(
+                        fig.to_html(full_html=False, include_plotlyjs="cdn")
+                    )
+                else:
+                    html_parts.append(f"<h4>{name}</h4><p>No Data</p>")
+                html_parts.append("</div>")
+            html_parts.append("</div></div>")  # end row-container & card
+
+        # Save the final HTML report
+        with open(output_html, "w") as f:
+            f.writelines(html_parts + ["</body></html>"])
+        print(f"[+] Comparison report generated: {output_html}")
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ sub = parser.add_subparsers(dest="mode")
+
+ # Monitor Mode Arguments
+ p_mon = sub.add_parser("monitor")
+    p_mon.add_argument(
+        "-G", "--cgroup", required=True, help="Cgroup name (e.g. 'docker/cid')"
+    )
+ p_mon.add_argument(
+ "-i", "--interval", type=float, default=1.0, help="Sampling interval (sec)"
+ )
+ p_mon.add_argument(
+ "-d", "--duration", type=int, default=30, help="Sampling duration (sec)"
+ )
+ p_mon.add_argument("-o", "--output", default="capture.csv")
+
+ # Report Mode Arguments
+ p_comp = sub.add_parser("report")
+ p_comp.add_argument("-b", "--base", required=True, help="Baseline CSV file")
+ p_comp.add_argument(
+ "-c", "--curr", default=None, help="Current CSV file (optional, for comparison)"
+ )
+ p_comp.add_argument("-o", "--output", default="report.html")
+
+ args = parser.parse_args()
+ tool = InterferenceTool()
+
+ if args.mode == "monitor":
+ tool.monitor(args.cgroup, args.interval, args.duration, args.output)
+ elif args.mode == "report":
+ tool.report(args.base, args.curr, args.output)
+ else:
+ parser.print_help()
diff --git a/tools/kspect/ifstool.1 b/tools/kspect/ifstool.1
new file mode 100644
index 000000000000..347d54f36bec
--- /dev/null
+++ b/tools/kspect/ifstool.1
@@ -0,0 +1,81 @@
+.TH IFSTOOL 1 "March 2026" "Linux" "User Commands"
+.SH NAME
+ifstool \- Analyze kernel interference statistics (CONFIG_IFS)
+.SH SYNOPSIS
+.B ifstool monitor
+\fB\-G\fR|\fB\-\-cgroup\fR \fICGROUP_ID\fR [\fB\-d\fR|\fB\-\-duration\fR \fISEC\fR] [\fB\-i\fR|\fB\-\-interval\fR \fISEC\fR] [\fB\-o\fR|\fB\-\-output\fR \fICSV\fR]
+.br
+.B ifstool report
+\fB\-b\fR|\fB\-\-base\fR \fICSV\fR [\fB\-c\fR|\fB\-\-curr\fR \fICSV\fR] [\fB\-o\fR|\fB\-\-output\fR \fIHTML\fR]
+
+.SH DESCRIPTION
+.B IFSTOOL
+is a specialized userspace utility designed to facilitate the monitoring and analysis of Interference Statistics (\fBCONFIG_IFS\fR).
+
+The \fBIFS infrastructure\fR is a kernel-level framework providing critical observability into execution jitter (noise) that disrupts task determinism. It quantifies CPU time stolen by kernel activities such as interrupt handling, softirqs, and lock contention. This telemetry is exposed via the \fBinterference.stat\fR control file within the cgroup hierarchy.
+
+\fBIFSTOOL\fR interfaces with this file to export raw metrics into structured CSV data and interactive HTML-based distribution reports.
+
+.SH SETUP
+The host environment must meet the following criteria:
+.IP \[bu] 2
+\fBKernel:\fR Compiled with \fBCONFIG_IFS=y\fR.
+.IP \[bu] 2
+\fBBoot Parameters:\fR \fBcgroup_ifs=1\fR must be added to the kernel command line.
+.IP \[bu] 2
+\fBPython Runtime:\fR Python 3.x with \fBpandas\fR and \fBplotly\fR libraries.
+.IP \[bu] 2
+\fBCgroup Hierarchy:\fR Either v2 unified or v1 (with the \fBcpu\fR subsystem mounted).
+
+.SH COMMANDS
+.SS monitor
+Capture raw interference data from a target cgroup.
+.TP
+\fB\-G, \-\-cgroup\fR \fICGROUP_ID\fR
+Specify the cgroup identifier (e.g., \fIdocker/\fR or \fIsystem.slice\fR).
+.TP
+\fB\-d, \-\-duration\fR \fISEC\fR
+Total collection time in seconds (default: 30).
+.TP
+\fB\-i, \-\-interval\fR \fISEC\fR
+Sampling interval in seconds; supports floating point (default: 1.0).
+
+.SS report
+Transform captured CSV data into an interactive HTML dashboard.
+.TP
+\fB\-b, \-\-base\fR \fICSV\fR
+Primary capture file for analysis.
+.TP
+\fB\-c, \-\-curr\fR \fICSV\fR
+Optional secondary file for differential (comparison) analysis.
+
+.SH HTML REPORT STRUCTURE
+The generated HTML report provides a multi-dimensional view of kernel noise:
+.IP "Total Time Delta Trend" 4
+A time-series line chart illustrating the incremental nanoseconds of interference per category (e.g., irq, spinlock).
+.IP "Latency Heatmaps" 4
+A frequency-domain visualization of the kernel's internal histogram:
+.RS 8
+.IP "X-axis:" 8
+Wall-clock time of the trace.
+.IP "Y-axis:" 8
+Latency magnitude (logarithmic buckets from ns to s).
+.IP "Color Intensity:" 8
+Represents the event count (density) of interference within that specific latency window.
+.RE
+
+.SH EXAMPLES
+Monitor a Docker container for 10 seconds:
+.IP
+.B $ ifstool monitor \-\-cgroup docker/ \-\-duration 10 \-\-interval 1
+.PP
+Generate a single-session deep dive:
+.IP
+.B $ ifstool report \-\-base capture.csv
+.PP
+Generate a differential report between two traces:
+.IP
+.B $ ifstool report \-\-base baseline.csv \-\-curr current.csv
+
+.SH AUTHOR
+Tengda Wu
\ No newline at end of file
diff --git a/tools/kspect/requirements.txt b/tools/kspect/requirements.txt
new file mode 100644
index 000000000000..8e5e376a0e9a
--- /dev/null
+++ b/tools/kspect/requirements.txt
@@ -0,0 +1,2 @@
+pandas==3.*
+plotly==6.*
--
2.34.1
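A note on bucket ordering: the histogram labels in interference.stat sort
lexically, not numerically, which is why ifstool normalizes each label's lower
bound to nanoseconds before sorting (its bucket_key method). A minimal
standalone sketch of that conversion, with illustrative labels:

```python
import re

def bucket_to_ns(label):
    """Convert a bucket label's lower bound, e.g. '[67.10 ms, 134.21 ms)', to ns."""
    m = re.search(r"(\d+\.?\d*)\s*(ns|us|ms|s)", label)
    if not m:
        return 0.0
    # Conversion factors to nanoseconds
    factors = {"ns": 1, "us": 1e3, "ms": 1e6, "s": 1e9}
    return float(m.group(1)) * factors[m.group(2)]

labels = [
    "bucket_[1.05 s, 2.10 s)",
    "bucket_[64 ns, 128 ns)",
    "bucket_[67.10 ms, 134.21 ms)",
]
# A lexical sort would put "1.05 s" first; the numeric key orders ns < ms < s.
print(sorted(labels, key=bucket_to_ns))
```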