Jianfeng Wang (2):
  slub: introduce count_partial_free_approx()
  slub: use count_partial_free_approx() in slab_out_of_memory()

 mm/slub.c | 41 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)
From: Jianfeng Wang <jianfeng.w.wang@oracle.com>
mainline inclusion
from mainline-v6.10-rc1
commit 046f4c69090c120a51aa4767628afa900aac8e28
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IAM1MT
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
-------------------------------------------
When reading "/proc/slabinfo", the kernel needs to report the number of free objects for each kmem_cache. The current implementation uses count_partial() to get it by scanning each kmem_cache_node's partial slab list and summing free objects from every partial slab. This process must hold per-kmem_cache_node spinlock and disable IRQ, and may take a long time. Consequently, it can block slab allocations on other CPUs and cause timeouts for network devices, when the partial list is long. In production, even NMI watchdog can be triggered due to this matter: e.g., for "buffer_head", the number of partial slabs was observed to be ~1M in one kmem_cache_node. This problem was also confirmed by others [1-3].
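For reference only (not part of the change itself), the exact-count path in this struct page based tree looks roughly like the sketch below: every partial slab is visited while the per-node list_lock is held with IRQs disabled, which is what makes a ~1M-slab list so costly.

	/* Sketch of the existing exact counting path (for reference only). */
	static int count_free(struct page *page)
	{
		return page->objects - page->inuse;
	}

	static unsigned long count_partial(struct kmem_cache_node *n,
					   int (*get_count)(struct page *))
	{
		unsigned long flags;
		unsigned long x = 0;
		struct page *page;

		/* The whole partial list is walked with the lock held and IRQs off. */
		spin_lock_irqsave(&n->list_lock, flags);
		list_for_each_entry(page, &n->partial, slab_list)
			x += get_count(page);
		spin_unlock_irqrestore(&n->list_lock, flags);
		return x;
	}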
Iterating a partial list to get the exact count of objects can cause soft lockups for a long list with or without the lock (e.g., if preemption is disabled), and may not be very useful: the object count can change after the lock is released. The approach of maintaining free-object counters requires atomic operations on the fast path [3].
So, the fix is to introduce count_partial_free_approx(). This function can be used for getting the free object count in a kmem_cache_node's partial list. It limits the number of slabs to scan and avoids scanning the whole list by giving an approximation for a long list. Suppose the limit is N. If the list's length is not greater than N, output the exact count by traversing the list; if its length is greater than N, output an approximated count by traversing a subset of the list. The proposed method is to scan N/2 slabs from the list's head and N/2 slabs from the tail. For a partial list with ~280K slabs, benchmarks show that it performs better than just counting from the list's head, after slabs get sorted by kmem_cache_shrink(). Default the limit to 10000, as it produces an approximation within 1% of the exact count for both scenarios. Then, use count_partial_free_approx() in get_slabinfo().
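To illustrate the sampling and scaling step only, here is a minimal, self-contained userspace sketch. It uses a synthetic array of per-slab free counts in place of the kernel's partial list, and a plain 64-bit multiply in place of mult_frac(); names such as approx_free() and MAX_TO_SCAN are illustrative, not the kernel's.

	/*
	 * Minimal userspace sketch of the head+tail sampling and scaling step,
	 * using a synthetic array of per-slab free counts (illustration only,
	 * not the kernel implementation).
	 */
	#include <stdio.h>

	#define MAX_TO_SCAN 10000UL

	static unsigned long approx_free(const unsigned long *free_per_slab,
					 unsigned long nr_partial)
	{
		unsigned long x = 0, scanned = 0, i;

		if (nr_partial <= MAX_TO_SCAN) {
			/* Short list: count exactly. */
			for (i = 0; i < nr_partial; i++)
				x += free_per_slab[i];
			return x;
		}

		/* Sample MAX_TO_SCAN/2 slabs from the head... */
		for (i = 0; i < MAX_TO_SCAN / 2; i++, scanned++)
			x += free_per_slab[i];
		/* ...and MAX_TO_SCAN/2 slabs from the tail... */
		for (i = nr_partial - 1; scanned < MAX_TO_SCAN; i--, scanned++)
			x += free_per_slab[i];
		/*
		 * ...then scale the sampled sum up to the whole list (the kernel
		 * uses mult_frac() and clamps the result to node_nr_objs()).
		 */
		return (unsigned long)((unsigned long long)x * nr_partial / scanned);
	}

	int main(void)
	{
		static unsigned long slabs[280000];
		unsigned long exact = 0, i;

		/* Synthetic skewed list: nearly-full slabs first, mostly-free slabs last. */
		for (i = 0; i < 280000; i++) {
			slabs[i] = (i < 140000) ? 1 : 30;
			exact += slabs[i];
		}
		printf("exact=%lu approx=%lu\n", exact, approx_free(slabs, 280000));
		return 0;
	}

For this synthetic distribution, sampling only the first 10000 slabs would estimate about 280000 free objects against an exact 4340000, while the head+tail sample lands on the exact value; the skewed-case numbers below show the same effect on real lists.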
Benchmarks: Diff = (exact - approximated) / exact
* Normal case (w/o kmem_cache_shrink()):
| MAX_TO_SCAN | Diff (count from head)| Diff (count head+tail)|
| 1000        |        0.43  %        |       1.09  %         |
| 5000        |        0.06  %        |       0.37  %         |
| 10000       |        0.02  %        |       0.16  %         |
| 20000       |        0.009 %        |      -0.003 %         |

* Skewed case (w/ kmem_cache_shrink()):
| MAX_TO_SCAN | Diff (count from head)| Diff (count head+tail)|
| 1000        |       12.46  %        |       6.75  %         |
| 5000        |        5.38  %        |       1.27  %         |
| 10000       |        4.99  %        |       0.22  %         |
| 20000       |        4.86  %        |      -0.06  %         |
[1] https://lore.kernel.org/linux-mm/alpine.DEB.2.21.2003031602460.1537@www.lame...
[2] https://lore.kernel.org/lkml/alpine.DEB.2.22.394.2008071258020.55871@www.lam...
[3] https://lore.kernel.org/lkml/1e01092b-140d-2bab-aeba-321a74a194ee@linux.com/...
Signed-off-by: Jianfeng Wang <jianfeng.w.wang@oracle.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Conflicts:
	mm/slub.c
[Context conflicts, and struct slab isn't introduced, use struct page instead.]
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
---
 mm/slub.c | 39 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index a67bcc57a770..63a99c485b6b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2383,6 +2383,43 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
 {
 	return atomic_long_read(&n->total_objects);
 }
+
+#define MAX_PARTIAL_TO_SCAN	10000
+
+static unsigned long count_partial_free_approx(struct kmem_cache_node *n)
+{
+	unsigned long flags;
+	unsigned long x = 0;
+	struct page *page;
+
+	spin_lock_irqsave(&n->list_lock, flags);
+	if (n->nr_partial <= MAX_PARTIAL_TO_SCAN) {
+		list_for_each_entry(page, &n->partial, slab_list)
+			x += page->objects - page->inuse;
+	} else {
+		/*
+		 * For a long list, approximate the total count of objects in
+		 * it to meet the limit on the number of slabs to scan.
+		 * Scan from both the list's head and tail for better accuracy.
+		 */
+		unsigned long scanned = 0;
+
+		list_for_each_entry(page, &n->partial, slab_list) {
+			x += page->objects - page->inuse;
+			if (++scanned == MAX_PARTIAL_TO_SCAN / 2)
+				break;
+		}
+		list_for_each_entry_reverse(page, &n->partial, slab_list) {
+			x += page->objects - page->inuse;
+			if (++scanned == MAX_PARTIAL_TO_SCAN)
+				break;
+		}
+		x = mult_frac(x, n->nr_partial, scanned);
+		x = min(x, node_nr_objs(n));
+	}
+	spin_unlock_irqrestore(&n->list_lock, flags);
+	return x;
+}
 #endif /* CONFIG_SLUB_DEBUG */
 
 #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
@@ -5938,7 +5975,7 @@ void get_slabinfo(struct kmem_cache *s, struct slabinfo *sinfo)
 	for_each_kmem_cache_node(s, node, n) {
 		nr_slabs += node_nr_slabs(n);
 		nr_objs += node_nr_objs(n);
-		nr_free += count_partial(n, count_free);
+		nr_free += count_partial_free_approx(n);
 	}
 
 	sinfo->active_objs = nr_objs - nr_free;
From: Jianfeng Wang <jianfeng.w.wang@oracle.com>
mainline inclusion
from mainline-v6.10-rc1
commit b3d8a8e870144369fdbcbb1a78878ce98532265a
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IAM1MT
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
-------------------------------------------
slab_out_of_memory() uses count_partial() to get the exact count of free objects for each node. As it may get called in the slab allocation path, count_partial_free_approx() can be used to avoid the risk and overhead of traversing a long partial slab list.
At the same time, show_slab_objects() still uses count_partial(). Thus, slub users can still have the option to access the exact count of objects via sysfs if the overhead is acceptable to them.
Signed-off-by: Jianfeng Wang <jianfeng.w.wang@oracle.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 63a99c485b6b..03b1137df088 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2465,7 +2465,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 		unsigned long nr_objs;
 		unsigned long nr_free;
 
-		nr_free = count_partial(n, count_free);
+		nr_free = count_partial_free_approx(n);
 		nr_slabs = node_nr_slabs(n);
 		nr_objs = node_nr_objs(n);
 
Feedback:
The patch(es) you sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/11105
Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/C...