From: Vlastimil Babka <vbabka@suse.cz>
mainline inclusion
from linux-next
commit 0b8efcb0bf057dc0203a022334d5934867a6c212
category: bugfix
bugzilla: 175105
CVE: NA
-------------------------------------------------
drop_slab_node() is called as part of the echo 2 > /proc/sys/vm/drop_caches operation. It iterates over all memcgs and calls shrink_slab(), which in turn iterates over all slab shrinkers. Freed objects are counted, and as long as the total number of freed objects from all memcgs and shrinkers is higher than 10, drop_slab_node() loops for another full memcgs*shrinkers iteration.
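For orientation, the loop before this patch looks roughly as follows (a simplified sketch of the mainline code; the counts bookkeeping that this tree carries for its pagecache-limit feature, visible in the diff below, is omitted):

void drop_slab_node(int nid)
{
	unsigned long freed;

	do {
		struct mem_cgroup *memcg = NULL;

		/* added by commit 069c411de40a so a signal can stop us */
		if (fatal_signal_pending(current))
			return;

		freed = 0;
		memcg = mem_cgroup_iter(NULL, NULL, NULL);
		do {
			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
	} while (freed > 10);	/* the arbitrary threshold at issue */
}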
This arbitrary constant threshold of 10 can result in an effectively infinite loop on a system with a large number of memcgs and/or parallel activity that allocates new objects. This has been reported previously by Chunxin Zang [1] and recently by our customer.
The previous report [1] resulted in commit 069c411de40a ("mm/vmscan: fix infinite loop in drop_slab_node"), which added a check for pending signals, allowing the user to terminate the command writing to drop_caches. At the time it was also considered to make the threshold grow with each iteration to guarantee termination, but such a patch had not been formally proposed yet.
This patch implements the dynamically growing threshold. In the first iteration it is enough to free a single object to continue, and the threshold then effectively doubles with each iteration. Our customer's feedback was positive.
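Concretely, the new exit condition is (freed >> shift++) > 1 (including the fix-up described below that raised the comparison from "> 0" to "> 1"), so each successive iteration needs twice as many freed objects to keep looping. A standalone userspace sketch of the arithmetic, not kernel code:

#include <stdio.h>

int main(void)
{
	/* (freed >> shift) > 1 holds iff freed >= 2UL << shift, so the
	 * effective threshold is 2, 4, 8, ... per iteration. */
	for (int shift = 0; shift < 8; shift++)
		printf("iteration %d: continue only if freed >= %lu\n",
		       shift + 1, 2UL << shift);
	return 0;
}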
There is always a risk that on some system this change will cause a previously terminating drop_caches operation to terminate sooner and free fewer objects. Ideally the semantics would guarantee freeing all freeable objects that existed at the moment the operation started, while not looping forever over newly allocated objects, but that is not feasible to track. With the less ideal, threshold-based solution, the termination guarantee is arguably more important than the exhaustiveness guarantee. If there are reports of a large regression with respect to being exhaustive, we can tune how fast the threshold grows.
[1] https://lore.kernel.org/lkml/20200909152047.27905-1-zangchunxin@bytedance.co...
Matthew reports [2] that if we free enough objects, we can eventually right-shift by BITS_PER_LONG, which is undefined behavior. Raise the threshold from 0 to 1, which means we will shift only up to BITS_PER_LONG-1.
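A sketch of why the raised threshold bounds the shift, assuming BITS_PER_LONG is the bit width of unsigned long:

/*
 * The exit test evaluates freed >> shift and then increments shift, so
 * the undefined freed >> BITS_PER_LONG can only be evaluated if the
 * preceding test (freed >> (BITS_PER_LONG - 1)) > threshold succeeds.
 *
 * With threshold 0, a freed value with the top bit set makes that
 * expression 1 > 0, so the loop continues into the undefined shift.
 *
 * With threshold 1, freed >> (BITS_PER_LONG - 1) is at most 1 for any
 * unsigned long and thus never > 1, so the loop exits first and the
 * largest shift ever evaluated is BITS_PER_LONG - 1.
 */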
[2] https://lore.kernel.org/linux-mm/YSTDnqKgQLvziyQI@casper.infradead.org/
Link: https://lkml.kernel.org/r/20210818152239.25502-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Chunxin Zang <zangchunxin@bytedance.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Chris Down <chris@chrisdown.name>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Conflicts:
	mm/vmscan.c
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/vmscan.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b5f5a366d155f..b381c891ca8fd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -738,6 +738,7 @@ void drop_slab_node(int nid)
 {
 	unsigned long freed;
 	unsigned int counts = 0;
+	int shift = 0;
 
 	do {
 		struct mem_cgroup *memcg = NULL;
@@ -760,7 +761,7 @@ void drop_slab_node(int nid)
 				return;
 			}
 		}
-	} while (freed > 10);
+	} while ((freed >> shift++) > 1);
 }
 
 void drop_slab(void)