Zhang Qilong (3):
  mm: Replace deferrable timer with delay timer for shrink worker
  mm: Move vm_cache_limit_mbytes check to page_cache_over_limit()
  mm: Add page cache limit check before queueing shrink worker

 mm/page_cache_limit.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

--
2.43.0
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ID9H69

--------------------------------

When the system is idle, a deferrable timer may expire at an
unpredictable time. If the system has many clean cached pages at that
point, the shrink worker cannot be queued at the expected time. Replace
the deferrable timer with a delayed timer for the shrink worker to
avoid this issue.

Fixes: 621647ce254f ("mm: support periodical memory reclaim")
Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
---
 mm/page_cache_limit.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_cache_limit.c b/mm/page_cache_limit.c
index 1ab00225f8ac..a7500f13b46b 100644
--- a/mm/page_cache_limit.c
+++ b/mm/page_cache_limit.c
@@ -16,11 +16,11 @@ static int vm_cache_reclaim_weight __read_mostly = 1;
 static int vm_cache_reclaim_weight_max = 100;
 static int vm_cache_reclaim_enable = 1;
 static unsigned long vm_cache_limit_mbytes __read_mostly;

 static void shrink_shepherd(struct work_struct *w);
-static DECLARE_DEFERRABLE_WORK(shepherd, shrink_shepherd);
+static DECLARE_DELAYED_WORK(shepherd, shrink_shepherd);
 static struct work_struct vmscan_works[MAX_NUMNODES];

 static bool should_periodical_reclaim(void)
 {
 	return vm_cache_reclaim_s && vm_cache_reclaim_enable;
--
2.43.0
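The distinction this patch relies on: a deferrable work item is backed by a deferrable timer, which an idle CPU is allowed to leave pending until it wakes up for some other reason, whereas a delayed work item uses a regular timer that fires at the programmed time. A minimal sketch of the declaration side of the change (kernel-style fragment for illustration only, not a compilable excerpt):

```c
#include <linux/workqueue.h>

static void shrink_shepherd(struct work_struct *w);

/* Before the patch:
 *   static DECLARE_DEFERRABLE_WORK(shepherd, shrink_shepherd);
 * A deferrable work item rides a deferrable timer, which an idle CPU
 * may ignore until it wakes for another reason -- so on an idle system
 * the shrink worker could be postponed for an unbounded time.
 *
 * After the patch: a regular timer that fires at the programmed time
 * even while the CPU is idle, at the cost of one extra wakeup.
 */
static DECLARE_DELAYED_WORK(shepherd, shrink_shepherd);
```

The trade-off is the usual one for deferrable timers: they save idle-state wakeups, but here the periodic reclaim deadline matters more than the power saving.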
hulk inclusion
category: cleanup
bugzilla: https://gitee.com/openeuler/kernel/issues/ID9H69

--------------------------------

Move the vm_cache_limit_mbytes check from should_reclaim_page_cache()
into page_cache_over_limit(), and call should_periodical_reclaim()
directly in the page cache limit handler.

Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
---
 mm/page_cache_limit.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/mm/page_cache_limit.c b/mm/page_cache_limit.c
index a7500f13b46b..aac6e265a3b4 100644
--- a/mm/page_cache_limit.c
+++ b/mm/page_cache_limit.c
@@ -36,30 +36,22 @@ static unsigned long node_reclaim_num(void)
 static bool page_cache_over_limit(void)
 {
 	unsigned long lru_file;
 	unsigned long limit;

+	if (!vm_cache_limit_mbytes)
+		return false;
+
 	limit = vm_cache_limit_mbytes << (20 - PAGE_SHIFT);
 	lru_file = global_node_page_state(NR_ACTIVE_FILE) +
 		   global_node_page_state(NR_INACTIVE_FILE);
 	if (lru_file > limit)
 		return true;

 	return false;
 }

-static bool should_reclaim_page_cache(void)
-{
-	if (!should_periodical_reclaim())
-		return false;
-
-	if (!vm_cache_limit_mbytes)
-		return false;
-
-	return true;
-}
-
 int cache_reclaim_enable_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	int ret;
@@ -108,11 +100,11 @@ int cache_limit_mbytes_sysctl_handler(struct ctl_table *table, int write,
 		vm_cache_limit_mbytes = origin_mbytes;
 		return -EINVAL;
 	}

 	if (write) {
-		while (should_reclaim_page_cache() && page_cache_over_limit() &&
+		while (should_periodical_reclaim() && page_cache_over_limit() &&
 				nr_retries--) {
 			if (signal_pending(current))
 				return -EINTR;

 			shrink_memory(node_reclaim_num(), false);
--
2.43.0
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ID9H69

--------------------------------

Add a page cache limit check before queueing the shrink worker, so that
queueing can be avoided when page_cache_over_limit() returns false.

Fixes: 621647ce254f ("mm: support periodical memory reclaim")
Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
---
 mm/page_cache_limit.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/page_cache_limit.c b/mm/page_cache_limit.c
index aac6e265a3b4..677ead24d27f 100644
--- a/mm/page_cache_limit.c
+++ b/mm/page_cache_limit.c
@@ -119,13 +119,15 @@ static void shrink_shepherd(struct work_struct *w)
 	int node;

 	if (!should_periodical_reclaim())
 		return;

-	for_each_online_node(node) {
-		if (!work_pending(&vmscan_works[node]))
-			queue_work_node(node, system_unbound_wq, &vmscan_works[node]);
+	if (page_cache_over_limit())
+		for_each_online_node(node) {
+			if (!work_pending(&vmscan_works[node]))
+				queue_work_node(node, system_unbound_wq,
+						&vmscan_works[node]);
 	}

 	queue_delayed_work(system_unbound_wq, &shepherd,
 		round_jiffies_relative((unsigned long)vm_cache_reclaim_s * HZ));
 }
--
2.43.0
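Taken together, the three patches leave shrink_shepherd() reading roughly as follows (reconstructed here from the hunks above for readability; an illustrative fragment, not a compiled excerpt):

```c
/* Reconstructed from the diffs above -- illustrative only. */
static void shrink_shepherd(struct work_struct *w)
{
	int node;

	if (!should_periodical_reclaim())
		return;

	/* Patch 3: skip queueing the per-node reclaim workers entirely
	 * while the file LRU is still under vm_cache_limit_mbytes
	 * (patch 2 folded the zero-limit check into this helper).
	 */
	if (page_cache_over_limit())
		for_each_online_node(node) {
			if (!work_pending(&vmscan_works[node]))
				queue_work_node(node, system_unbound_wq,
						&vmscan_works[node]);
	}

	/* Patch 1: "shepherd" is now DECLARE_DELAYED_WORK, so this
	 * re-arm fires on schedule even when the system is idle.
	 */
	queue_delayed_work(system_unbound_wq, &shepherd,
		round_jiffies_relative((unsigned long)vm_cache_reclaim_s * HZ));
}
```

Note that the shepherd still re-arms itself unconditionally; only the per-node worker queueing is gated on the limit check.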
FeedBack: The patch(es) which you have sent to kernel@openeuler.org mailing list has been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/19392
Mailing list address: https://mailweb.openeuler.org/archives/list/kernel@openeuler.org/message/PU3...
participants (2)
- patchwork bot
- Zhang Qilong