From: Nhat Pham <nphamcs@gmail.com>
mainline inclusion
from mainline-v6.8-rc1
commit 501a06fe8e4c185bbda371b8cedbdf1b23a633d8
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8ZUJQ
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=501a06fe8e4c185bbda371b8cedbdf1b23a633d8
--------------------------------
During our experiments with zswap, we sometimes observe swap IOs due to occasional zswap store failures and writebacks to swap. These swapping IOs prevent many users who cannot tolerate swapping from adopting zswap to save memory and improve performance where possible.
This patch adds an option to disable this behavior entirely: do not write back to the backing swap device when a zswap store attempt fails, and do not write pages in the zswap pool back to the backing swap device (both when the pool is full and when the new zswap shrinker is called).
This new behavior can be opted in/out on a per-cgroup basis via a new cgroup file. By default, writeback to the swap device is enabled, which is the previous behavior. Initially, writeback is enabled for the root cgroup, and a newly created cgroup inherits the current setting of its parent.
Note that this is subtly different from setting memory.swap.max to 0, as it still allows for pages to be stored in the zswap pool (which itself consumes swap space in its current form).
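For illustration, a minimal usage sketch from the shell (the cgroup names here are made up, and this assumes cgroup v2 is mounted at /sys/fs/cgroup):

	# disable writeback for an IO-sensitive workload
	echo 0 > /sys/fs/cgroup/io-sensitive/memory.zswap.writeback
	# keep the default (writeback enabled) for background work
	echo 1 > /sys/fs/cgroup/background/memory.zswap.writeback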
This patch should be applied on top of the zswap shrinker series:
https://lore.kernel.org/linux-mm/20231130194023.4102148-1-nphamcs@gmail.com/
as it also disables the zswap shrinker, a major source of zswap writebacks.
For the most part, this feature is motivated by internal parties who have already established their opinions regarding swapping - workloads that are highly sensitive to IO, especially those running on servers with really slow disks (for instance, massive but slow HDDs). For these folks, it is impossible to convince them to even entertain zswap if swapping comes as part of the package. Writeback disabling is quite a useful feature in these situations - in a mixed-workload deployment, they can disable writeback for the more IO-sensitive workloads and enable it for other background workloads.
For instance, on a server with an HDD, I allocate memory and populate it with random values (so that zswap stores will always fail), and set memory.high low enough to trigger reclaim. The time it takes to allocate the memory and just read through it a couple of times (doing silly things like computing the average of the values, etc.):
zswap.writeback disabled:
real 0m30.537s
user 0m23.687s
sys 0m6.637s
0 pages swapped in
0 pages swapped out

zswap.writeback enabled:
real 0m45.061s
user 0m24.310s
sys 0m8.892s
712686 pages swapped in
461093 pages swapped out

(the last two lines of each run are from vmstat -s).
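For reference, a rough sketch of the steps above - the cgroup name, the memory.high value, and the alloc-and-read test program are placeholders for illustration, not part of this patch:

	mkdir /sys/fs/cgroup/test
	echo 512M > /sys/fs/cgroup/test/memory.high
	echo 0 > /sys/fs/cgroup/test/memory.zswap.writeback
	echo $$ > /sys/fs/cgroup/test/cgroup.procs
	time ./alloc-and-read    # allocate, fill with random data, read back
	vmstat -s | grep -E 'pages swapped (in|out)'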
[nphamcs@gmail.com: add a comment about recurring zswap store failures leading to reclaim inefficiency]
Link: https://lkml.kernel.org/r/20231221005725.3446672-1-nphamcs@gmail.com
Link: https://lkml.kernel.org/r/20231207192406.3809579-1-nphamcs@gmail.com
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: David Heidelberg <david@ixit.cz>
Cc: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Seth Jennings <sjenning@redhat.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vitaly Wool <vitaly.wool@konsulko.com>
Cc: Zefan Li <lizefan.x@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
---
 Documentation/admin-guide/cgroup-v2.rst | 15 ++++++++++
 Documentation/admin-guide/mm/zswap.rst  | 10 +++++++
 include/linux/memcontrol.h              | 12 ++++++++
 include/linux/zswap.h                   |  5 ++++
 mm/memcontrol.c                         | 38 +++++++++++++++++++++++++
 mm/page_io.c                            |  6 ++++
 mm/shmem.c                              |  3 +-
 mm/zswap.c                              | 13 +++++++--
 8 files changed, 98 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index e998aa071cedf..45a8b67433528 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1645,6 +1645,21 @@ PAGE_SIZE multiple when read back.
 	limit, it will refuse to take any more stores before existing
 	entries fault back in or are written out to disk.
+  memory.zswap.writeback
+	A read-write single value file. The default value is "1". The
+	initial value of the root cgroup is 1, and when a new cgroup is
+	created, it inherits the current value of its parent.
+
+	When this is set to 0, all swapping attempts to swapping devices
+	are disabled. This includes both zswap writebacks, and swapping due
+	to zswap store failures. If the zswap store failures are recurring
+	(e.g. if the pages are incompressible), users can observe
+	reclaim inefficiency after disabling writeback (because the same
+	pages might be rejected again and again).
+
+	Note that this is subtly different from setting memory.swap.max to
+	0, as it still allows for pages to be written to the zswap pool.
+
   memory.pressure
 	A read-only nested-keyed file.
diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 62fc244ec7020..b42132969e315 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -153,6 +153,16 @@ attribute, e. g.::
Setting this parameter to 100 will disable the hysteresis.
+Some users cannot tolerate the swapping that comes with zswap store failures
+and zswap writebacks. Swapping can be disabled entirely (without disabling
+zswap itself) on a per-cgroup basis as follows::
+
+	echo 0 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.writeback
+
+Note that if the store failures are recurring (e.g. if the pages are
+incompressible), users can observe reclaim inefficiency after disabling
+writeback (because the same pages might be rejected again and again).
+
 When there is a sizable amount of cold memory residing in the zswap pool, it
 can be advantageous to proactively write these cold pages to swap and reclaim
 the memory for other use cases. By default, the zswap shrinker is disabled.
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 760302ac08f77..d9b910fac70b6 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -241,6 +241,12 @@ struct mem_cgroup {
 #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
 	unsigned long zswap_max;
+
+	/*
+	 * Prevent pages from this memcg from being written back from zswap to
+	 * swap, and from being swapped out on zswap store failures.
+	 */
+	bool zswap_writeback;
 #endif
 	unsigned long soft_limit;
@@ -1996,6 +2002,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
 bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
 void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
 void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
+bool mem_cgroup_zswap_writeback_enabled(struct mem_cgroup *memcg);
 #else
 static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
 {
@@ -2009,6 +2016,11 @@ static inline void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg,
 					     size_t size)
 {
 }
+
+static inline bool mem_cgroup_zswap_writeback_enabled(struct mem_cgroup *memcg)
+{
+	/* if zswap is disabled, do not block pages going to the swapping device */
+	return true;
+}
 #endif
 #endif /* _LINUX_MEMCONTROL_H */
diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index 08c240e16a01f..f0f6d8a04c812 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -35,6 +35,7 @@ void zswap_swapoff(int type);
 void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
 void zswap_lruvec_state_init(struct lruvec *lruvec);
 void zswap_page_swapin(struct page *page);
+bool is_zswap_enabled(void);
 #else
 struct zswap_lruvec_state {};
@@ -55,6 +56,10 @@ static inline void zswap_swapoff(int type) {}
 static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
 static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
 static inline void zswap_page_swapin(struct page *page) {}
+static inline bool is_zswap_enabled(void)
+{
+	return false;
+}
 #endif
 #endif /* _LINUX_ZSWAP_H */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9d8092048603c..8e3dd875a4aef 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6437,6 +6437,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 	WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
 #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
 	memcg->zswap_max = PAGE_COUNTER_MAX;
+	WRITE_ONCE(memcg->zswap_writeback,
+		   !parent || READ_ONCE(parent->zswap_writeback));
 #endif
 #ifdef CONFIG_MEMCG_V1_RECLAIM
 	memcg->high_async_ratio = HIGH_ASYNC_RATIO_BASE;
@@ -9052,6 +9054,12 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
 	rcu_read_unlock();
 }
+bool mem_cgroup_zswap_writeback_enabled(struct mem_cgroup *memcg)
+{
+	/* if zswap is disabled, do not block pages going to the swapping device */
+	return !is_zswap_enabled() || !memcg || READ_ONCE(memcg->zswap_writeback);
+}
+
 static u64 zswap_current_read(struct cgroup_subsys_state *css,
 			      struct cftype *cft)
 {
@@ -9082,6 +9090,31 @@ static ssize_t zswap_max_write(struct kernfs_open_file *of,
 	return nbytes;
 }
+static int zswap_writeback_show(struct seq_file *m, void *v)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
+
+	seq_printf(m, "%d\n", READ_ONCE(memcg->zswap_writeback));
+	return 0;
+}
+
+static ssize_t zswap_writeback_write(struct kernfs_open_file *of,
+				char *buf, size_t nbytes, loff_t off)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+	int zswap_writeback;
+	ssize_t parse_ret = kstrtoint(strstrip(buf), 0, &zswap_writeback);
+
+	if (parse_ret)
+		return parse_ret;
+
+	if (zswap_writeback != 0 && zswap_writeback != 1)
+		return -EINVAL;
+
+	WRITE_ONCE(memcg->zswap_writeback, zswap_writeback);
+	return nbytes;
+}
+
 static struct cftype zswap_files[] = {
 	{
 		.name = "zswap.current",
@@ -9094,6 +9127,11 @@ static struct cftype zswap_files[] = {
 		.seq_show = zswap_max_show,
 		.write = zswap_max_write,
 	},
+	{
+		.name = "zswap.writeback",
+		.seq_show = zswap_writeback_show,
+		.write = zswap_writeback_write,
+	},
 	{ }	/* terminate */
 };
 #endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
diff --git a/mm/page_io.c b/mm/page_io.c
index af65db0be7310..d7defb114b344 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -201,6 +201,12 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 		folio_end_writeback(folio);
 		return 0;
 	}
+
+	if (!mem_cgroup_zswap_writeback_enabled(folio_memcg(folio))) {
+		folio_mark_dirty(folio);
+		return AOP_WRITEPAGE_ACTIVATE;
+	}
+
 	__swap_writepage(&folio->page, wbc);
 	return 0;
 }
diff --git a/mm/shmem.c b/mm/shmem.c
index de9a884b10e8f..d4ecbd131c01b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1521,8 +1521,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		mutex_unlock(&shmem_swaplist_mutex);
 		BUG_ON(folio_mapped(folio));
-		swap_writepage(&folio->page, wbc);
-		return 0;
+		return swap_writepage(&folio->page, wbc);
 	}
 	mutex_unlock(&shmem_swaplist_mutex);
diff --git a/mm/zswap.c b/mm/zswap.c
index 2146f2dad4cd7..894ccae11a395 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -150,6 +150,11 @@ static bool zswap_shrinker_enabled = IS_ENABLED(
 		CONFIG_ZSWAP_SHRINKER_DEFAULT_ON);
 module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+bool is_zswap_enabled(void)
+{
+	return zswap_enabled;
+}
+
 /*********************************
 * data structures
 **********************************/
@@ -593,7 +598,8 @@ static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
 	struct zswap_pool *pool = container_of(shrinker, struct zswap_pool,
 					       shrinker);
 	bool encountered_page_in_swapcache = false;
-	if (!zswap_shrinker_enabled) {
+	if (!zswap_shrinker_enabled ||
+	    !mem_cgroup_zswap_writeback_enabled(sc->memcg)) {
 		sc->nr_scanned = 0;
 		return SHRINK_STOP;
 	}
@@ -634,7 +640,7 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
 	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
 	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
-	if (!zswap_shrinker_enabled)
+	if (!zswap_shrinker_enabled || !mem_cgroup_zswap_writeback_enabled(memcg))
 		return 0;
 #ifdef CONFIG_MEMCG_KMEM
@@ -950,6 +956,9 @@ static int shrink_memcg(struct mem_cgroup *memcg)
 	struct zswap_pool *pool;
 	int nid, shrunk = 0;
+	if (!mem_cgroup_zswap_writeback_enabled(memcg))
+		return -EINVAL;
+
 	/*
 	 * Skip zombies because their LRUs are reparented and we would be
 	 * reclaiming from the parent instead of the dead memcg.