From: David Hildenbrand <david@redhat.com>
mainline inclusion
from mainline-v5.10-rc1
commit 293ffa5ebb9c08a77d8de458166c31b4d7b0cd65
category: feature
bugzilla: 182882
CVE: NA
-----------------------------------------------
Whenever we move pages between freelists via move_to_free_list()/
move_freepages_block(), we don't actually touch the pages:

1. Page isolation doesn't actually touch the pages, it simply isolates
   pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
   When undoing isolation, we move the pages back to the target list.
2. Page stealing (steal_suitable_fallback()) moves free pages directly
   between lists without touching them.
3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
   free pages directly between freelists without touching them.
We already place pages to the tail of the freelists when undoing
isolation via __putback_isolated_page(); let's do it in any case (e.g.,
if order <= pageblock_order) and document the behavior. To simplify,
let's move the pages to the tail for all move_to_free_list()/
move_freepages_block() users.
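To illustrate the list semantics this relies on, here is a minimal,
stand-alone userspace sketch (an editor's illustration, not part of the
patch): the list helpers mirror include/linux/list.h, and "struct page"
is reduced to a pfn. The allocator is modeled as popping from the head
of the freelist.

  #include <stdio.h>
  #include <stddef.h>

  struct list_head { struct list_head *prev, *next; };

  static void INIT_LIST_HEAD(struct list_head *h) { h->prev = h->next = h; }

  static void __list_add(struct list_head *new, struct list_head *prev,
                         struct list_head *next)
  {
          next->prev = new;
          new->next = next;
          new->prev = prev;
          prev->next = new;
  }

  static void list_del(struct list_head *e)
  {
          e->prev->next = e->next;
          e->next->prev = e->prev;
  }

  static void list_add_tail(struct list_head *new, struct list_head *h)
  {
          __list_add(new, h->prev, h);
  }

  /* Re-insert @e at the tail of @h: it becomes the last allocation
   * candidate. */
  static void list_move_tail(struct list_head *e, struct list_head *h)
  {
          list_del(e);
          __list_add(e, h->prev, h);
  }

  struct page { long pfn; struct list_head lru; };

  int main(void)
  {
          struct list_head free_list, isolate_list;
          struct page hot = { .pfn = 1 }, a = { .pfn = 2 }, b = { .pfn = 3 };
          struct list_head *pos;

          INIT_LIST_HEAD(&free_list);
          INIT_LIST_HEAD(&isolate_list);

          list_add_tail(&hot.lru, &free_list);  /* already free, cache-hot */
          list_add_tail(&a.lru, &isolate_list); /* isolated, never touched */
          list_add_tail(&b.lru, &isolate_list);

          /* Undo isolation: with this patch, pages go to the freelist
           * tail. */
          list_move_tail(&a.lru, &free_list);
          list_move_tail(&b.lru, &free_list);

          /* Allocation pops from the head, so "hot" is handed out first. */
          for (pos = free_list.next; pos != &free_list; pos = pos->next) {
                  struct page *p = (void *)((char *)pos -
                                            offsetof(struct page, lru));
                  printf("pfn %ld\n", p->pfn); /* prints 1, 2, 3 */
          }
          return 0;
  }

The pre-patch list_move() would insert at the head instead, printing
3, 2, 1: the moved pages would immediately be the next allocation
candidates again.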
In case 2, the target list is empty, so there should be no change.

In case 3, we might observe a change; however, highatomic is more
concerned about allocations succeeding than about cache hotness - if we
ever realize this change degrades a workload, we can special-case this
instance and add a proper comment.
This change results in all pages onlined via online_pages() being
placed at the tail of the freelist.
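The resulting allocation order can be modeled in a few lines (again an
editor's toy model, not kernel code: the freelist is a FIFO array and
allocation pops from its head):

  #include <stdio.h>

  int main(void)
  {
          long freelist[8];
          int head = 0, tail = 0;

          /* Pages that were already free in the zone (possibly
           * cache-hot). */
          for (long pfn = 0; pfn < 3; pfn++)
                  freelist[tail++] = pfn;

          /* online_pages(): freshly onlined pfns now land at the tail. */
          for (long pfn = 1000; pfn < 1003; pfn++)
                  freelist[tail++] = pfn;

          /* Allocations pop from the head: the old pages go out first,
           * the just-onlined memory is the last to be considered. */
          while (head < tail)
                  printf("allocated pfn %ld\n", freelist[head++]);
          return 0;
  }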
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Link: https://lkml.kernel.org/r/20201005121534.15649-4-david@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
	mm/page_alloc.c
[Peng Liu: cherry-pick from 293ffa5ebb9c08a77d8de458166c31b4d7b0cd65]
Signed-off-by: Peng Liu <liupeng256@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/page_alloc.c     | 12 +++++++++---
 mm/page_isolation.c |  5 +++++
 2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a4806709e441f..9baa829f6a29c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2122,7 +2122,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
 #endif
 
 /*
- * Move the free pages in a range to the free lists of the requested type.
+ * Move the free pages in a range to the freelist tail of the requested type.
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
@@ -2174,8 +2174,14 @@ static int move_freepages(struct zone *zone,
 		}
 
 		order = page_order(page);
-		list_move(&page->lru,
-			  &zone->free_area[order].free_list[migratetype]);
+		/*
+		 * Used for pages which are on another list. Move the pages to
+		 * the tail of the list - so the moved pages won't immediately
+		 * be considered for allocation again (e.g., optimization for
+		 * memory onlining).
+		 */
+		list_move_tail(&page->lru,
+			       &zone->free_area[order].free_list[migratetype]);
 		page += 1 << order;
 		pages_moved += 1 << order;
 	}
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 88b635279e032..4639ef78fb151 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -128,6 +128,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * If we isolate freepage with more than pageblock_order, there
 	 * should be no freepage in the range, so we could avoid costly
 	 * pageblock scanning for freepage moving.
+	 *
+	 * We didn't actually touch any of the isolated pages, so place them
+	 * to the tail of the freelist. This is an optimization for memory
+	 * onlining - just onlined memory won't immediately be considered for
+	 * allocation.
 	 */
 	if (!isolated_page) {
 		nr_pages = move_freepages_block(zone, page, migratetype, NULL);