From: David Hildenbrand <david@redhat.com>
mainline inclusion
from mainline-v5.10-rc1
commit 47b6a24a23825ae7b33ff11396980da7c353843d
category: feature
bugzilla: 182882
CVE: NA
-----------------------------------------------
__putback_isolated_page() already documents that pages will be placed to the tail of the freelist - this is, however, not the case for "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be the case for all existing users.
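To make the head-vs-tail distinction concrete, here is a minimal, self-contained userspace sketch (not kernel code; "struct fake_page", "freelist", "free_page_to" and "alloc_one" are invented names for illustration only): freeing a page to the head of the list means the very next allocation hands it out again, while freeing it to the tail keeps it unused for as long as possible.

/*
 * Minimal userspace sketch (not kernel code): models why placing a freed
 * page at the head of a freelist makes it the next page to be allocated,
 * while tail placement (the FPI_TO_TAIL idea) keeps it unused longest.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	int id;
	struct fake_page *prev, *next;
};

/* circular doubly-linked list with a sentinel, similar to the kernel's list_head */
static struct fake_page freelist = { .prev = &freelist, .next = &freelist };

static void free_page_to(struct fake_page *p, bool to_tail)
{
	/* head insert goes right after the sentinel, tail insert right before it */
	struct fake_page *at = to_tail ? freelist.prev : &freelist;

	p->prev = at;
	p->next = at->next;
	at->next->prev = p;
	at->next = p;
}

static struct fake_page *alloc_one(void)
{
	struct fake_page *p = freelist.next;	/* allocation always pops the head */

	if (p == &freelist)
		return NULL;
	p->prev->next = p->next;
	p->next->prev = p->prev;
	return p;
}

int main(void)
{
	struct fake_page a = { .id = 1 }, b = { .id = 2 };

	free_page_to(&a, false);	/* head placement: reused immediately */
	free_page_to(&b, true);		/* tail placement: handed out last */
	printf("next allocation returns page %d\n", alloc_one()->id);	/* prints 1 */
	return 0;
}

In the kernel, __free_one_page() makes the same distinction via list_add() vs. list_add_tail() on the per-order free_list, which is what the FPI_TO_TAIL hunk below selects.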
This change affects two users:
- free page reporting
- page isolation, when undoing the isolation (including memory onlining).
This behavior is desirable for pages that haven't really been touched lately, so exactly the two users that don't actually read/write page content, but rather move untouched pages.
The new behavior is especially desirable for memory onlining, where we allow allocation of newly onlined pages via undo_isolate_page_range() in online_pages(). Right now, we always place them to the head of the freelist, resulting in undesirable behavior: Assume we add individual memory chunks via add_memory() and online them right away to the NORMAL zone. We create a dependency chain of unmovable allocations, e.g., via the memmap. The memmap of the next chunk will be placed onto previous chunks - if the last block cannot get offlined+removed, all dependent ones cannot get offlined+removed. While this can already be observed with individual DIMMs, it's more of an issue for virtio-mem (and I suspect also ppc DLPAR).
Document that this should only be used for optimizations, and no code should rely on this behavior for correctness (if the order of the freelists ever changes).
We won't care about page shuffling: memory onlining already properly shuffles after onlining. Free page reporting doesn't care about physically contiguous ranges, and there are already cases where page isolation will simply move (physically close) free pages to (currently) the head of the freelists via move_freepages_block() instead of shuffling. If this ever becomes relevant, we should shuffle the whole zone when undoing isolation of larger ranges, and after free_contig_range().
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Link: https://lkml.kernel.org/r/20201005121534.15649-3-david@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
	mm/page_alloc.c
[Peng Liu: cherry-pick from 47b6a24a23825ae7b33ff11396980da7c353843d]
Signed-off-by: Peng Liu <liupeng256@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/page_alloc.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4a15b87b7f89e..a4806709e441f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -90,6 +90,18 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_SKIP_REPORT_NOTIFY	((__force fpi_t)BIT(0))
 
+/*
+ * Place the (possibly merged) page to the tail of the freelist. Will ignore
+ * page shuffling (relevant code - e.g., memory onlining - is expected to
+ * shuffle the whole zone).
+ *
+ * Note: No code should rely on this flag for correctness - it's purely
+ *       to allow for optimizations when handing back either fresh pages
+ *       (memory onlining) or untouched pages (page isolation, free page
+ *       reporting).
+ */
+#define FPI_TO_TAIL		((__force fpi_t)BIT(1))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -889,7 +901,11 @@ static inline void __free_one_page(struct page *page,
 
 done_merging:
 	set_page_order(page, order);
-
+	if (fpi_flags & FPI_TO_TAIL) {
+		list_add_tail(&page->lru,
+			      &zone->free_area[order].free_list[migratetype]);
+		goto out;
+	}
 	/*
 	 * If this is not the largest possible page, check if the buddy
 	 * of the next-highest order is free. If it is, it's possible
@@ -3026,7 +3042,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
-			FPI_SKIP_REPORT_NOTIFY);
+			FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
 }
 
 /*