hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8YWPT
CVE: NA
--------------------------------
With the dynamic hugetlb feature, some memory is isolated in the dynamic pool. When compacting memory, the kcompactd thread scans all memory; although some pages belong to the dynamic pool, kcompactd still tries to migrate them. After migration, these pages are freed to the dynamic pool rather than to the buddy system, which decreases the number of free pages in the buddy system.
Since it is unnecessary to compact the memory in the dynamic pool, skip migrating it to fix the problem.
The same problem also exists in alloc_contig_range(), offline_pages() and NUMA balancing. Apply the same skip in these three scenarios.
In addition, we have to consider the migration of hugepages: if a hugepage is from the dynamic pool, we should not allow it to be migrated.
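Every call site below uses the same pattern: test the page against the dynamic pool before isolating or allocating, and bail out if it belongs there. For illustration only, here is a minimal, self-contained user-space sketch of that skip pattern; every name in it is a made-up stand-in (the real page_belong_to_dynamic_hugetlb() is provided by the dhugetlb feature and operates on actual struct page bookkeeping):

	#include <stdbool.h>
	#include <stdio.h>

	/* Toy page: one flag stands in for the real pool bookkeeping. */
	struct page {
		bool in_dynamic_pool;
	};

	/* Stand-in for the real page_belong_to_dynamic_hugetlb(). */
	static bool page_belong_to_dynamic_hugetlb(struct page *page)
	{
		return page->in_dynamic_pool;
	}

	/*
	 * Model of a migration scan: dynamic-pool pages are skipped, so
	 * migration never frees their memory outside the buddy system.
	 */
	static void scan_for_migration(struct page *pages, int n)
	{
		for (int i = 0; i < n; i++) {
			if (page_belong_to_dynamic_hugetlb(&pages[i]))
				continue;	/* the fix: leave pool pages alone */
			printf("page %d: candidate for migration\n", i);
		}
	}

	int main(void)
	{
		struct page pages[3] = { {false}, {true}, {false} };

		scan_for_migration(pages, 3);
		return 0;
	}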
Fixes: 0bc0d0d57eda ("dhugetlb: backport dynamic hugetlb feature")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 include/linux/migrate.h |  6 +++++-
 mm/compaction.c         |  3 +++
 mm/mempolicy.c          | 10 ++++++++--
 mm/migrate.c            |  3 +++
 mm/page_isolation.c     |  3 ++-
 5 files changed, 21 insertions(+), 4 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f2b4abbca55e..dc1df7b085cd 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -38,9 +38,13 @@ static inline struct page *new_page_nodemask(struct page *page,
 	unsigned int order = 0;
 	struct page *new_page = NULL;
 
-	if (PageHuge(page))
+	if (PageHuge(page)) {
+		if (page_belong_to_dynamic_hugetlb(page))
+			return NULL;
+
 		return alloc_huge_page_nodemask(page_hstate(compound_head(page)),
 				preferred_nid, nodemask);
+	}
 
 	if (PageTransHuge(page)) {
 		gfp_mask |= GFP_TRANSHUGE;
diff --git a/mm/compaction.c b/mm/compaction.c
index 1d991e443322..f45a057a0e64 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1271,6 +1271,9 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		if (!page)
 			continue;
 
+		if (page_belong_to_dynamic_hugetlb(page))
+			continue;
+
 		/* If isolation recently failed, do not retry */
 		if (!isolation_suitable(cc, page))
 			continue;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 701988dc02f6..e8c82f3235e2 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1040,10 +1040,13 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 /* page allocation callback for NUMA node migration */
 struct page *alloc_new_node_page(struct page *page, unsigned long node)
 {
-	if (PageHuge(page))
+	if (PageHuge(page)) {
+		if (page_belong_to_dynamic_hugetlb(page))
+			return NULL;
+
 		return alloc_huge_page_node(page_hstate(compound_head(page)),
 					node);
-	else if (PageTransHuge(page)) {
+	} else if (PageTransHuge(page)) {
 		struct page *thp;
 
 		thp = alloc_pages_node(node,
@@ -1217,6 +1220,9 @@ static struct page *new_page(struct page *page, unsigned long start)
 	}
 
 	if (PageHuge(page)) {
+		if (page_belong_to_dynamic_hugetlb(page))
+			return NULL;
+
 		return alloc_huge_page_vma(page_hstate(compound_head(page)),
 				vma, address);
 	} else if (PageTransHuge(page)) {
diff --git a/mm/migrate.c b/mm/migrate.c
index 56a2033d443c..dbe174f86cfd 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1907,6 +1907,9 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 	if (!migrate_balanced_pgdat(pgdat, 1UL << compound_order(page)))
 		return 0;
 
+	if (page_belong_to_dynamic_hugetlb(page))
+		return 0;
+
 	if (isolate_lru_page(page))
 		return 0;
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 49b50cef0101..05f90155d5be 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -220,7 +220,8 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 	     pfn += pageblock_nr_pages) {
 		page = __first_valid_page(pfn, pageblock_nr_pages);
 		if (page) {
-			if (set_migratetype_isolate(page, migratetype, flags)) {
+			if (page_belong_to_dynamic_hugetlb(page) ||
+			    set_migratetype_isolate(page, migratetype, flags)) {
 				undo_pfn = pfn;
 				goto undo;
 			}