Patch 1 fixes a bug found in an earlier version. Patches 2, 3 and 4 fix potential vulnerabilities discovered during code analysis. Patch 5 is an optimization. Patch 7 is a feature limitation. Patches 6, 8 and 9 are three bugfixes discovered during code analysis.
Liu Shixin (9):
  mm/dynamic_hugetlb: check free_pages_prepares when split pages
  mm/dynamic_hugetlb: improve the initialization of huge pages
  mm/dynamic_hugetlb: use pfn to traverse subpages
  mm/dynamic_hugetlb: check page using check_new_page
  mm/dynamic_hugetlb: use mem_cgroup_force_empty to reclaim pages
  mm/dynamic_hugetlb: hold the lock until pages back to hugetlb
  mm/dynamic_hugetlb: only support to merge 2M dynamicly
  mm/dynamic_hugetlb: set/clear HPageFreed
  mm/dynamic_hugetlb: initialize subpages before merging
 include/linux/memcontrol.h |   2 +
 mm/dynamic_hugetlb.c       | 156 ++++++++++++++++++++++++-------------
 mm/internal.h              |   1 +
 mm/memcontrol.c            |   2 +-
 mm/page_alloc.c            |   2 +-
 5 files changed, 109 insertions(+), 54 deletions(-)
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4Y0XO
--------------------------------
A hugepage may still have the PG_uptodate flag set when it is freed. When splitting the hugepage into pages, the flag is not cleared, so a page can later be allocated with PG_uptodate set and the user may read incorrect data.
In order to solve this problem and similar ones, call free_pages_prepare() to clear each page when splitting pages into the small pool.
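For illustration only (not part of the diff below), the relevant effect of free_pages_prepare(page, 0, false) on page state is roughly:

    /* what free_pages_prepare() effectively does to each subpage */
    page->mapping = NULL;
    page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;   /* this mask includes PG_uptodate */

whereas the old hand-rolled mask in __hpool_split_huge_page() only cleared a handful of flags and missed PG_uptodate.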
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/dynamic_hugetlb.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c index f20e654cc856..92b7ba6f37eb 100644 --- a/mm/dynamic_hugetlb.c +++ b/mm/dynamic_hugetlb.c @@ -74,14 +74,16 @@ static void __hpool_split_huge_page(struct dhugetlb_pool *hpool, struct page *pa
__ClearPageHead(page); for (i = 0; i < nr_pages; i++) { - page[i].flags &= ~(1 << PG_locked | 1 << PG_error | - 1 << PG_referenced | 1 << PG_dirty | - 1 << PG_active | 1 << PG_private | - 1 << PG_writeback); if (i != 0) { page[i].mapping = NULL; clear_compound_head(&page[i]); } + /* + * If a hugepage is mapped in private mode, the PG_uptodate bit + * will not be cleared when the hugepage freed. Clear the + * hugepage using free_pages_prepare() here. + */ + free_pages_prepare(&page[i], 0, false); add_new_page_to_pool(hpool, &page[i], HUGE_PAGES_POOL_4K); } }
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4Y0XO
--------------------------------
Referring to the alloc_buddy_huge_page() function, replace prep_compound_page() with prep_new_page(), which is more appropriate because it is the opposite of free_pages_prepare(). Also initialize page->mapping for huge pages, as it is initialized in free_huge_page() too.
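For the 2M case, a rough expansion of the new call, based on the current prep_new_page() implementation (sketch, not part of this patch):

    /* prep_new_page(page, PMD_SHIFT - PAGE_SHIFT, __GFP_COMP, 0) roughly means: */
    post_alloc_hook(page, PMD_SHIFT - PAGE_SHIFT, __GFP_COMP);  /* check and reset per-page state */
    prep_compound_page(page, PMD_SHIFT - PAGE_SHIFT);           /* still done, since __GFP_COMP is set */

So the old prep_compound_page() call is preserved, but only after the page state has been verified and reset, mirroring how alloc_buddy_huge_page() receives pages from the buddy allocator.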
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/dynamic_hugetlb.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c index 92b7ba6f37eb..9c2110d3c251 100644 --- a/mm/dynamic_hugetlb.c +++ b/mm/dynamic_hugetlb.c @@ -30,15 +30,22 @@ static void add_new_page_to_pool(struct dhugetlb_pool *hpool, struct page *page, switch (hpages_pool_idx) { case HUGE_PAGES_POOL_1G: prep_compound_gigantic_page(page, PUD_SHIFT - PAGE_SHIFT); + set_page_count(page, 0); set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); + hugetlb_set_page_subpool(page, NULL); set_hugetlb_cgroup(page, NULL); + set_hugetlb_cgroup_rsvd(page, NULL); break; case HUGE_PAGES_POOL_2M: - prep_compound_page(page, PMD_SHIFT - PAGE_SHIFT); + prep_new_page(page, PMD_SHIFT - PAGE_SHIFT, __GFP_COMP, 0); + set_page_count(page, 0); set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); + hugetlb_set_page_subpool(page, NULL); set_hugetlb_cgroup(page, NULL); + set_hugetlb_cgroup_rsvd(page, NULL); break; } + page->mapping = NULL; list_add_tail(&page->lru, &hpages_pool->hugepage_freelists); hpages_pool->free_normal_pages++; } @@ -74,10 +81,8 @@ static void __hpool_split_huge_page(struct dhugetlb_pool *hpool, struct page *pa
__ClearPageHead(page); for (i = 0; i < nr_pages; i++) { - if (i != 0) { - page[i].mapping = NULL; + if (i != 0) clear_compound_head(&page[i]); - } /* * If a hugepage is mapped in private mode, the PG_uptodate bit * will not be cleared when the hugepage freed. Clear the
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4Y0XO
--------------------------------
For 1G huge pages, the struct pages of the subpages may be discontiguous, but their pfns must be contiguous, so it is better to traverse the subpages by pfn rather than by struct page.
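Illustration (assuming a SPARSEMEM layout without vmemmap, where struct pages are only guaranteed to be virtually contiguous within one section):

    p = &page[i];               /* may walk past a section boundary and hit an unrelated struct page */
    p = pfn_to_page(pfn + i);   /* always resolves to the correct struct page for that subpage */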
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/dynamic_hugetlb.c | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c index 9c2110d3c251..f5f64c4f0acf 100644 --- a/mm/dynamic_hugetlb.c +++ b/mm/dynamic_hugetlb.c @@ -54,20 +54,21 @@ static void __hpool_split_gigantic_page(struct dhugetlb_pool *hpool, struct page { int nr_pages = 1 << (PUD_SHIFT - PAGE_SHIFT); int nr_blocks = 1 << (PMD_SHIFT - PAGE_SHIFT); - int i; + int i, pfn = page_to_pfn(page);
lockdep_assert_held(&hpool->lock); atomic_set(compound_mapcount_ptr(page), 0); atomic_set(compound_pincount_ptr(page), 0);
for (i = 1; i < nr_pages; i++) - clear_compound_head(&page[i]); + clear_compound_head(pfn_to_page(pfn + i)); set_compound_order(page, 0); page[1].compound_nr = 0; __ClearPageHead(page);
for (i = 0; i < nr_pages; i+= nr_blocks) - add_new_page_to_pool(hpool, &page[i], HUGE_PAGES_POOL_2M); + add_new_page_to_pool(hpool, pfn_to_page(pfn + i), + HUGE_PAGES_POOL_2M); }
static void __hpool_split_huge_page(struct dhugetlb_pool *hpool, struct page *page) @@ -208,7 +209,7 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo struct huge_pages_pool *hpages_pool, *src_hpages_pool; struct split_hugepage *split_page, *split_next; unsigned long nr_pages, block_size; - struct page *page, *next; + struct page *page, *next, *p; bool need_migrate = false; int i, try; LIST_HEAD(wait_page_list); @@ -242,7 +243,8 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo clear_percpu_pools(hpool); page = pfn_to_page(split_page->start_pfn); for (i = 0; i < nr_pages; i+= block_size) { - if (PagePool(&page[i])) { + p = pfn_to_page(split_page->start_pfn + i); + if (PagePool(p)) { if (!need_migrate) goto next; else @@ -252,11 +254,12 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
list_del(&split_page->head_pages); hpages_pool->split_normal_pages--; - kfree(split_page); for (i = 0; i < nr_pages; i+= block_size) { - list_del(&page[i].lru); + p = pfn_to_page(split_page->start_pfn + i); + list_del(&p->lru); src_hpages_pool->free_normal_pages--; } + kfree(split_page); add_new_page_to_pool(hpool, page, hpages_pool_idx); trace_dynamic_hugetlb_split_merge(hpool, page, DHUGETLB_MERGE, page_size(page)); return 0; @@ -269,8 +272,9 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo /* Isolate free page first. */ INIT_LIST_HEAD(&wait_page_list); for (i = 0; i < nr_pages; i+= block_size) { - if (!PagePool(&page[i])) { - list_move(&page[i].lru, &wait_page_list); + p = pfn_to_page(split_page->start_pfn + i); + if (!PagePool(p)) { + list_move(&p->lru, &wait_page_list); src_hpages_pool->free_normal_pages--; } } @@ -278,12 +282,13 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo /* Unlock and try migration. */ spin_unlock(&hpool->lock); for (i = 0; i < nr_pages; i+= block_size) { - if (PagePool(&page[i])) + p = pfn_to_page(split_page->start_pfn + i); + if (PagePool(p)) /* * TODO: fatal migration failures should bail * out */ - do_migrate_range(page_to_pfn(&page[i]), page_to_pfn(&page[i]) + block_size); + do_migrate_range(page_to_pfn(p), page_to_pfn(p) + block_size); } spin_lock(&hpool->lock);
@@ -756,6 +761,9 @@ static int free_hugepage_to_hugetlb(struct dhugetlb_pool *hpool) unsigned int nr_pages; int nid, ret = 0;
+ if (!h) + return ret; + spin_lock(&hpool->lock); spin_lock(&hugetlb_lock); list_for_each_entry_safe(page, next, &hpages_pool->hugepage_freelists, lru) { @@ -1028,7 +1036,7 @@ int hugetlb_pool_info_show(struct seq_file *m, void *v) return 0;
if (!hpool) { - seq_printf(m, "Curent hierarchial have not memory pool.\n"); + seq_printf(m, "Current hierarchial have not memory pool.\n"); return 0; }
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4Y0XO
--------------------------------
Use check_new_page() to check the page before it is allocated from the dynamic hugetlb pool.
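check_new_page() is the same sanity check the buddy allocator runs before handing out a page. The per-cpu pool allocation path now loops until a page passes it, roughly as follows (take_page_from_percpu_pool() is a made-up helper standing in for the refill/list_del/accounting code in the hunk below):

    do {
        page = take_page_from_percpu_pool(percpu_pool);
    } while (page && check_new_page(page));     /* nonzero return means a bad page, take another */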
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/dynamic_hugetlb.c | 29 ++++++++++++++++-------------
 mm/internal.h        |  1 +
 mm/page_alloc.c      |  2 +-
 3 files changed, 18 insertions(+), 14 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c index f5f64c4f0acf..90e2a52390b2 100644 --- a/mm/dynamic_hugetlb.c +++ b/mm/dynamic_hugetlb.c @@ -476,20 +476,23 @@ static struct page *__alloc_page_from_dhugetlb_pool(void) */ spin_lock_irqsave(&percpu_pool->lock, flags);
- if (percpu_pool->free_pages == 0) { - int ret; - - spin_lock(&hpool->lock); - ret = add_pages_to_percpu_pool(hpool, percpu_pool, - PERCPU_POOL_PAGE_BATCH); - spin_unlock(&hpool->lock); - if (ret) - goto unlock; - } + do { + page = NULL; + if (percpu_pool->free_pages == 0) { + int ret; + + spin_lock(&hpool->lock); + ret = add_pages_to_percpu_pool(hpool, percpu_pool, + PERCPU_POOL_PAGE_BATCH); + spin_unlock(&hpool->lock); + if (ret) + goto unlock; + }
- page = list_entry(percpu_pool->head_page.next, struct page, lru); - list_del(&page->lru); - percpu_pool->free_pages--; + page = list_entry(percpu_pool->head_page.next, struct page, lru); + list_del(&page->lru); + percpu_pool->free_pages--; + } while (page && check_new_page(page)); percpu_pool->used_pages++; SetPagePool(page);
diff --git a/mm/internal.h b/mm/internal.h index 31517354f3c7..917b86b2870c 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -195,6 +195,7 @@ extern void memblock_free_pages(struct page *page, unsigned long pfn, unsigned int order); extern void __free_pages_core(struct page *page, unsigned int order); extern void prep_compound_page(struct page *page, unsigned int order); +extern int check_new_page(struct page *page); extern void post_alloc_hook(struct page *page, unsigned int order, gfp_t gfp_flags); extern void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags, diff --git a/mm/page_alloc.c b/mm/page_alloc.c index a72df34fa210..a27aed0b9987 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2204,7 +2204,7 @@ static void check_new_page_bad(struct page *page) /* * This page is about to be returned from the page allocator */ -static inline int check_new_page(struct page *page) +inline int check_new_page(struct page *page) { if (likely(page_expected_state(page, PAGE_FLAGS_CHECK_AT_PREP|__PG_HWPOISON)))
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4Y0XO
--------------------------------
Even after all processes in the memory cgroup have exited, some memory such as file cache may still be charged to it. Use mem_cgroup_force_empty() to reclaim the pages charged to the memory cgroup before merging all pages.
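The resulting ordering in hugetlb_pool_destroy() is (sketch matching the hunk below):

    mem_cgroup_force_empty(hpool->attach_memcg);    /* drop file cache etc. still charged to the memcg */
    ret = hugetlb_pool_merge_all_pages(hpool);      /* now every page is free and can be merged back */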
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/memcontrol.h | 2 ++
 mm/dynamic_hugetlb.c       | 6 ++++++
 mm/memcontrol.c            | 2 +-
 3 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 7cc7cfe55d9a..0e55013c570d 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1258,6 +1258,8 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg) return ret; }
+int mem_cgroup_force_empty(struct mem_cgroup *memcg); + #else /* CONFIG_MEMCG */
#define MEM_CGROUP_ID_SHIFT 0 diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c index 90e2a52390b2..8366b54dfcfe 100644 --- a/mm/dynamic_hugetlb.c +++ b/mm/dynamic_hugetlb.c @@ -849,6 +849,12 @@ int hugetlb_pool_destroy(struct cgroup *cgrp) if (!hpool || hpool->attach_memcg != memcg) return 0;
+ /* + * Even if no process exists in the memory cgroup, some pages may still + * be occupied. Release these pages before merging them. + */ + mem_cgroup_force_empty(hpool->attach_memcg); + ret = hugetlb_pool_merge_all_pages(hpool); if (ret) return -ENOMEM; diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 2804fe9d3dae..fad3d4dd88ec 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3407,7 +3407,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, * * Caller is responsible for holding css reference for memcg. */ -static int mem_cgroup_force_empty(struct mem_cgroup *memcg) +int mem_cgroup_force_empty(struct mem_cgroup *memcg) { int nr_retries = MAX_RECLAIM_RETRIES;
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4Y0XO
--------------------------------
Do not release the lock between merging all pages and returning them to hugetlb, otherwise another process may allocate pages from the pool in that window and those pages can then no longer be put back to hugetlb.
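The window being closed looks roughly like this (sketch):

    CPU0: hugetlb_pool_merge_all_pages()        takes and drops hpool->lock
    CPU1: __alloc_page_from_dhugetlb_pool()     grabs a just-merged page
    CPU0: free_hugepage_to_hugetlb()            that page can no longer be returned to hugetlb

Taking hpool->lock in hugetlb_pool_destroy() and holding it across both calls removes the window.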
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/dynamic_hugetlb.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c index 8366b54dfcfe..d07877559bac 100644 --- a/mm/dynamic_hugetlb.c +++ b/mm/dynamic_hugetlb.c @@ -305,7 +305,8 @@ static int hugetlb_pool_merge_all_pages(struct dhugetlb_pool *hpool) { int ret = 0;
- spin_lock(&hpool->lock); + lockdep_assert_held(&hpool->lock); + while (hpool->hpages_pool[HUGE_PAGES_POOL_2M].split_normal_pages) { ret = hpool_merge_page(hpool, HUGE_PAGES_POOL_2M, true); if (ret) { @@ -329,7 +330,6 @@ static int hugetlb_pool_merge_all_pages(struct dhugetlb_pool *hpool) goto out; } out: - spin_unlock(&hpool->lock); return ret; }
@@ -767,7 +767,8 @@ static int free_hugepage_to_hugetlb(struct dhugetlb_pool *hpool) if (!h) return ret;
- spin_lock(&hpool->lock); + lockdep_assert_held(&hpool->lock); + spin_lock(&hugetlb_lock); list_for_each_entry_safe(page, next, &hpages_pool->hugepage_freelists, lru) { nr_pages = 1 << huge_page_order(h); @@ -791,7 +792,6 @@ static int free_hugepage_to_hugetlb(struct dhugetlb_pool *hpool) break; } spin_unlock(&hugetlb_lock); - spin_unlock(&hpool->lock); return ret; }
@@ -855,12 +855,15 @@ int hugetlb_pool_destroy(struct cgroup *cgrp) */ mem_cgroup_force_empty(hpool->attach_memcg);
+ spin_lock(&hpool->lock); ret = hugetlb_pool_merge_all_pages(hpool); - if (ret) + if (ret) { + spin_unlock(&hpool->lock); return -ENOMEM; + } ret = free_hugepage_to_hugetlb(hpool); memcg->hpool = NULL; - + spin_unlock(&hpool->lock); put_hpool(hpool); return ret; }
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4Y0XO
--------------------------------
We do not support merging pages into 1G hugepages dynamically, as this can result in a significant performance loss. We suggest configuring the number of hugepages immediately after creating a dynamic hugetlb pool, rather than modifying it dynamically while processes are running.
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/dynamic_hugetlb.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c index d07877559bac..bec6ff560e37 100644 --- a/mm/dynamic_hugetlb.c +++ b/mm/dynamic_hugetlb.c @@ -938,17 +938,20 @@ static ssize_t update_reserved_pages(struct mem_cgroup *memcg, char *buf, int hp if (hpool_split_page(hpool, hpages_pool_idx - 1)) break; } - /* - * First try to merge pages without migration, If this can not meet - * the requirements, then try to merge pages with migration. - */ - while (delta > hpages_pool->free_normal_pages) { - if (hpool_merge_page(hpool, hpages_pool_idx, false)) - break; - } - while (delta > hpages_pool->free_normal_pages) { - if (hpool_merge_page(hpool, hpages_pool_idx, true)) - break; + /* Currently, only merging 2M hugepages is supported */ + if (hpages_pool_idx == HUGE_PAGES_POOL_2M) { + /* + * First try to merge pages without migration, If this can not meet + * the requirements, then try to merge pages with migration. + */ + while (delta > hpages_pool->free_normal_pages) { + if (hpool_merge_page(hpool, hpages_pool_idx, false)) + break; + } + while (delta > hpages_pool->free_normal_pages) { + if (hpool_merge_page(hpool, hpages_pool_idx, true)) + break; + } } delta = min(nr_pages - hpages_pool->nr_huge_pages, hpages_pool->free_normal_pages); hpages_pool->nr_huge_pages += delta;
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4Y0XO
--------------------------------
Patch ("mm: hugetlb: fix a race between freeing and dissolving the page") add PageHugeFreed to check whether a page is freed in hugetlb. Patch ("hugetlb: convert PageHugeFreed to HPageFreed flag") convert it to HPageFreed. We need to clear it when alloc hugepage from hugetlb to and set it when free hugepage back to hugetlb.
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/dynamic_hugetlb.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c index bec6ff560e37..7dc1d7643a35 100644 --- a/mm/dynamic_hugetlb.c +++ b/mm/dynamic_hugetlb.c @@ -739,6 +739,7 @@ static int alloc_hugepage_from_hugetlb(struct dhugetlb_pool *hpool, if (ret) continue;
+ ClearHPageFreed(page); list_move_tail(&page->lru, &hpages_pool->hugepage_freelists); h->free_huge_pages--; h->free_huge_pages_node[nid]--; @@ -780,6 +781,7 @@ static int free_hugepage_to_hugetlb(struct dhugetlb_pool *hpool) set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
nid = page_to_nid(page); + SetHPageFreed(page); list_move(&page->lru, &h->hugepage_freelists[nid]); hpool->total_huge_pages--; hpages_pool->free_normal_pages--;
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4Y0XO
--------------------------------
Patch ("hugetlb: address ref count racing in prep_compound_gigantic_page") add a check of ref count in prep_compound_gigantic_page. We will call this function in dynamic hugetlb feature too, so we should initialize subpages before calling prep_compound_gigantic_page to satisfy the change. Further, the input of prep_compound_gigantic_page should be a group of pages rather than compound page, so clear the properties related to compound page.
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/dynamic_hugetlb.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c index 7dc1d7643a35..eb9b528b73de 100644 --- a/mm/dynamic_hugetlb.c +++ b/mm/dynamic_hugetlb.c @@ -210,7 +210,7 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo struct split_hugepage *split_page, *split_next; unsigned long nr_pages, block_size; struct page *page, *next, *p; - bool need_migrate = false; + bool need_migrate = false, need_initial = false; int i, try; LIST_HEAD(wait_page_list);
@@ -221,8 +221,9 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
switch (hpages_pool_idx) { case HUGE_PAGES_POOL_1G: - nr_pages = 1 << (PUD_SHIFT - PMD_SHIFT); + nr_pages = 1 << (PUD_SHIFT - PAGE_SHIFT); block_size = 1 << (PMD_SHIFT - PAGE_SHIFT); + need_initial = true; break; case HUGE_PAGES_POOL_2M: nr_pages = 1 << (PMD_SHIFT - PAGE_SHIFT); @@ -258,6 +259,25 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo p = pfn_to_page(split_page->start_pfn + i); list_del(&p->lru); src_hpages_pool->free_normal_pages--; + /* + * The input of prep_compound_gigantic_page should be a + * group of pages whose ref count is 1 rather than + * compound_page. + * Initialize the pages before merge them to 1G. + */ + if (need_initial) { + int j; + + set_compound_page_dtor(p, NULL_COMPOUND_DTOR); + atomic_set(compound_mapcount_ptr(p), 0); + set_compound_order(p, 0); + __ClearPageHead(p); + set_page_count(p, 1); + for (j = 1; j < block_size; j++) { + clear_compound_head(&p[j]); + set_page_count(&p[j], 1); + } + } } kfree(split_page); add_new_page_to_pool(hpool, page, hpages_pool_idx);