hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I9K8D1
CVE: NA
--------------------------------
hpool_merge_page() traverses hugepage_splitlists and tries to merge pages. The number of pages may be large, and the function may also try to migrate pages that are still in use, so a single pass can take a long time. To avoid softlockup, add cond_resched() before the merge loop and before each migration.
Fixes: cdbeee51d044 ("mm/dynamic_hugetlb: add migration function")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 mm/dynamic_hugetlb.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c
index dc4cf48a332e..fd8a994ecf63 100644
--- a/mm/dynamic_hugetlb.c
+++ b/mm/dynamic_hugetlb.c
@@ -272,18 +272,21 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
 merge:
 	can_merge = true;
 
+	spin_unlock(&hpool->lock);
+	cond_resched();
 	/*
 	 * If we are merging 4K page to 2M page, we need to get
 	 * lock of percpu pool sequentially and clear percpu pool.
 	 */
 	if (hpages_pool_idx == HUGE_PAGES_POOL_2M) {
-		spin_unlock(&hpool->lock);
 		dhugetlb_lock_all(hpool);
 		for (i = 0; i < NR_PERCPU_POOL; i++) {
 			percpu_pool = &hpool->percpu_pool[i];
 			reclaim_pages_from_percpu_pool(hpool, percpu_pool,
 						percpu_pool->free_pages);
 		}
+	} else {
+		spin_lock(&hpool->lock);
 	}
 
 	page = pfn_to_page(split_page->start_pfn);
@@ -360,12 +363,14 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
 
 	for (i = 0; i < nr_pages; i+= block_size) {
 		p = pfn_to_page(split_page->start_pfn + i);
-		if (PagePool(p))
+		if (PagePool(p)) {
+			cond_resched();
 			/*
 			 * TODO: fatal migration failures should bail
 			 * out
 			 */
 			do_migrate_range(page_to_pfn(p),
					 page_to_pfn(p) + block_size);
+		}
 	}
 	spin_lock(&hpool->lock);