From: Andrew Morton <akpm@linux-foundation.org>

mainline inclusion
from mainline-v5.11-rc1
commit 34fe653716b0d340bc26dd4823d2dbe00c57f849
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------------------
On a machine with 3 TB of memory (more than 2 TB), using vmalloc to allocate more than 2 TB overflows array_size below.
array_size is an unsigned int, so it can only describe allocations smaller than 2 TB. If you pass 2*1024*1024*1024*1024 = 2^41 bytes to vmalloc, nr_pages is 2^29 and array_size becomes 2^29 * 8 = 2^32, which cannot be stored in a 32-bit integer.
The fix is to change the type of array_size to unsigned long.
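For illustration only, a minimal user-space sketch of the truncation (not part of the patch; the 4 KiB page size and 8-byte pointer size are assumptions for a 64-bit build):

#include <stdio.h>

int main(void)
{
	/*
	 * 2 TB of 4 KiB pages is 2^29 pages; each struct page pointer
	 * is 8 bytes, so the pointer array needs 2^32 bytes.  The
	 * product is computed in 64 bits, but assigning it to a 32-bit
	 * array_size truncates it to 0.
	 */
	unsigned long nr_pages = 1UL << 29;
	unsigned int bad = nr_pages * sizeof(void *);	/* truncated to 0 */
	unsigned long good = nr_pages * sizeof(void *);	/* 4294967296 */

	printf("unsigned int:  %u\n", bad);
	printf("unsigned long: %lu\n", good);
	return 0;
}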
[akpm@linux-foundation.org: rework for current mainline]
Link: https://bugzilla.kernel.org/show_bug.cgi?id=210023
Reported-by: hsinhuiwu@gmail.com
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
conflict: mm/vmalloc.c
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/vmalloc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 011a84ebec04d..7750fd379633e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2565,7 +2565,9 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	unsigned long addr = (unsigned long)area->addr;
 	unsigned long size = get_vm_area_size(area);
 	unsigned int page_order = page_shift - PAGE_SHIFT;
-	unsigned int nr_pages, array_size, i;
+	unsigned int nr_pages;
+	unsigned long array_size;
+	unsigned int i;
 	const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
 	const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN;
 	const gfp_t highmem_mask = (gfp_mask & (GFP_DMA | GFP_DMA32)) ?
@@ -2573,7 +2575,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 			__GFP_HIGHMEM;
 
 	nr_pages = size >> PAGE_SHIFT;
-	array_size = (nr_pages * sizeof(struct page *));
+	array_size = (unsigned long)nr_pages * sizeof(struct page *);
 
 	/* Please note that the recursion is strictly bounded. */
 	if (array_size > PAGE_SIZE) {
From: Liu Xiang <liu.xiang@zlingsmart.com>

mainline inclusion
from mainline-v5.11-rc1
commit 0a4f3d1bb91cac4efdd780373638b6a1a4c24c51
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------------------
On a 64-bit machine, the delta variable in hugetlb_acct_memory() may be larger than 0xffffffff, but gather_surplus_pages() currently uses only its low 32 bits. Fix the type of the delta parameter and of the related local variables in gather_surplus_pages().
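As a minimal user-space sketch of this bug class (hypothetical names, not the hugetlb code): a 64-bit value loses its upper bits when the callee takes an int.

#include <stdio.h>

/* Hypothetical stand-in for gather_surplus_pages() before the fix:
 * the int parameter keeps only the low 32 bits of the caller's value. */
static void gather(int delta)
{
	printf("callee sees:   %d\n", delta);	/* 1 */
}

int main(void)
{
	long delta = 0x100000001L;		/* larger than 0xffffffff */

	printf("caller passes: %ld\n", delta);	/* 4294967297 */
	gather(delta);
	return 0;
}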
Link: https://lkml.kernel.org/r/1605793733-3573-1-git-send-email-liu.xiang@zlingsm...
Reported-by: Ma Chenggong <ma.chenggong@zlingsmart.com>
Signed-off-by: Liu Xiang <liu.xiang@zlingsmart.com>
Signed-off-by: Pan Jiagen <pan.jiagen@zlingsmart.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Liu Xiang <liuxiang_1999@126.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
conflict: mm/hugetlb.c
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/hugetlb.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8ab6a9903ec9a..bac44fb7593df 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1820,12 +1820,13 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
  * Increase the hugetlb pool such that it can accommodate a reservation
  * of size 'delta'.
  */
-static int gather_surplus_pages(struct hstate *h, int delta)
+static int gather_surplus_pages(struct hstate *h, long delta)
 {
 	struct list_head surplus_list;
 	struct page *page, *tmp;
-	int ret, i;
-	int needed, allocated;
+	int ret;
+	long i;
+	long needed, allocated;
 	bool alloc_ok = true;
 
 	needed = (h->resv_huge_pages + delta) - h->free_huge_pages;
From: Muchun Song <songmuchun@bytedance.com>

mainline inclusion
from mainline-v5.11-rc1
commit 7ad69832f37e3cea8557db6df7c793905f1135e8
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------------------
When we free a page whose order is very close to MAX_ORDER and greater than pageblock_order, it wastes some CPU cycles to increase max_order to MAX_ORDER one by one and to check the pageblock migratetype of that page repeatedly, especially when MAX_ORDER is much larger than pageblock_order.
We also should not be checking migratetype of buddy when "order == MAX_ORDER - 1" as the buddy pfn may be invalid, so adjust the condition. With the new check, we don't need the max_order check anymore, so we replace it.
Also adjust max_order initialization so that it's lower by one than previously, which makes the code hopefully more clear.
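As a rough illustration (user-space sketch; the pfn and zone boundary are made-up values), a page's buddy is its pfn with bit 'order' flipped, so at order MAX_ORDER - 1 the buddy block is a full max-order span away and may lie outside the zone:

#include <stdio.h>

#define MAX_ORDER 11	/* typical value, assumed here */

/* Same idea as the kernel's __find_buddy_pfn(): flip bit 'order'. */
static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
	return pfn ^ (1UL << order);
}

int main(void)
{
	unsigned long zone_end_pfn = 0x1400;	/* hypothetical zone end */
	unsigned long pfn = 0x1000;		/* an order-10 aligned page */
	unsigned long buddy = find_buddy_pfn(pfn, MAX_ORDER - 1);

	printf("buddy pfn %#lx %s the zone\n", buddy,
	       buddy >= zone_end_pfn ? "falls outside" : "is inside");
	return 0;
}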
Link: https://lkml.kernel.org/r/20201204155109.55451-1-songmuchun@bytedance.com
Fixes: d9dddbf55667 ("mm/page_alloc: prevent merging between isolated and other pageblocks")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
conflict: mm/page_alloc.c
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d2012e07e5295..856353b5de599 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -809,7 +809,7 @@ static inline void __free_one_page(struct page *page,
 	struct page *buddy;
 	unsigned int max_order;
 
-	max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
+	max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
 
 	VM_BUG_ON(!zone_is_initialized(zone));
 	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
@@ -822,7 +822,7 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 
 continue_merging:
-	while (order < max_order - 1) {
+	while (order < max_order) {
 		buddy_pfn = __find_buddy_pfn(pfn, order);
 		buddy = page + (buddy_pfn - pfn);
 
@@ -846,7 +846,7 @@ static inline void __free_one_page(struct page *page,
 		pfn = combined_pfn;
 		order++;
 	}
-	if (max_order < MAX_ORDER) {
+	if (order < MAX_ORDER - 1) {
 		/* If we are here, it means order is >= pageblock_order.
 		 * We want to prevent merge between freepages on isolate
 		 * pageblock and normal pageblock. Without this, pageblock
@@ -867,7 +867,7 @@ static inline void __free_one_page(struct page *page,
 						is_migrate_isolate(buddy_mt)))
 				goto done_merging;
 		}
-		max_order++;
+		max_order = order + 1;
 		goto continue_merging;
 	}
 
From: YueHaibing <yuehaibing@huawei.com>

mainline inclusion
from mainline-v5.11-rc1
commit 42a44704367cd18d069c9855cb84090ff90ecd86
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------------------
Fix smatch warning:
mm/zswap.c:425 zswap_cpu_comp_prepare() warn: passing zero to 'PTR_ERR'
crypto_alloc_comp() never returns NULL, so use IS_ERR instead of IS_ERR_OR_NULL to fix this.
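For reference, a simplified user-space sketch of the ERR_PTR convention (the macros below are stand-ins for the kernel's err.h helpers, not the real definitions): when a function can never return NULL, IS_ERR() is the right check, and a NULL reaching PTR_ERR() would be reported as error 0, which is what smatch flags.

#include <stdio.h>

/* Simplified stand-ins for the kernel's ERR_PTR()/IS_ERR() helpers. */
#define MAX_ERRNO	4095
#define ERR_PTR(err)	((void *)(long)(err))
#define PTR_ERR(ptr)	((long)(ptr))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

int main(void)
{
	void *tfm = ERR_PTR(-12);	/* e.g. -ENOMEM from a failed alloc */

	if (IS_ERR(tfm))		/* sufficient when NULL is impossible */
		printf("alloc failed: %ld\n", PTR_ERR(tfm));
	return 0;
}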
Link: https://lkml.kernel.org/r/20201031055615.28080-1-yuehaibing@huawei.com
Fixes: f1c54846ee45 ("zswap: dynamic pool creation")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Seth Jennings <sjenning@redhat.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Vitaly Wool <vitaly.wool@konsulko.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/zswap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index cd91fd9d96b81..6c686888dbd05 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -413,7 +413,7 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 		return 0;
 
 	tfm = crypto_alloc_comp(pool->tfm_name, 0, 0);
-	if (IS_ERR_OR_NULL(tfm)) {
+	if (IS_ERR(tfm)) {
 		pr_err("could not alloc crypto comp %s : %ld\n",
 		       pool->tfm_name, PTR_ERR(tfm));
 		return -ENOMEM;
From: Muchun Song <songmuchun@bytedance.com>

mainline inclusion
from mainline-v5.11-rc1
commit 2484be0f88dc6c9670362d51f6a04f2da0626b50
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------------------
A max order page has no buddy page and never merges to another order. So isolating and then freeing it is pointless.
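As a quick worked example (MAX_ORDER = 11 and 4 KiB pages are assumed typical values, not taken from this patch): merging two order-N buddies yields one order-(N+1) block, so an order-(MAX_ORDER - 1) page would need an order-MAX_ORDER result, which the freelists never hold.

#include <stdio.h>

#define MAX_ORDER	11	/* assumed typical value */
#define PAGE_SIZE	4096UL	/* assumed 4 KiB pages */

int main(void)
{
	unsigned int order = MAX_ORDER - 1;

	/* The buddy freelists manage orders 0 .. MAX_ORDER - 1 only. */
	printf("largest managed block: order %u = %lu KiB\n",
	       order, (PAGE_SIZE << order) >> 10);
	printf("a further merge would need order %u (never managed)\n",
	       order + 1);
	return 0;
}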
Link: https://lkml.kernel.org/r/20201202122114.75316-1-songmuchun@bytedance.com
Fixes: 3c605096d315 ("mm/page_alloc: restrict max order of merging on isolated pageblock")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
conflict: mm/page_isolation.c
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/page_isolation.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 019280712e1b8..cdc33cd2203d1 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -111,7 +111,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 */
 	if (PageBuddy(page)) {
 		order = page_order(page);
-		if (order >= pageblock_order) {
+		if (order >= pageblock_order && order < MAX_ORDER - 1) {
 			pfn = page_to_pfn(page);
 			buddy_pfn = __find_buddy_pfn(pfn, order);
 			buddy = page + (buddy_pfn - pfn);