From: Tang Yizhou <tangyizhou@huawei.com>
ascend inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA
-------------------------------------------------
Passing an alignment of 1 does not actually cause any bug, because of
how __vmalloc_node_range() is implemented. Changing the alignment to
PMD_SIZE makes the intent explicit and the code more readable.
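
For background, the reason an align argument of 1 is harmless here is
that the huge-mapping path of __vmalloc_node_range() ends up producing
PMD-aligned areas regardless of the requested alignment. A simplified,
illustrative sketch of that internal behaviour (not verbatim code from
this tree):

	/*
	 * Illustrative only: when a huge mapping is requested, the
	 * effective alignment is raised to the huge page size, so an
	 * align argument of 1 is silently corrected.
	 */
	if (vm_flags & VM_HUGE_PAGES)
		align = max(align, PMD_SIZE);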
Reported-by: Xu Qiang <xuqiang36@huawei.com>
Signed-off-by: Tang Yizhou <tangyizhou@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/share_pool.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/share_pool.c b/mm/share_pool.c
index dfce9001b9e42..d7a256de14ce6 100644
--- a/mm/share_pool.c
+++ b/mm/share_pool.c
@@ -2797,7 +2797,7 @@ void *vmalloc_hugepage(unsigned long size)
 	/* PMD hugepage aligned */
 	size = PMD_ALIGN(size);
 
-	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+	return __vmalloc_node_range(size, PMD_SIZE, VMALLOC_START, VMALLOC_END,
 			GFP_KERNEL, PAGE_KERNEL,
 			VM_HUGE_PAGES, NUMA_NO_NODE,
 			__builtin_return_address(0));
@@ -2820,7 +2820,7 @@ void *vmalloc_hugepage_user(unsigned long size)
 	/* PMD hugepage aligned */
 	size = PMD_ALIGN(size);
 
-	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+	return __vmalloc_node_range(size, PMD_SIZE, VMALLOC_START, VMALLOC_END,
 			GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL,
 			VM_HUGE_PAGES | VM_USERMAP, NUMA_NO_NODE,
 			__builtin_return_address(0));
@@ -2866,7 +2866,7 @@ void *buff_vzalloc_hugepage_user(unsigned long size)
 	/* PMD hugepage aligned */
 	size = PMD_ALIGN(size);
 
-	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+	return __vmalloc_node_range(size, PMD_SIZE, VMALLOC_START, VMALLOC_END,
 			GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT, PAGE_KERNEL,
 			VM_HUGE_PAGES | VM_USERMAP, NUMA_NO_NODE,
 			__builtin_return_address(0));
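
Note: external behaviour is unchanged by this patch. A hedged usage
sketch of the guarantee callers rely on; alloc_example() is a
hypothetical caller, assuming the declarations from
<linux/share_pool.h>, <linux/sizes.h> and <linux/kernel.h>:

	static void *alloc_example(void)
	{
		/* vmalloc_hugepage() rounds the size up to PMD_SIZE. */
		void *buf = vmalloc_hugepage(3 * SZ_1M);

		/* The returned mapping is expected to be PMD aligned. */
		WARN_ON(buf && !IS_ALIGNED((unsigned long)buf, PMD_SIZE));
		return buf;
	}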