Liu Shixin (3):
  mm: limit order to 0 when allocated from dynamic pool
  mm: huge_memory: add memory reliable count in __discard_anon_folio_pmd_locked()
  mm/shmem: replace HPAGE_PMD_ORDER with PMD_ORDER in shmem_alloc_folio()
 mm/filemap.c     | 3 +++
 mm/huge_memory.c | 1 +
 mm/memory.c      | 2 ++
 mm/readahead.c   | 3 +++
 mm/shmem.c       | 2 +-
 5 files changed, 10 insertions(+), 1 deletion(-)
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IAO6NS
--------------------------------
Large folios are not supported in the dynamic pool, so we have to limit the order to 0 when allocating from the dynamic pool. The shmem allocation path was already limited in commit d0ef72eca876 ("mm: shmem: add mTHP support for anonymous shmem"); now limit the allocation in the anon and file paths as well.
Fixes: ba572beac8be ("mm: thp: support allocation of anonymous multi-size THP")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 mm/filemap.c   | 3 +++
 mm/memory.c    | 2 ++
 mm/readahead.c | 3 +++
 3 files changed, 8 insertions(+)
diff --git a/mm/filemap.c b/mm/filemap.c
index eb96ddf00ba8d..630a3eec5a881 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -47,6 +47,7 @@
 #include <linux/splice.h>
 #include <linux/huge_mm.h>
 #include <linux/pgtable.h>
+#include <linux/dynamic_pool.h>
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 #include "internal.h"
@@ -1931,6 +1932,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 
 		if (!mapping_large_folio_support(mapping))
 			order = 0;
+		if (order && mm_in_dynamic_pool(current->mm))
+			order = 0;
 		if (order > MAX_PAGECACHE_ORDER)
 			order = MAX_PAGECACHE_ORDER;
 		/* If we're not aligned, allocate a smaller folio */
diff --git a/mm/memory.c b/mm/memory.c
index 2771c10454e17..84ff5ac8fb968 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4395,6 +4395,8 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	 */
 	if (unlikely(userfaultfd_armed(vma)))
 		goto fallback;
+	if (mm_in_dynamic_pool(vma->vm_mm))
+		goto fallback;
 
 	/*
 	 * Get a list of all the (large) orders below PMD_ORDER that are enabled
diff --git a/mm/readahead.c b/mm/readahead.c
index a8911f7c161a7..d0b3de43cf23b 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -128,6 +128,7 @@
 #include <linux/blk-cgroup.h>
 #include <linux/fadvise.h>
 #include <linux/sched/mm.h>
+#include <linux/dynamic_pool.h>
 
 #include "internal.h"
 
@@ -503,6 +504,8 @@ void page_cache_ra_order(struct readahead_control *ractl,
 
	if (!mapping_large_folio_support(mapping) || ra->size < 4)
		goto fallback;
+	if (mm_in_dynamic_pool(current->mm))
+		goto fallback;
 
	limit = min(limit, index + ra->size - 1);
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IAO6NS
--------------------------------
Commit a4130bbe35b2 added a new path, unmap_huge_pmd_locked(), so the memory reliable count needs to be updated on this path as well, i.e., add the count in __discard_anon_folio_pmd_locked().
Fixes: a4130bbe35b2 ("mm/vmscan: avoid split lazyfree THP during shrink_folio_list()")
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 mm/huge_memory.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index af6a5c840e276..a62c4dc2b9da7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2933,6 +2933,7 @@ static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
	folio_remove_rmap_pmd(folio, pmd_page(orig_pmd), vma);
	zap_deposited_table(mm, pmdp);
	add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	add_reliable_folio_counter(folio, mm, -HPAGE_PMD_NR);
	if (vma->vm_flags & VM_LOCKED)
		mlock_drain_local();
	folio_put(folio);
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IAO6NS
--------------------------------
When both HugeTLB and THP are disabled, HPAGE_PMD_ORDER triggers a build error:
./include/linux/huge_mm.h:109:28: note: in expansion of macro ‘BUILD_BUG’
  109 | #define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
Fix it by replacing HPAGE_PMD_ORDER with PMD_ORDER. The two are equal when THP is enabled, and when THP is disabled the value is not actually used in shmem_alloc_folio(), so the change has no effect in that case.
Fixes: c7fcbe104175 ("mm: shmem: Merge shmem_alloc_hugefolio() with shmem_alloc_folio()")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 mm/shmem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index d7a970f7accb7..7b3fa6b9aa289 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1735,7 +1735,7 @@ static struct folio *shmem_alloc_folio(gfp_t gfp, int order,
	struct folio *folio;
 
	shmem_pseudo_vma_init(&pvma, info, index);
-	folio = vma_alloc_folio(gfp, order, &pvma, 0, order == HPAGE_PMD_ORDER);
+	folio = vma_alloc_folio(gfp, order, &pvma, 0, order == PMD_ORDER);
	shmem_pseudo_vma_destroy(&pvma);
 
	return folio;
Feedback: The patch(es) you sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/11485
Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/5...