From: David Hildenbrand <david@redhat.com>
mainline inclusion
from mainline-v6.10-rc1
commit 46d62de7ad1286854e0c2944ad26a1c1b1a5f191
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IAE0PK
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
Let's add a fast-path for small folios to all relevant rmap functions. Note that only RMAP_LEVEL_PTE applies.
This is a preparation for tracking the mapcount of large folios in a single value.
Link: https://lkml.kernel.org/r/20240409192301.907377-4-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Richard Chang <richardycc@google.com>
Cc: Rich Felker <dalias@libc.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 include/linux/rmap.h | 13 +++++++++++++
 mm/rmap.c            | 26 ++++++++++++++++----------
 2 files changed, 29 insertions(+), 10 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 32b6f1925e58..23d7a6260266 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -319,6 +319,11 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio,
 
 	switch (level) {
 	case RMAP_LEVEL_PTE:
+		if (!folio_test_large(folio)) {
+			atomic_inc(&page->_mapcount);
+			break;
+		}
+
 		do {
 			atomic_inc(&page->_mapcount);
 		} while (page++, --nr_pages > 0);
@@ -402,6 +407,14 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
 			if (PageAnonExclusive(page + i))
 				return -EBUSY;
 		}
+
+		if (!folio_test_large(folio)) {
+			if (PageAnonExclusive(page))
+				ClearPageAnonExclusive(page);
+			atomic_inc(&page->_mapcount);
+			break;
+		}
+
 		do {
 			if (PageAnonExclusive(page))
 				ClearPageAnonExclusive(page);
diff --git a/mm/rmap.c b/mm/rmap.c
index d78bb701bda1..2802d183f331 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1137,15 +1137,18 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 
 	switch (level) {
 	case RMAP_LEVEL_PTE:
+		if (!folio_test_large(folio)) {
+			nr = atomic_inc_and_test(&page->_mapcount);
+			break;
+		}
+
 		do {
 			first = atomic_inc_and_test(&page->_mapcount);
-			if (first && folio_test_large(folio)) {
+			if (first) {
 				first = atomic_inc_return_relaxed(mapped);
-				first = (first < ENTIRELY_MAPPED);
+				if (first < ENTIRELY_MAPPED)
+					nr++;
 			}
-
-			if (first)
-				nr++;
 		} while (page++, --nr_pages > 0);
 		break;
 	case RMAP_LEVEL_PMD:
@@ -1480,15 +1483,18 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 
 	switch (level) {
 	case RMAP_LEVEL_PTE:
+		if (!folio_test_large(folio)) {
+			nr = atomic_add_negative(-1, &page->_mapcount);
+			break;
+		}
+
 		do {
 			last = atomic_add_negative(-1, &page->_mapcount);
-			if (last && folio_test_large(folio)) {
+			if (last) {
 				last = atomic_dec_return_relaxed(mapped);
-				last = (last < ENTIRELY_MAPPED);
+				if (last < ENTIRELY_MAPPED)
+					nr++;
 			}
-
-			if (last)
-				nr++;
 		} while (page++, --nr_pages > 0);
 
 		partially_mapped = nr && atomic_read(mapped);