From: David Hildenbrand <david@redhat.com>
mainline inclusion
from mainline-v6.12-rc1
commit 43c9074e6f093d304d55c43638732c402be75e2b
category: performance
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IB1S01
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
It is not immediately obvious, but we can move the folio->_nr_pages_mapped update out of the loop and reduce the number of atomic ops without affecting the stats.
The important point to realize is that only removing the last PMD mapping will result in _nr_pages_mapped going below ENTIRELY_MAPPED, not the individual atomic_inc_return_relaxed() calls. Concurrent races with removal of the PMD mapping are handled as expected, just as they would be today with a single mapcount update.
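To illustrate the idea outside the kernel, here is a minimal userspace model in C11 (not kernel code; the names nr_pages_mapped, mapcount[], the ENTIRELY_MAPPED value and the two helper functions are stand-ins invented for this sketch): count the "first mapping" transitions of each page locally, then fold them into the shared per-folio counter with a single relaxed atomic add instead of one RMW per page.

/* Simplified userspace model of the batching; stand-in names only. */
#include <stdatomic.h>
#include <stdio.h>

#define NR_PAGES	8
#define ENTIRELY_MAPPED	0x800000	/* stand-in for the kernel constant */

static atomic_int nr_pages_mapped;	/* models folio->_nr_pages_mapped */
static atomic_int mapcount[NR_PAGES];	/* models page->_mapcount + 1 */

/* Old scheme: one shared-counter update per page that becomes mapped. */
static int map_pages_per_page(int start, int nr_pages)
{
	int nr = 0;

	for (int i = start; i < start + nr_pages; i++) {
		if (atomic_fetch_add(&mapcount[i], 1) == 0 &&
		    atomic_fetch_add_explicit(&nr_pages_mapped, 1,
				memory_order_relaxed) + 1 < ENTIRELY_MAPPED)
			nr++;
	}
	return nr;
}

/* New scheme: batch the shared-counter update into a single atomic add. */
static int map_pages_batched(int start, int nr_pages)
{
	int first = 0, nr = 0;

	for (int i = start; i < start + nr_pages; i++)
		first += atomic_fetch_add(&mapcount[i], 1) == 0;

	if (first &&
	    atomic_fetch_add_explicit(&nr_pages_mapped, first,
			memory_order_relaxed) + first < ENTIRELY_MAPPED)
		nr = first;
	return nr;
}

int main(void)
{
	/* Map one half per-page and the other half batched; both report
	 * the same number of newly mapped pages. */
	printf("per-page updates: %d newly mapped\n",
	       map_pages_per_page(0, NR_PAGES / 2));
	printf("batched update:   %d newly mapped\n",
	       map_pages_batched(NR_PAGES / 2, NR_PAGES / 2));
	return 0;
}

Built with -std=c11, both calls print the same count; the batched variant simply issues one relaxed RMW on the shared counter per call rather than one per page, which is the effect the patch below relies on.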
In a simple munmap() microbenchmark [1] on 1 GiB of memory backed by PTE-mapped folios of the same size (mapped by a single process only, so that they get completely unmapped), this change results in a speedup (positive is good) per folio size on an x86-64 Intel machine of roughly (a bit of noise expected):
* 16 KiB: +10%
* 32 KiB: +15%
* 64 KiB: +17%
* 128 KiB: +21%
* 256 KiB: +22%
* 512 KiB: +22%
* 1024 KiB: +23%
* 2048 KiB: +27%
[1] https://gitlab.com/davidhildenbrand/scratchspace/-/blob/main/pte-mapped-foli...
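For reference, a minimal sketch of this kind of munmap() timing loop follows (this is not the benchmark linked at [1]; the anonymous 1 GiB mapping, the MADV_HUGEPAGE hint and the single measurement are simplifying assumptions for illustration, and the folio size actually used depends on the kernel's THP/mTHP configuration):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define SIZE	(1UL << 30)	/* 1 GiB */

int main(void)
{
	struct timespec start, end;
	char *mem;

	mem = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Ask for large folios; what we actually get depends on the
	 * kernel configuration. */
	madvise(mem, SIZE, MADV_HUGEPAGE);
	memset(mem, 1, SIZE);		/* populate the mappings */

	clock_gettime(CLOCK_MONOTONIC, &start);
	munmap(mem, SIZE);		/* the operation being measured */
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("munmap took %ld us\n",
	       (end.tv_sec - start.tv_sec) * 1000000L +
	       (end.tv_nsec - start.tv_nsec) / 1000L);
	return 0;
}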
Link: https://lkml.kernel.org/r/20240807115515.1640951-1-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Conflicts:
	mm/rmap.c
[Context conflicts because commit 05c5323b2a34 ("mm: track mapcount of
large folios in single value") isn't merged.]
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
---
 mm/rmap.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index de385e29916b..dbcdac9bb7a3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1131,7 +1131,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 		int *nr_pmdmapped)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int first, nr = 0;
+	int first = 0, nr = 0;
 
 	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
 
@@ -1143,13 +1143,13 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 		}
 
 		do {
-			first = atomic_inc_and_test(&page->_mapcount);
-			if (first) {
-				first = atomic_inc_return_relaxed(mapped);
-				if (first < ENTIRELY_MAPPED)
-					nr++;
-			}
+			first += atomic_inc_and_test(&page->_mapcount);
 		} while (page++, --nr_pages > 0);
+
+		if (first &&
+		    atomic_add_return_relaxed(first, mapped) < ENTIRELY_MAPPED)
+			nr = first;
+
 		break;
 	case RMAP_LEVEL_PMD:
 		first = atomic_inc_and_test(&folio->_entire_mapcount);
@@ -1489,7 +1489,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		enum rmap_level level)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int last, nr = 0, nr_pmdmapped = 0;
+	int last = 0, nr = 0, nr_pmdmapped = 0;
 	bool partially_mapped = false;
 
 	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
@@ -1502,14 +1502,13 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		}
 
 		do {
-			last = atomic_add_negative(-1, &page->_mapcount);
-			if (last) {
-				last = atomic_dec_return_relaxed(mapped);
-				if (last < ENTIRELY_MAPPED)
-					nr++;
-			}
+			last += atomic_add_negative(-1, &page->_mapcount);
 		} while (page++, --nr_pages > 0);
 
+		if (last &&
+		    atomic_sub_return_relaxed(last, mapped) < ENTIRELY_MAPPED)
+			nr = last;
+
 		partially_mapped = nr && atomic_read(mapped);
 		break;
 	case RMAP_LEVEL_PMD: