From: David Hildenbrand <david@redhat.com>
mainline inclusion
from mainline-v6.7-rc1
commit a1f34ee1de2c3a55bc2a6b9a38e1ecd2830dcc03
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I98AW9
CVE: NA
-------------------------------------------------
If the swapin code should ever decide not to use order-0 pages and instead supply a PTE-mapped large folio, we will have to change how we call __folio_set_anon() -- eventually with exclusive=false and an adjusted address. For now, let's add a VM_WARN_ON_FOLIO() with a comment about the situation.
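
For illustration only, such a future caller might look roughly like the
sketch below. This is not part of the patch; the helper name, its
parameters, and the head-address arithmetic are assumptions, not an
existing API:

	/*
	 * Hypothetical sketch (not in this patch): anon-initializing a
	 * large folio that is mapped by a single PTE would require
	 * passing the address of the head page and exclusive=false,
	 * because only one PTE is known to be exclusive and
	 * folio->index is derived from the head address.
	 */
	static void folio_set_anon_pte_mapped(struct folio *folio,
					      struct vm_area_struct *vma,
					      struct page *page,
					      unsigned long address)
	{
		/* Step back from the mapped subpage to the head page. */
		unsigned long head_addr = address -
			folio_page_idx(folio, page) * PAGE_SIZE;

		/* Only the single PTE is exclusive, not the whole folio. */
		__folio_set_anon(folio, vma, head_addr, false);
	}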
Link: https://lkml.kernel.org/r/20230913125113.313322-5-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit a1f34ee1de2c3a55bc2a6b9a38e1ecd2830dcc03)
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/rmap.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/mm/rmap.c b/mm/rmap.c
index 56fac8547afc..bbc3ae731741 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1272,6 +1272,13 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 
 	if (unlikely(!folio_test_anon(folio))) {
 		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+		/*
+		 * For a PTE-mapped large folio, we only know that the single
+		 * PTE is exclusive. Further, __folio_set_anon() might not get
+		 * folio->index right when not given the address of the head
+		 * page.
+		 */
+		VM_WARN_ON_FOLIO(folio_test_large(folio) && !compound, folio);
 		__folio_set_anon(folio, vma, address,
 				 !!(flags & RMAP_EXCLUSIVE));
 	} else if (likely(!folio_test_ksm(folio))) {
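
For reference, the call pattern the new warning guards against would
look roughly like the sketch below. The swapin path shown is
hypothetical (it does not exist today), and idx/addr are illustrative
names:

	/*
	 * Hypothetical swapin path (does not exist today): mapping one
	 * subpage of a freshly allocated, not-yet-anon large folio via
	 * a single PTE.
	 */
	struct page *page = folio_page(folio, idx);	/* idx may be > 0 */

	/* compound == false here, so the new warning would fire: */
	page_add_anon_rmap(page, vma, addr, RMAP_EXCLUSIVE);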