From: Huang Ying <ying.huang@intel.com>
commit 8a8683ad9ba48b4b52a57f013513d1635c1ca5c4 upstream.
In set_pmd_migration_entry(), pmdp_invalidate() is used to change the PMD atomically. But the PMD is read before that with an ordinary memory read. If the THP (transparent huge page) is written between the PMD read and pmdp_invalidate(), the PMD dirty bit may be lost, which may cause data corruption. The race window is quite small, but still possible in theory, so it needs to be fixed.
The race is fixed by using the return value of pmdp_invalidate() to get the original content of the PMD; the invalidation is a read/modify/write atomic operation, so no THP write can occur in between.
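For illustration only, here is a minimal userspace analogy of the lost-bit pattern (not kernel code; the helper names and the C11 stdatomic mapping are my own): a plain load followed by a store can miss a bit set concurrently in the window between the two, while a single atomic exchange, like taking the old PMD from pmdp_invalidate()'s return value, always observes it.

	/* Userspace sketch of the pattern above, using C11 atomics. */
	#include <stdatomic.h>
	#include <stdio.h>

	#define DIRTY_BIT 0x1UL

	static _Atomic unsigned long entry;	/* stand-in for the PMD */

	/* Racy pattern (old code): snapshot with a plain load, then clobber.
	 * A DIRTY_BIT set by another thread between the load and the store is
	 * neither kept in 'entry' nor seen in the returned snapshot. */
	static unsigned long snapshot_then_clear_racy(void)
	{
		unsigned long old = atomic_load(&entry);
		/* race window: a concurrent writer may set DIRTY_BIT here */
		atomic_store(&entry, 0);
		return old;
	}

	/* Fixed pattern (new code): one read/modify/write step returns the old
	 * value, so any bit set before the exchange is always observed. */
	static unsigned long snapshot_then_clear_atomic(void)
	{
		return atomic_exchange(&entry, 0);
	}

	int main(void)
	{
		atomic_fetch_or(&entry, DIRTY_BIT);
		printf("racy snapshot:   %#lx\n", snapshot_then_clear_racy());
		atomic_fetch_or(&entry, DIRTY_BIT);
		printf("atomic snapshot: %#lx\n", snapshot_then_clear_atomic());
		return 0;
	}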
The race was introduced when THP migration support was added in commit 616b8371539a ("mm: thp: enable thp migration in generic path"). But this fix depends on commit d52605d7cb30 ("mm: do not lose dirty and accessed bits in pmdp_invalidate()"), so it is easy to backport to v4.16 and later. The race window is really small, though, so it may be fine not to backport the fix at all.
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: <stable@vger.kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Link: http://lkml.kernel.org/r/20200220075220.2327056-1-ying.huang@intel.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/huge_memory.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1469983..280b0e7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2949,8 +2949,7 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 		return;
 
 	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
-	pmdval = *pvmw->pmd;
-	pmdp_invalidate(vma, address, pvmw->pmd);
+	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
 	if (pmd_dirty(pmdval))
 		set_page_dirty(page);
 	entry = make_migration_entry(page, pmd_write(pmdval));