From: Hugh Dickins <hughd@google.com>
mainline inclusion
from mainline-v5.15
commit 050dcb5c85bb47f8151175ca5833aa882cc7fe0c
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I58GJ0
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
Patch series "huge tmpfs: shmem_is_huge() fixes and cleanups".
A series of huge tmpfs fixes and cleanups.
This patch (of 9):
shmem_fallocate() goes to a lot of trouble to leave its newly allocated pages !Uptodate, partly to identify and undo them on failure, partly to defer the overhead of clearing them until later. But the huge page case did not skip to the end of the extent: it walked through the tail pages one by one and appeared to work just fine; yet in doing so it cleared and Uptodated the huge page, so there was no way to undo it on failure.
And by setting Uptodate too soon, it messed up both its nr_falloced and nr_unswapped counts, so that the intended "time to give up" heuristic did not work at all.
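For context, the "time to give up" heuristic referred to here is the check at the top of shmem_fallocate()'s allocation loop, which looks roughly like this (paraphrased from mm/shmem.c; the exact form may differ between trees):

	if (signal_pending(current))
		error = -EINTR;
	else if (shmem_falloc.nr_unswapped > shmem_falloc.nr_falloced)
		error = -ENOMEM;
	else
		error = shmem_getpage(inode, index, &page, SGP_FALLOC);

With the tail pages prematurely Uptodate, both counters were miscounted, so this -ENOMEM cutoff could not behave as intended.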
Now advance immediately to the end of the huge extent, with a comment on why this is more than just an optimization. But although this speeds up huge tmpfs fallocation, it does leave the clearing until first use, and some users may have come to appreciate slow fallocate but fast first use: if they complain, then we can consider adding a pass to clear at the end.
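To make the advance arithmetic concrete, here is a minimal userspace sketch of the round_up()-and-wraparound logic added below (a standalone illustration only, not kernel code; it assumes HPAGE_PMD_NR is 512, i.e. 2MB huge pages of 4K base pages, and uses 64-bit values near ULONG_MAX to stand in for a 32-bit pgoff_t nearing overflow):

	#include <stdio.h>

	#define HPAGE_PMD_NR	512UL	/* assumption: 2M huge page / 4K pages */
	/* power-of-two round_up(), matching how the kernel macro behaves */
	#define round_up(x, y)	((((x) - 1UL) | ((y) - 1UL)) + 1UL)

	int main(void)
	{
		/* Just past the head page of a huge page: skip its tails. */
		unsigned long index = 1;

		index = round_up(index, HPAGE_PMD_NR);
		printf("advance: 1 -> %lu\n", index);	/* 512 */

		/* Near the top of the index space, round_up wraps to 0... */
		index = round_up(~0UL - 10, HPAGE_PMD_NR);
		if (!index)		/* ...so clamp back, as the patch does */
			index--;	/* index becomes ~0UL, so the loop ends */
		printf("clamped: index == ULONG_MAX? %d\n", index == ~0UL);
		return 0;
	}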
Link: https://lkml.kernel.org/r/da632211-8e3e-6b1-aee-ab24734429a0@google.com
Link: https://lkml.kernel.org/r/16201bd2-70e-37e2-e89b-5f929430da@google.com
Fixes: 800d8c63b2e9 ("shmem: add huge pages support")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chen Wandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 mm/shmem.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 701e9cd02adc..4414d9cbeb98 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2788,7 +2788,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 	inode->i_private = &shmem_falloc;
 	spin_unlock(&inode->i_lock);
 
-	for (index = start; index < end; index++) {
+	for (index = start; index < end; ) {
 		struct page *page;
 
 		/*
@@ -2811,13 +2811,26 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			goto undone;
 		}
 
+		index++;
+		/*
+		 * Here is a more important optimization than it appears:
+		 * a second SGP_FALLOC on the same huge page will clear it,
+		 * making it PageUptodate and un-undoable if we fail later.
+		 */
+		if (PageTransCompound(page)) {
+			index = round_up(index, HPAGE_PMD_NR);
+			/* Beware 32-bit wraparound */
+			if (!index)
+				index--;
+		}
+
 		/*
 		 * Inform shmem_writepage() how far we have reached.
 		 * No need for lock or barrier: we have the page lock.
 		 */
-		shmem_falloc.next++;
 		if (!PageUptodate(page))
-			shmem_falloc.nr_falloced++;
+			shmem_falloc.nr_falloced += index - shmem_falloc.next;
+		shmem_falloc.next = index;
 
 		/*
 		 * If !PageUptodate, leave it that way so that freeable pages
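A worked example of the new accounting (assuming HPAGE_PMD_NR == 512 and a fallocate starting at index 0, where the first SGP_FALLOC returns a freshly allocated, still-!Uptodate huge page): the cursor advances 1 -> 512 in a single iteration, and nr_falloced grows by 512 - 0, counting every subpage at once, where the old per-page loop counted only the head page before the tails went Uptodate. A minimal standalone sketch of that bookkeeping step (illustration only, not kernel code):

	#include <stdio.h>

	#define HPAGE_PMD_NR	512UL	/* assumption: 2M huge page / 4K pages */
	#define round_up(x, y)	((((x) - 1UL) | ((y) - 1UL)) + 1UL)

	int main(void)
	{
		unsigned long next = 0, nr_falloced = 0;  /* as in shmem_falloc */
		unsigned long index = 0;	/* loop cursor */
		int uptodate = 0;		/* fresh SGP_FALLOC page: !Uptodate */
		int trans_compound = 1;		/* lookup returned a huge page */

		index++;			/* past the head page */
		if (trans_compound)
			index = round_up(index, HPAGE_PMD_NR);	/* 1 -> 512 */

		if (!uptodate)
			nr_falloced += index - next;		/* += 512 */
		next = index;

		printf("next=%lu nr_falloced=%lu\n", next, nr_falloced);
		return 0;
	}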