hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9DN5Z
CVE: NA
--------------------------------
Since commit 730633f0b7f9 ("mm: Protect operations adding pages to page
cache with invalidate_lock"), mapping->invalidate_lock protects against
adding new folios into the page cache. So it is possible to disable large
folio support on active inodes, even though it might be dangerous.
Filesystems can disable it under mapping->invalidate_lock, and must drop
all page cache before clearing AS_LARGE_FOLIO_SUPPORT.
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
---
 include/linux/pagemap.h | 14 ++++++++++++++
 mm/readahead.c          |  6 ++++++
 2 files changed, 20 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 40403c365814..26a60ff9cfed 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -344,6 +344,20 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
 	__set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
 }
+/**
+ * mapping_clear_large_folios() - Disable large folio support for a file.
+ * @mapping: The file.
+ *
+ * The filesystem has to make sure the file is in an atomic context and all
+ * cached folios have been cleared under mapping->invalidate_lock before
+ * calling this function.
+ */
+static inline void mapping_clear_large_folios(struct address_space *mapping)
+{
+	WARN_ON_ONCE(!rwsem_is_locked(&mapping->invalidate_lock));
+	__clear_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+}
+
 /*
  * Large folio support currently depends on THP. These dependencies are
  * being worked on but are not yet fixed.
diff --git a/mm/readahead.c b/mm/readahead.c
index 4d0dbfd62d20..63c7320ba464 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -510,6 +510,12 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	}
 	filemap_invalidate_lock_shared(mapping);
+
+	if (unlikely(!mapping_large_folio_support(mapping))) {
+		filemap_invalidate_unlock_shared(mapping);
+		goto fallback;
+	}
+
 	while (index <= limit) {
 		unsigned int order = new_order;