[PATCH openEuler-1.0-LTS v5 0/9] support apei_page_offline_policy
Changes in v5: Change sysctl_apei_page_offline_policy bit0 to enable/disable driver notification. Backport some patches that have been merged. Since CONFIG_UCE_KERNEL_RECOVERY depends on CONFIG_ARM64, remove the redundant condition in migrate_page(). Changes in v4: Fix typo errors and backport bugfix for mc_copy. Changes in v3: Use bool rather than int as the return type of apei_page_should_offline(). Drop the PF_UCE_KERNEL_RECOVERY flag after soft-offlining the page. Changes in v2: Add a new sysctl_apei_page_offline_policy, instead of changing soft_offline_page. Jiaqi Yan (1): mm/memory-failure: userspace controls soft-offlining pages Kyle Meyer (1): mm/memory-failure: support disabling soft offline for HugeTLB pages Qi Xi (3): Revert "mm/memory-failure: support disabling soft offline for HugeTLB pages" Revert "arm64: mm: Add copy mc support for migrate_page" apei/ghes: Add sysctl interface to control soft-offline page isolation Wupeng Ma (3): uce: add copy_mc_highpage{s} arm64: mm: Add copy mc support for migrate_page arm64: mm: Add copy mc support for migrate_page Yongjian Sun (1): ext4: fix e4b bitmap inconsistency reports drivers/acpi/apei/ghes.c | 77 ++++++++++++++++++++++++++++++++++++++++ fs/ext4/mballoc.c | 21 +++++------ include/linux/highmem.h | 55 ++++++++++++++++++++++++++++ include/linux/mm.h | 7 ++++ kernel/sysctl.c | 21 +++++++++++ mm/memory-failure.c | 20 +++++++++-- mm/migrate.c | 58 ++++++++++++++++++++++++++++-- 7 files changed, 244 insertions(+), 15 deletions(-) -- 2.33.0
From: Wupeng Ma <mawupeng1@huawei.com> hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/ID6J4B -------------------------------- Introduce copy_mc_highpage{s} to properly handle uncorrectable memory errors (UCE) during kernel page copies. Rather than panicking on hardware memory errors, the implementation now safely propagates the error condition. Signed-off-by: Wupeng Ma <mawupeng1@huawei.com> --- include/linux/highmem.h | 55 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 55 insertions(+) diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 1fed918bb1e5..0baeb9112e63 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -267,6 +267,61 @@ static inline void copy_highpage(struct page *to, struct page *from) kunmap_atomic(vfrom); } +#ifdef CONFIG_UCE_KERNEL_RECOVERY +/* Return -EFAULT if there was a #MC during copy, otherwise 0 for success. */ +static inline int copy_mc_highpage(struct page *to, struct page *from) +{ + char *vfrom, *vto; + int ret; + + vfrom = kmap_atomic(from); + vto = kmap_atomic(to); + ret = copy_page_cow(vto, vfrom); + kunmap_atomic(vto); + kunmap_atomic(vfrom); + + return ret; +} + +/* Return -EFAULT if there was a #MC during copy, otherwise 0 for success. */ +static inline int copy_mc_highpages(struct page *to, struct page *from, + int nr_pages) +{ + int ret = 0; + int i; + + for (i = 0; i < nr_pages; i++) { + cond_resched(); + ret = copy_mc_highpage(to + i, from + i); + if (ret) + return -EFAULT; + } + + return ret; +} +#else +static inline int copy_mc_highpage(struct page *to, struct page *from) +{ + copy_highpage(to, from); + + return 0; +} + +/* Return -EFAULT if there was a #MC during copy, otherwise 0 for success. 
*/ +static inline int copy_mc_highpages(struct page *to, struct page *from, + int nr_pages) +{ + int i; + + for (i = 0; i < nr_pages; i++) { + cond_resched(); + (void)copy_mc_highpage(to + i, from + i); + } + + return 0; +} +#endif + #endif #endif /* _LINUX_HIGHMEM_H */ -- 2.33.0
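[Editor's note] The error-propagating copy pattern introduced by copy_mc_highpage{s} above can be sketched in plain C. This is an illustrative userspace model, not the kernel code: `copy_mc_page`, `copy_mc_pages`, the `faulty` flag, and `PAGE_SZ` are toy stand-ins, and the simulated fault plays the role of an uncorrectable memory error consumed during the copy.

```c
#include <errno.h>
#include <string.h>

#define PAGE_SZ 64 /* toy page size; real kernel pages are 4K or larger */

/* Toy stand-in for copy_mc_highpage(): return -EFAULT on a (simulated)
 * uncorrectable error instead of letting the machine panic. */
static int copy_mc_page(char *dst, const char *src, int faulty)
{
	if (faulty)
		return -EFAULT; /* simulated UCE hit while reading the source */
	memcpy(dst, src, PAGE_SZ);
	return 0;
}

/* Mirrors copy_mc_highpages(): copy page by page, bail out on the
 * first faulting page so the caller can abort the whole operation. */
static int copy_mc_pages(char *dst, const char *src, int nr, int fault_at)
{
	int i;

	for (i = 0; i < nr; i++) {
		if (copy_mc_page(dst + i * PAGE_SZ, src + i * PAGE_SZ,
				 i == fault_at))
			return -EFAULT;
	}
	return 0;
}
```

The key design point is that the copy becomes fallible: callers must check the return value, whereas copy_highpage() has no way to report a consumed error.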
From: Wupeng Ma <mawupeng1@huawei.com> hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/ID6J4B -------------------------------- When encountering memory faults during page migration on arm64 systems, this change ensures the faulting user process is terminated instead of causing a kernel panic. The implementation adds proper error handling for copy operations in migrate_page(). To enable this, bit 1 for uce_kernel_recovery should be enabled: - echo 1 > /proc/sys/kernel/uce_kernel_recovery Signed-off-by: Wupeng Ma <mawupeng1@huawei.com> --- mm/migrate.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 70 insertions(+), 9 deletions(-) diff --git a/mm/migrate.c b/mm/migrate.c index f8c379a0b9b9..83f95fa8f4c5 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -47,6 +47,7 @@ #include <linux/page_owner.h> #include <linux/sched/mm.h> #include <linux/ptrace.h> +#include <linux/highmem.h> #include <asm/tlbflush.h> @@ -640,24 +641,33 @@ int migrate_huge_page_move_mapping(struct address_space *mapping, * arithmetic will work across the entire page. We need something more * specialized. 
*/ -static void __copy_gigantic_page(struct page *dst, struct page *src, - int nr_pages) +static int __copy_gigantic_page(struct page *dst, struct page *src, + int nr_pages, bool mc) { - int i; + int i, ret = 0; struct page *dst_base = dst; struct page *src_base = src; for (i = 0; i < nr_pages; ) { cond_resched(); - copy_highpage(dst, src); + + if (mc) { + ret = copy_mc_highpage(dst, src); + if (ret) + return -EFAULT; + } else { + copy_highpage(dst, src); + } i++; dst = mem_map_next(dst, dst_base, i); src = mem_map_next(src, src_base, i); } + + return ret; } -static void copy_huge_page(struct page *dst, struct page *src) +static int __copy_huge_page(struct page *dst, struct page *src, bool mc) { int i; int nr_pages; @@ -667,20 +677,32 @@ static void copy_huge_page(struct page *dst, struct page *src) struct hstate *h = page_hstate(src); nr_pages = pages_per_huge_page(h); - if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) { - __copy_gigantic_page(dst, src, nr_pages); - return; - } + if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) + return __copy_gigantic_page(dst, src, nr_pages, mc); } else { /* thp page */ BUG_ON(!PageTransHuge(src)); nr_pages = hpage_nr_pages(src); } + if (mc) + return copy_mc_highpages(dst, src, nr_pages); + for (i = 0; i < nr_pages; i++) { cond_resched(); copy_highpage(dst + i, src + i); } + return 0; +} + +static int copy_huge_page(struct page *dst, struct page *src) +{ + return __copy_huge_page(dst, src, false); +} + +static int copy_mc_huge_page(struct page *dst, struct page *src) +{ + return __copy_huge_page(dst, src, true); } /* @@ -756,6 +778,38 @@ void migrate_page_copy(struct page *newpage, struct page *page) } EXPORT_SYMBOL(migrate_page_copy); +static int migrate_page_copy_mc(struct page *newpage, struct page *page) +{ + int rc; + + if (PageHuge(page) || PageTransHuge(page)) + rc = copy_mc_huge_page(newpage, page); + else + rc = copy_mc_highpage(newpage, page); + + return rc; +} + +static int migrate_page_mc_extra(struct address_space *mapping, 
+ struct page *newpage, struct page *page, + enum migrate_mode mode, int extra_count) +{ + int rc; + + rc = migrate_page_copy_mc(newpage, page); + if (rc) + return rc; + + rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, + extra_count); + if (rc != MIGRATEPAGE_SUCCESS) + return rc; + + migrate_page_states(newpage, page); + + return rc; +} + /************************************************************ * Migration functions ***********************************************************/ @@ -774,6 +828,13 @@ int migrate_page(struct address_space *mapping, BUG_ON(PageWriteback(page)); /* Writeback must be complete */ +#ifdef CONFIG_UCE_KERNEL_RECOVERY + if (IS_ENABLED(CONFIG_ARM64) && + is_cow_kernel_recovery_enable() && + (mode != MIGRATE_SYNC_NO_COPY)) + return migrate_page_mc_extra(mapping, newpage, page, mode, 0); +#endif + rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); if (rc != MIGRATEPAGE_SUCCESS) -- 2.33.0
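[Editor's note] The ordering in migrate_page_mc_extra() above is the interesting part: unlike the normal path, the page contents are copied *before* the mapping is moved, so a copy fault aborts the migration while the old page is still intact. A toy model of that ordering (all names here, `toy_page`, `migrate_mc`, etc., are illustrative, not kernel APIs):

```c
#include <errno.h>

#define MIGRATEPAGE_SUCCESS 0

struct toy_page {
	int data;
	int mapped_to; /* toy stand-in for the address_space mapping */
};

static int copy_mc(struct toy_page *dst, struct toy_page *src, int fault)
{
	if (fault)
		return -EFAULT; /* simulated UCE while reading the source */
	dst->data = src->data;
	return 0;
}

static int move_mapping(struct toy_page *newpg, struct toy_page *oldpg)
{
	newpg->mapped_to = oldpg->mapped_to;
	oldpg->mapped_to = -1;
	return MIGRATEPAGE_SUCCESS;
}

/* Copy first, then move the mapping: a faulting copy aborts the
 * migration while the old page is still fully mapped and usable. */
static int migrate_mc(struct toy_page *newpg, struct toy_page *oldpg, int fault)
{
	int rc = copy_mc(newpg, oldpg, fault);

	if (rc)
		return rc;
	return move_mapping(newpg, oldpg);
}
```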
From: Jiaqi Yan <jiaqiyan@google.com> mainline inclusion from mainline-v6.11-rc1 commit 56374430c5dfcf6d4f1df79514f797b45fbd0485 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/ID6J4B Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... -------------------------------- Correctable memory errors are very common on servers with large amount of memory, and are corrected by ECC. Soft offline is kernel's additional recovery handling for memory pages having (excessive) corrected memory errors. Impacted page is migrated to a healthy page if inuse; the original page is discarded for any future use. The actual policy on whether (and when) to soft offline should be maintained by userspace, especially in case of an 1G HugeTLB page. Soft-offline dissolves the HugeTLB page, either in-use or free, into chunks of 4K pages, reducing HugeTLB pool capacity by 1 hugepage. If userspace has not acknowledged such behavior, it may be surprised when later failed to mmap hugepages due to lack of hugepages. In case of a transparent hugepage, it will be split into 4K pages as well; userspace will stop enjoying the transparent performance. In addition, discarding the entire 1G HugeTLB page only because of corrected memory errors sounds very costly and kernel better not doing under the hood. But today there are at least 2 such cases doing so: 1. when GHES driver sees both GHES_SEV_CORRECTED and CPER_SEC_ERROR_THRESHOLD_EXCEEDED after parsing CPER. 2. RAS Correctable Errors Collector counts correctable errors per PFN and when the counter for a PFN reaches threshold In both cases, userspace has no control of the soft offline performed by kernel's memory failure recovery. This commit gives userspace the control of softofflining any page: kernel only soft offlines raw page / transparent hugepage / HugeTLB hugepage if userspace has agreed to. The interface to userspace is a new sysctl at /proc/sys/vm/enable_soft_offline. 
By default its value is set to 1 to preserve existing behavior in kernel. When set to 0, soft-offline (e.g. MADV_SOFT_OFFLINE) will fail with EOPNOTSUPP. [jiaqiyan@google.com: v7] Link: https://lkml.kernel.org/r/20240628205958.2845610-3-jiaqiyan@google.com Link: https://lkml.kernel.org/r/20240626050818.2277273-3-jiaqiyan@google.com Signed-off-by: Jiaqi Yan <jiaqiyan@google.com> Acked-by: Miaohe Lin <linmiaohe@huawei.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Frank van der Linden <fvdl@google.com> Cc: Jane Chu <jane.chu@oracle.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Lance Yang <ioworker0@gmail.com> Cc: Muchun Song <muchun.song@linux.dev> Cc: Naoya Horiguchi <nao.horiguchi@gmail.com> Cc: Oscar Salvador <osalvador@suse.de> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Wupeng Ma <mawupeng1@huawei.com> --- include/linux/mm.h | 1 + kernel/sysctl.c | 9 +++++++++ mm/memory-failure.c | 13 ++++++++++++- 3 files changed, 22 insertions(+), 1 deletion(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 67e299374ac8..0274a82144e4 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2873,6 +2873,7 @@ extern int get_hwpoison_page(struct page *page); #define put_hwpoison_page(page) put_page(page) extern int sysctl_memory_failure_early_kill; extern int sysctl_memory_failure_recovery; +extern int sysctl_enable_soft_offline; extern void shake_page(struct page *p, int access); extern atomic_long_t num_poisoned_pages __read_mostly; extern int soft_offline_page(struct page *page, int flags); diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 0d1f07dc7b44..f35a1990456e 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1824,6 +1824,15 @@ static struct ctl_table vm_table[] = { .extra1 = &zero, .extra2 = &one, }, + { + .procname = "enable_soft_offline", + .data = &sysctl_enable_soft_offline, + .maxlen = sizeof(sysctl_enable_soft_offline), + .mode = 
0644, + .proc_handler = proc_dointvec_minmax, + .extra1 = &zero, + .extra2 = &one, + }, #endif { .procname = "user_reserve_kbytes", diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 28bd5d6ed1bf..ad416016d1e9 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -67,6 +67,8 @@ int sysctl_memory_failure_early_kill __read_mostly = 0; int sysctl_memory_failure_recovery __read_mostly = 1; +int sysctl_enable_soft_offline __read_mostly = 1; + atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0); static bool page_handle_poison(struct page *page, bool hugepage_or_freepage, bool release) @@ -1996,7 +1998,9 @@ static int soft_offline_free_page(struct page *page) * @page: page to offline * @flags: flags. Same as memory_failure(). * - * Returns 0 on success, otherwise negated errno. + * Returns 0 on success, + * -EOPNOTSUPP for disabled by /proc/sys/vm/enable_soft_offline, + * < 0 otherwise negated errno. * * Soft offline a page, by migration or invalidation, * without killing anything. This is for the case when @@ -2027,6 +2031,13 @@ int soft_offline_page(struct page *page, int flags) return -EIO; } + if (!sysctl_enable_soft_offline) { + pr_info_once("disabled by /proc/sys/vm/enable_soft_offline\n"); + if (flags & MF_COUNT_INCREASED) + put_page(page); + return -EOPNOTSUPP; + } + if (PageHWPoison(page)) { pr_info("soft offline: %#lx page already poisoned\n", pfn); if (flags & MF_COUNT_INCREASED) -- 2.33.0
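[Editor's note] Besides the 0/1 gate itself, the patch above is careful to drop the page reference taken under MF_COUNT_INCREASED on the new early-return path. A minimal userspace sketch of that cleanup obligation (`toy_page`, `toy_soft_offline_page` are illustrative stand-ins, not kernel functions):

```c
#include <errno.h>

#define MF_COUNT_INCREASED 1

struct toy_page {
	int refcount;
};

static void put_page(struct toy_page *p)
{
	p->refcount--;
}

/* Mirrors the gate added to soft_offline_page(): when the sysctl is 0
 * the call fails with -EOPNOTSUPP, but a reference the caller took
 * (signalled by MF_COUNT_INCREASED) must still be dropped. */
static int toy_soft_offline_page(struct toy_page *page, int flags, int enabled)
{
	if (!enabled) {
		if (flags & MF_COUNT_INCREASED)
			put_page(page);
		return -EOPNOTSUPP;
	}
	return 0; /* actual offlining elided */
}
```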
From: Kyle Meyer <kyle.meyer@hpe.com> maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/ID6J4B Reference: https://lore.kernel.org/all/aMiu_Uku6Y5ZbuhM@hpe.com/T/#u -------------------------------- Some BIOS suppress ("cloak") corrected memory errors until a threshold is reached. Once that threshold is reached, BIOS reports a CPER with the "error threshold exceeded" bit set via GHES and the corresponding page is soft offlined. BIOS does not know the page type of the corresponding page. If the corresponding page happens to be a HugeTLB page, it will be dissolved, permanently reducing the HugeTLB page pool. This can be problematic for workloads that depend on a fixed number of HugeTLB pages. Currently, soft offline must be disabled to prevent HugeTLB pages from being soft offlined. This patch provides a middle ground. Soft offline can be disabled for HugeTLB pages while remaining enabled for non-HugeTLB pages, preserving the benefits of soft offline without the risk of BIOS soft offlining HugeTLB pages. Commit 56374430c5dfc ("mm/memory-failure: userspace controls soft-offlining pages") introduced the following sysctl interface to control soft offline: /proc/sys/vm/enable_soft_offline The interface does not distinguish between page types: 0 - Soft offline is disabled 1 - Soft offline is enabled Convert enable_soft_offline to a bitmask and support disabling soft offline for HugeTLB pages: Bits: 0 - Enable soft offline 1 - Disable soft offline for HugeTLB pages Supported values: 0 - Soft offline is disabled 1 - Soft offline is enabled 3 - Soft offline is enabled (disabled for HugeTLB pages) Existing behavior is preserved. Update documentation and HugeTLB soft offline self tests. Tony said: : Recap of original problem is that some BIOS keep track of error : threshold per-rank and use this GHES mechanism to report threshold : exceeded on the rank. 
: : Systems that stay up a long time can accumulate enough soft errors to : trigger this threshold. But the action of taking a page offline isn't : going to help. For a 4K page this is merely annoying. For 1G page it : can mess things up badly. : : My original patch for this just skipped the GHES->offline process for : huge pages. But I wasn't aware of the sysctl control. That provides a : better solution. Link: https://lkml.kernel.org/r/aMiu_Uku6Y5ZbuhM@hpe.com Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com> Reported-by: Shawn Fan <shawn.fan@intel.com> Suggested-by: Tony Luck <tony.luck@intel.com> Cc: Borislav Betkov <bp@alien8.de> Cc: David Hildenbrand <david@redhat.com> Cc: Jane Chu <jane.chu@oracle.com> Cc: Jan Kara <jack@suse.cz> Cc: Jiaqi Yan <jiaqiyan@google.com> Cc: Joel Granados <joel.granados@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Miaohe Lin <linmiaohe@huawei.com> Cc: Michal Clapinski <mclapinski@google.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Naoya Horiguchi <nao.horiguchi@gmail.com> Cc: Oscar Salvador <osalvador@suse.de> Cc: Russ Anderson <russ.anderson@hpe.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Yafang <laoar.shao@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- .../ABI/testing/sysfs-memory-page-offline | 3 +++ kernel/sysctl.c | 2 +- mm/memory-failure.c | 16 ++++++++++++++-- 3 files changed, 18 insertions(+), 3 deletions(-) diff --git a/Documentation/ABI/testing/sysfs-memory-page-offline b/Documentation/ABI/testing/sysfs-memory-page-offline index e14703f12fdf..93285bbadc9e 100644 --- a/Documentation/ABI/testing/sysfs-memory-page-offline +++ b/Documentation/ABI/testing/sysfs-memory-page-offline @@ -20,6 +20,9 @@ Description: number, or a error when the offlining 
failed. Reading the file is not allowed. + Soft-offline can be controlled via sysctl, see: + Documentation/admin-guide/sysctl/vm.rst + What: /sys/devices/system/memory/hard_offline_page Date: Sep 2009 KernelVersion: 2.6.33 diff --git a/kernel/sysctl.c b/kernel/sysctl.c index f35a1990456e..88f92eff7bf2 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1831,7 +1831,7 @@ static struct ctl_table vm_table[] = { .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = &zero, - .extra2 = &one, + .extra2 = &three, }, #endif { diff --git a/mm/memory-failure.c b/mm/memory-failure.c index ad416016d1e9..7fbc9c214da9 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -63,11 +63,14 @@ #include "internal.h" #include "ras/ras_event.h" +#define SOFT_OFFLINE_ENABLED BIT(0) +#define SOFT_OFFLINE_SKIP_HUGETLB BIT(1) + int sysctl_memory_failure_early_kill __read_mostly = 0; int sysctl_memory_failure_recovery __read_mostly = 1; -int sysctl_enable_soft_offline __read_mostly = 1; +int sysctl_enable_soft_offline __read_mostly = SOFT_OFFLINE_ENABLED; atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0); @@ -2031,13 +2034,22 @@ int soft_offline_page(struct page *page, int flags) return -EIO; } - if (!sysctl_enable_soft_offline) { + if (!(sysctl_enable_soft_offline & SOFT_OFFLINE_ENABLED)) { pr_info_once("disabled by /proc/sys/vm/enable_soft_offline\n"); if (flags & MF_COUNT_INCREASED) put_page(page); return -EOPNOTSUPP; } + if (sysctl_enable_soft_offline & SOFT_OFFLINE_SKIP_HUGETLB) { + if (PageHuge(page)) { + pr_info_once("disabled for HugeTLB pages by /proc/sys/vm/enable_soft_offline\n"); + if (flags & MF_COUNT_INCREASED) + put_page(page); + return -EOPNOTSUPP; + } + } + if (PageHWPoison(page)) { pr_info("soft offline: %#lx page already poisoned\n", pfn); if (flags & MF_COUNT_INCREASED) -- 2.33.0
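[Editor's note] The bitmask semantics above (0 = disabled, 1 = enabled, 3 = enabled except HugeTLB) reduce to two independent checks. A small sketch using the patch's bit definitions; `soft_offline_allowed` is an illustrative condensation of the two checks in soft_offline_page(), not a kernel function:

```c
#include <errno.h>

#define BIT(n) (1U << (n))
#define SOFT_OFFLINE_ENABLED      BIT(0)
#define SOFT_OFFLINE_SKIP_HUGETLB BIT(1)

/* Condenses the two checks the patch adds to soft_offline_page(). */
static int soft_offline_allowed(unsigned int policy, int is_hugetlb)
{
	if (!(policy & SOFT_OFFLINE_ENABLED))
		return -EOPNOTSUPP; /* soft offline disabled entirely */
	if ((policy & SOFT_OFFLINE_SKIP_HUGETLB) && is_hugetlb)
		return -EOPNOTSUPP; /* enabled, but not for HugeTLB pages */
	return 0;
}
```

Note why the supported values are 0, 1, 3 and not 2: with bit 0 clear, bit 1 is irrelevant because the first check already rejects everything.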
hulk inclusion category: feature bugzilla: https://atomgit.com/openeuler/kernel/issues/8489 -------------------------------- This reverts commit 6c86fa23cd7eeda91654934e755a752b0149dfb5. Commit 6c86fa23cd7e ("mm/memory-failure: support disabling soft offline for HugeTLB pages") introduced a new bit of sysctl_enable_soft_offline to support disabling soft offline for HugeTLB pages only. It is no longer needed. Let's revert it. Signed-off-by: Qi Xi <xiqi2@huawei.com> --- .../ABI/testing/sysfs-memory-page-offline | 3 --- kernel/sysctl.c | 2 +- mm/memory-failure.c | 16 ++-------------- 3 files changed, 3 insertions(+), 18 deletions(-) diff --git a/Documentation/ABI/testing/sysfs-memory-page-offline b/Documentation/ABI/testing/sysfs-memory-page-offline index 93285bbadc9e..e14703f12fdf 100644 --- a/Documentation/ABI/testing/sysfs-memory-page-offline +++ b/Documentation/ABI/testing/sysfs-memory-page-offline @@ -20,9 +20,6 @@ Description: number, or a error when the offlining failed. Reading the file is not allowed. 
- Soft-offline can be controlled via sysctl, see: - Documentation/admin-guide/sysctl/vm.rst - What: /sys/devices/system/memory/hard_offline_page Date: Sep 2009 KernelVersion: 2.6.33 diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 88f92eff7bf2..f35a1990456e 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1831,7 +1831,7 @@ static struct ctl_table vm_table[] = { .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = &zero, - .extra2 = &three, + .extra2 = &one, }, #endif { diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 7fbc9c214da9..ad416016d1e9 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -63,14 +63,11 @@ #include "internal.h" #include "ras/ras_event.h" -#define SOFT_OFFLINE_ENABLED BIT(0) -#define SOFT_OFFLINE_SKIP_HUGETLB BIT(1) - int sysctl_memory_failure_early_kill __read_mostly = 0; int sysctl_memory_failure_recovery __read_mostly = 1; -int sysctl_enable_soft_offline __read_mostly = SOFT_OFFLINE_ENABLED; +int sysctl_enable_soft_offline __read_mostly = 1; atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0); @@ -2034,22 +2031,13 @@ int soft_offline_page(struct page *page, int flags) return -EIO; } - if (!(sysctl_enable_soft_offline & SOFT_OFFLINE_ENABLED)) { + if (!sysctl_enable_soft_offline) { pr_info_once("disabled by /proc/sys/vm/enable_soft_offline\n"); if (flags & MF_COUNT_INCREASED) put_page(page); return -EOPNOTSUPP; } - if (sysctl_enable_soft_offline & SOFT_OFFLINE_SKIP_HUGETLB) { - if (PageHuge(page)) { - pr_info_once("disabled for HugeTLB pages by /proc/sys/vm/enable_soft_offline\n"); - if (flags & MF_COUNT_INCREASED) - put_page(page); - return -EOPNOTSUPP; - } - } - if (PageHWPoison(page)) { pr_info("soft offline: %#lx page already poisoned\n", pfn); if (flags & MF_COUNT_INCREASED) -- 2.33.0
hulk inclusion category: feature bugzilla: https://atomgit.com/openeuler/kernel/issues/8489 -------------------------------- This reverts commit 5c20a4adee2784f68d06b8a7c5d39dea49004e2e. Commit 5c20a4adee27 ("arm64: mm: Add copy mc support for migrate_page") adds copy mc support for both normal pages and hugetlb in page migration. However, hugetlb is not enabled for page migration. So we can revert it. Signed-off-by: Qi Xi <xiqi2@huawei.com> --- mm/migrate.c | 79 ++++++---------------------------------------------- 1 file changed, 9 insertions(+), 70 deletions(-) diff --git a/mm/migrate.c b/mm/migrate.c index 83f95fa8f4c5..f8c379a0b9b9 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -47,7 +47,6 @@ #include <linux/page_owner.h> #include <linux/sched/mm.h> #include <linux/ptrace.h> -#include <linux/highmem.h> #include <asm/tlbflush.h> @@ -641,33 +640,24 @@ int migrate_huge_page_move_mapping(struct address_space *mapping, * arithmetic will work across the entire page. We need something more * specialized. 
*/ -static int __copy_gigantic_page(struct page *dst, struct page *src, - int nr_pages, bool mc) +static void __copy_gigantic_page(struct page *dst, struct page *src, + int nr_pages) { - int i, ret = 0; + int i; struct page *dst_base = dst; struct page *src_base = src; for (i = 0; i < nr_pages; ) { cond_resched(); - - if (mc) { - ret = copy_mc_highpage(dst, src); - if (ret) - return -EFAULT; - } else { - copy_highpage(dst, src); - } + copy_highpage(dst, src); i++; dst = mem_map_next(dst, dst_base, i); src = mem_map_next(src, src_base, i); } - - return ret; } -static int __copy_huge_page(struct page *dst, struct page *src, bool mc) +static void copy_huge_page(struct page *dst, struct page *src) { int i; int nr_pages; @@ -677,32 +667,20 @@ static int __copy_huge_page(struct page *dst, struct page *src, bool mc) struct hstate *h = page_hstate(src); nr_pages = pages_per_huge_page(h); - if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) - return __copy_gigantic_page(dst, src, nr_pages, mc); + if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) { + __copy_gigantic_page(dst, src, nr_pages); + return; + } } else { /* thp page */ BUG_ON(!PageTransHuge(src)); nr_pages = hpage_nr_pages(src); } - if (mc) - return copy_mc_highpages(dst, src, nr_pages); - for (i = 0; i < nr_pages; i++) { cond_resched(); copy_highpage(dst + i, src + i); } - return 0; -} - -static int copy_huge_page(struct page *dst, struct page *src) -{ - return __copy_huge_page(dst, src, false); -} - -static int copy_mc_huge_page(struct page *dst, struct page *src) -{ - return __copy_huge_page(dst, src, true); } /* @@ -778,38 +756,6 @@ void migrate_page_copy(struct page *newpage, struct page *page) } EXPORT_SYMBOL(migrate_page_copy); -static int migrate_page_copy_mc(struct page *newpage, struct page *page) -{ - int rc; - - if (PageHuge(page) || PageTransHuge(page)) - rc = copy_mc_huge_page(newpage, page); - else - rc = copy_mc_highpage(newpage, page); - - return rc; -} - -static int migrate_page_mc_extra(struct address_space 
*mapping, - struct page *newpage, struct page *page, - enum migrate_mode mode, int extra_count) -{ - int rc; - - rc = migrate_page_copy_mc(newpage, page); - if (rc) - return rc; - - rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, - extra_count); - if (rc != MIGRATEPAGE_SUCCESS) - return rc; - - migrate_page_states(newpage, page); - - return rc; -} - /************************************************************ * Migration functions ***********************************************************/ @@ -828,13 +774,6 @@ int migrate_page(struct address_space *mapping, BUG_ON(PageWriteback(page)); /* Writeback must be complete */ -#ifdef CONFIG_UCE_KERNEL_RECOVERY - if (IS_ENABLED(CONFIG_ARM64) && - is_cow_kernel_recovery_enable() && - (mode != MIGRATE_SYNC_NO_COPY)) - return migrate_page_mc_extra(mapping, newpage, page, mode, 0); -#endif - rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); if (rc != MIGRATEPAGE_SUCCESS) -- 2.33.0
From: Wupeng Ma <mawupeng1@huawei.com> hulk inclusion category: feature bugzilla: https://atomgit.com/openeuler/kernel/issues/8489 -------------------------------- When encountering memory faults during page migration on arm64 systems, this change ensures the faulting user process is terminated instead of causing a kernel panic. The implementation adds proper error handling for copy operations in migrate_page(). To enable this, bit 1 for uce_kernel_recovery should be enabled: - echo 1 > /proc/sys/kernel/uce_kernel_recovery Signed-off-by: Wupeng Ma <mawupeng1@huawei.com> Signed-off-by: Qi Xi <xiqi2@huawei.com> --- mm/memory-failure.c | 7 ++++-- mm/migrate.c | 58 +++++++++++++++++++++++++++++++++++++++++++-- 2 files changed, 61 insertions(+), 4 deletions(-) diff --git a/mm/memory-failure.c b/mm/memory-failure.c index ad416016d1e9..908be8e32823 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -60,6 +60,7 @@ #include <linux/ratelimit.h> #include <linux/page-isolation.h> #include <linux/shmem_fs.h> +#include <linux/sched.h> #include "internal.h" #include "ras/ras_event.h" @@ -2050,9 +2051,11 @@ int soft_offline_page(struct page *page, int flags) ret = get_any_page(page, pfn, flags); put_online_mems(); - if (ret > 0) + if (ret > 0) { + current->flags |= PF_UCE_KERNEL_RECOVERY; ret = soft_offline_in_use_page(page); - else if (ret == 0) + current->flags &= ~PF_UCE_KERNEL_RECOVERY; + } else if (ret == 0) if (soft_offline_free_page(page) && try_again) { try_again = false; flags &= ~MF_COUNT_INCREASED; diff --git a/mm/migrate.c b/mm/migrate.c index f8c379a0b9b9..51ae1c0018da 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -47,6 +47,7 @@ #include <linux/page_owner.h> #include <linux/sched/mm.h> #include <linux/ptrace.h> +#include <linux/highmem.h> #include <asm/tlbflush.h> @@ -657,7 +658,7 @@ static void __copy_gigantic_page(struct page *dst, struct page *src, } } -static void copy_huge_page(struct page *dst, struct page *src) +static int __copy_huge_page(struct 
page *dst, struct page *src, bool mc) { int i; int nr_pages; @@ -669,7 +670,7 @@ static void copy_huge_page(struct page *dst, struct page *src) if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) { __copy_gigantic_page(dst, src, nr_pages); - return; + return 0; } } else { /* thp page */ @@ -677,10 +678,24 @@ static void copy_huge_page(struct page *dst, struct page *src) nr_pages = hpage_nr_pages(src); } + if (mc) + return copy_mc_highpages(dst, src, nr_pages); + for (i = 0; i < nr_pages; i++) { cond_resched(); copy_highpage(dst + i, src + i); } + return 0; +} + +static int copy_huge_page(struct page *dst, struct page *src) +{ + return __copy_huge_page(dst, src, false); +} + +static int copy_mc_huge_page(struct page *dst, struct page *src) +{ + return __copy_huge_page(dst, src, true); } /* @@ -756,6 +771,38 @@ void migrate_page_copy(struct page *newpage, struct page *page) } EXPORT_SYMBOL(migrate_page_copy); +static int migrate_page_copy_mc(struct page *newpage, struct page *page) +{ + int rc; + + if (PageHuge(page) || PageTransHuge(page)) + rc = copy_mc_huge_page(newpage, page); + else + rc = copy_mc_highpage(newpage, page); + + return rc; +} + +static int migrate_page_mc_extra(struct address_space *mapping, + struct page *newpage, struct page *page, + enum migrate_mode mode, int extra_count) +{ + int rc; + + rc = migrate_page_copy_mc(newpage, page); + if (rc) + return rc; + + rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, + extra_count); + if (rc != MIGRATEPAGE_SUCCESS) + return rc; + + migrate_page_states(newpage, page); + + return rc; +} + /************************************************************ * Migration functions ***********************************************************/ @@ -774,6 +821,13 @@ int migrate_page(struct address_space *mapping, BUG_ON(PageWriteback(page)); /* Writeback must be complete */ +#ifdef CONFIG_UCE_KERNEL_RECOVERY + if (is_cow_kernel_recovery_enable() && + (mode != MIGRATE_SYNC_NO_COPY) && + current->flags & 
PF_UCE_KERNEL_RECOVERY) + return migrate_page_mc_extra(mapping, newpage, page, mode, 0); +#endif + rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); if (rc != MIGRATEPAGE_SUCCESS) -- 2.33.0
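[Editor's note] The PF_UCE_KERNEL_RECOVERY handling above follows a set/clear bracketing pattern: the task flag is raised only around soft_offline_in_use_page(), so the MC-aware migrate_page() path triggers only for copies done on behalf of soft offline. A toy model of that scoping (the flag's bit value here is hypothetical, and the `toy_*` functions are illustrative stand-ins):

```c
#define PF_UCE_KERNEL_RECOVERY (1U << 25) /* hypothetical bit value */

static unsigned int current_flags; /* stand-in for current->flags */

/* What the patched migrate_page() effectively asks. */
static int in_soft_offline_copy(void)
{
	return !!(current_flags & PF_UCE_KERNEL_RECOVERY);
}

static int toy_soft_offline_in_use_page(int *seen)
{
	*seen = in_soft_offline_copy(); /* flag is visible at copy time */
	return 0;
}

/* Mirrors the bracketing in soft_offline_page(): the flag is set only
 * for the duration of the in-use-page migration and always cleared
 * afterwards (the v3 changelog item). */
static int toy_soft_offline_page(int *seen)
{
	int ret;

	current_flags |= PF_UCE_KERNEL_RECOVERY;
	ret = toy_soft_offline_in_use_page(seen);
	current_flags &= ~PF_UCE_KERNEL_RECOVERY;
	return ret;
}
```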
hulk inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8489

--------------------------------

Add sysctl interface to control GHES triggered soft-offline page
handling:
- BIT0: Enable/disable driver notification (for BMC reporting, default 0)
- BIT1: Enable/disable soft-offline for base pages (default 1)
- BIT2: Enable/disable soft-offline for HugeTLB pages (default 0)

Only BIT0 changes trigger notifier chain for driver notification.
BIT1-2 control whether to perform soft-offline without notifications.
Default policy (0x2) performs soft-offline on base pages only.

Signed-off-by: Qi Xi <xiqi2@huawei.com>
---
 drivers/acpi/apei/ghes.c | 77 ++++++++++++++++++++++++++++++++++++++++
 include/linux/mm.h       |  6 ++++
 kernel/sysctl.c          | 12 +++++++
 3 files changed, 95 insertions(+)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 73a04c0b19eb..5ab67e8898d0 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -46,6 +46,7 @@
 #include <linux/sched/clock.h>
 #include <linux/uuid.h>
 #include <linux/ras.h>
+#include <linux/mm.h>
 
 #include <acpi/actbl1.h>
 #include <acpi/ghes.h>
@@ -400,6 +401,79 @@ static void ghes_clear_estatus(struct ghes *ghes)
 	ghes->flags &= ~GHES_TO_CLEAR;
 }
 
+#ifdef CONFIG_ACPI_APEI_MEMORY_FAILURE
+#define APEI_PAGE_OFFLINE_NOTIFY		BIT(0)
+#define APEI_PAGE_OFFLINE_ALLOW_BASE_PAGE	BIT(1)
+#define APEI_PAGE_OFFLINE_ALLOW_HUGETLB		BIT(2)
+
+int sysctl_apei_page_offline_policy __read_mostly =
+	APEI_PAGE_OFFLINE_ALLOW_BASE_PAGE;
+EXPORT_SYMBOL(sysctl_apei_page_offline_policy);
+
+static ATOMIC_NOTIFIER_HEAD(apei_page_offline_notifier_chain);
+
+int register_apei_page_offline_notifier(struct notifier_block *nb)
+{
+	return atomic_notifier_chain_register(
+			&apei_page_offline_notifier_chain, nb);
+}
+EXPORT_SYMBOL(register_apei_page_offline_notifier);
+
+int unregister_apei_page_offline_notifier(struct notifier_block *nb)
+{
+	return atomic_notifier_chain_unregister(
+			&apei_page_offline_notifier_chain, nb);
+}
+EXPORT_SYMBOL(unregister_apei_page_offline_notifier);
+
+int apei_page_offline_policy_handler(struct ctl_table *table,
+		int write, void __user *buffer,
+		size_t *lenp, loff_t *ppos)
+{
+	int old_val, new_val;
+	int ret;
+
+	old_val = sysctl_apei_page_offline_policy & APEI_PAGE_OFFLINE_NOTIFY;
+
+	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+	new_val = sysctl_apei_page_offline_policy & APEI_PAGE_OFFLINE_NOTIFY;
+
+	if (write && ret == 0 && old_val != new_val) {
+		atomic_notifier_call_chain(
+				&apei_page_offline_notifier_chain, 0,
+				&sysctl_apei_page_offline_policy);
+	}
+
+	return ret;
+}
+
+static bool apei_page_should_offline(unsigned long pfn)
+{
+	struct page *page;
+
+	page = pfn_to_online_page(pfn);
+	if (!page)
+		return false;
+
+	if (!(sysctl_apei_page_offline_policy & APEI_PAGE_OFFLINE_ALLOW_BASE_PAGE)) {
+		if (!PageHuge(page)) {
+			pr_info_once("disabled for normal pages by /proc/sys/vm/apei_page_offline_policy\n");
+			return false;
+		}
+	}
+
+	if (!(sysctl_apei_page_offline_policy & APEI_PAGE_OFFLINE_ALLOW_HUGETLB)) {
+		if (PageHuge(page)) {
+			pr_info_once("disabled for HugeTLB pages by /proc/sys/vm/apei_page_offline_policy\n");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+#endif
+
 static void ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, int sev)
 {
 #ifdef CONFIG_ACPI_APEI_MEMORY_FAILURE
@@ -426,6 +500,9 @@ static void ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, int
 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
 		flags = 0;
 
+	if (flags == MF_SOFT_OFFLINE && !apei_page_should_offline(pfn))
+		return;
+
 	if (flags != -1)
 		memory_failure_queue(pfn, flags);
 #endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0274a82144e4..2a91d2dd2452 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2874,6 +2874,12 @@ extern int get_hwpoison_page(struct page *page);
 extern int sysctl_memory_failure_early_kill;
 extern int sysctl_memory_failure_recovery;
 extern int sysctl_enable_soft_offline;
+#ifdef CONFIG_ACPI_APEI_MEMORY_FAILURE
+extern int sysctl_apei_page_offline_policy;
+extern int apei_page_offline_policy_handler(struct ctl_table *table,
+		int write, void __user *buffer,
+		size_t *lenp, loff_t *ppos);
+#endif
 extern void shake_page(struct page *p, int access);
 extern atomic_long_t num_poisoned_pages __read_mostly;
 extern int soft_offline_page(struct page *page, int flags);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index f35a1990456e..4186b6a795e3 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -130,6 +130,7 @@ static int __maybe_unused two = 2;
 static int __maybe_unused three = 3;
 static int __maybe_unused four = 4;
 static int __maybe_unused five = 5;
+static int __maybe_unused seven = 7;
 static int __maybe_unused uce_kernel_recovery_max = 31;
 static int int_max = INT_MAX;
 static unsigned long zero_ul;
@@ -1833,6 +1834,17 @@ static struct ctl_table vm_table[] = {
 		.extra1		= &zero,
 		.extra2		= &one,
 	},
+#endif
+#ifdef CONFIG_ACPI_APEI_MEMORY_FAILURE
+	{
+		.procname	= "apei_page_offline_policy",
+		.data		= &sysctl_apei_page_offline_policy,
+		.maxlen		= sizeof(sysctl_apei_page_offline_policy),
+		.mode		= 0644,
+		.proc_handler	= apei_page_offline_policy_handler,
+		.extra1		= &zero,
+		.extra2		= &seven,
+	},
 #endif
 	{
 		.procname	= "user_reserve_kbytes",
--
2.33.0
From: Yongjian Sun <sunyongjian1@huawei.com>

hulk inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8489

--------------------------------

A bitmap inconsistency issue was observed during stress tests under
mixed huge-page workloads. Ext4 reported multiple e4b bitmap check
failures like:

ext4_mb_complex_scan_group:2508: group 350, 8179 free clusters as per group info. But got 8192 blocks

Analysis and experimentation confirmed that the issue is caused by a
race condition between page migration and bitmap modification. Although
this timing window is extremely narrow, it is still hit in practice:

folio_lock                          ext4_mb_load_buddy
__migrate_folio
  check ref count
  folio_mc_copy
                                      __filemap_get_folio
                                      folio_try_get(folio)
                                      ......
                                      mb_mark_used
                                    ext4_mb_unload_buddy
__folio_migrate_mapping
  folio_ref_freeze
folio_unlock

The root cause of this issue is that the fast path of load_buddy only
increments the folio's reference count, which is insufficient to
prevent concurrent folio migration.

We observed that the folio migration process acquires the folio lock.
Therefore, we can determine whether to take the fast path in load_buddy
by checking the lock status. If the folio is locked, we opt for the
slow path (which acquires the lock) to close this concurrency window.

Additionally, this change addresses the following issues:

When the DOUBLE_CHECK macro is enabled to inspect bitmap-related
issues, the following error may be triggered:

corruption in group 324 at byte 784(6272): f in copy != ff on disk/prealloc

Analysis reveals that this is a false positive. There is a specific
race window where the bitmap and the group descriptor become
momentarily inconsistent, leading to this error report:

ext4_mb_load_buddy                  ext4_mb_load_buddy
__filemap_get_folio(create|lock)
folio_lock
ext4_mb_init_cache
folio_mark_uptodate
                                      __filemap_get_folio(no lock)
                                      ......
                                      mb_mark_used
                                        mb_mark_used_double
mb_cmp_bitmaps
                                          mb_set_bits(e4b->bd_bitmap)
folio_unlock

The original logic assumed that since mb_cmp_bitmaps is called when the
bitmap is newly loaded from disk, the folio lock would be sufficient to
prevent concurrent access. However, this overlooks a specific race
condition: if another process attempts to load buddy and finds the
folio is already in an uptodate state, it will immediately begin using
it without holding folio lock.

Fixes: c9de560ded61 ("ext4: Add multi block allocator for ext4")
Signed-off-by: Yongjian Sun <sunyongjian1@huawei.com>
Signed-off-by: Qi Xi <xiqi2@huawei.com>
---
 fs/ext4/mballoc.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 3f2d97894212..63159aac4f52 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -1146,16 +1146,17 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
 		/* we could use find_or_create_page(), but it locks page
 		 * what we'd like to avoid in fast path ... */
 		page = find_get_page_flags(inode->i_mapping, pnum, FGP_ACCESSED);
-		if (page == NULL || !PageUptodate(page)) {
+		if (page == NULL || !PageUptodate(page) || PageLocked(page)) {
+			/*
+			 * PageLocked is employed to detect ongoing page
+			 * migrations, since concurrent migrations can lead to
+			 * bitmap inconsistency. And if we are not uptodate that
+			 * implies somebody just created the page but is yet to
+			 * initialize it. We can drop the page reference and
+			 * try to get the page with lock in both cases to avoid
+			 * concurrency.
+			 */
 			if (page)
-				/*
-				 * drop the page reference and try
-				 * to get the page with lock. If we
-				 * are not uptodate that implies
-				 * somebody just created the page but
-				 * is yet to initialize the same. So
-				 * wait for it to initialize.
-				 */
 				put_page(page);
 			page = find_or_create_page(inode->i_mapping, pnum, gfp);
 			if (page) {
@@ -1190,7 +1191,7 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
 			poff = block % blocks_per_page;
 
 			page = find_get_page_flags(inode->i_mapping, pnum, FGP_ACCESSED);
-			if (page == NULL || !PageUptodate(page)) {
+			if (page == NULL || !PageUptodate(page) || PageLocked(page)) {
 				if (page)
 					put_page(page);
 				page = find_or_create_page(inode->i_mapping, pnum, gfp);
--
2.33.0
FeedBack: The patch(es) which you have sent to kernel@openeuler.org mailing list has been converted to a pull request successfully!
Pull request link: https://atomgit.com/openeuler/kernel/merge_requests/20855
Mailing list address: https://mailweb.openeuler.org/archives/list/kernel@openeuler.org/message/E2Q...