From: David Hildenbrand <david@redhat.com>
mainline inclusion
from mainline-5.3-rc1
commit 22eb634632a2359769f8a2a91a41d3c566a0a450
category: bugfix
bugzilla: 29418
CVE: NA
-------------------------------------------------
A proper arch_remove_memory() implementation is on its way; it will also cleanly remove the page tables created by arch_add_memory() in case something goes wrong.

As we want to use arch_remove_memory() when something goes wrong during memory hotplug after arch_add_memory() has finished, let's add a temporary hack that is sufficient until we get a proper implementation that cleans up the page table entries.

We will remove CONFIG_MEMORY_HOTREMOVE around this code in follow-up patches.
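
To make the intended use concrete, the caller-side pattern looks roughly
like the sketch below (illustrative only: create_memory_blocks() is a
made-up stand-in for whatever step follows arch_add_memory() in the
generic hotplug code):

	ret = arch_add_memory(nid, start, size, altmap, want_memblock);
	if (ret)
		return ret;

	/* hypothetical follow-up step that may fail */
	ret = create_memory_blocks(start, size);
	if (ret) {
		/*
		 * Roll back what arch_add_memory() did. With this patch
		 * the page tables stay around, which is tolerable for an
		 * error path but not for real memory unplug.
		 */
		arch_remove_memory(nid, start, size, altmap);
		return ret;
	}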
Link: http://lkml.kernel.org/r/20190527111152.16324-5-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Chintan Pandya <cpandya@codeaurora.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Jun Yao <yaojun8558363@gmail.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Andrew Banman <andrew.banman@hpe.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arun KS <arunks@codeaurora.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Oscar Salvador <osalvador@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Qian Cai <cai@lca.pw>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/mm/mmu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 4626c7e..722b7b0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1051,4 +1051,23 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
 	return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
 			   altmap, want_memblock);
 }
+#ifdef CONFIG_MEMORY_HOTREMOVE
+void arch_remove_memory(int nid, u64 start, u64 size,
+			struct vmem_altmap *altmap)
+{
+	unsigned long start_pfn = start >> PAGE_SHIFT;
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	struct zone *zone;
+
+	/*
+	 * FIXME: Cleanup page tables (also in arch_add_memory() in case
+	 * adding fails). Until then, this function should only be used
+	 * during memory hotplug (adding memory), not for memory
+	 * unplug. ARCH_ENABLE_MEMORY_HOTREMOVE must not be
+	 * unlocked yet.
+	 */
+	zone = page_zone(pfn_to_page(start_pfn));
+	__remove_pages(zone, start_pfn, nr_pages, altmap);
+}
+#endif
 #endif
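
For reference, once page table teardown is implemented, the non-hack
version of this function would also unmap the linear mapping, roughly
along these lines (a sketch only; __remove_pgd_mapping() stands for a
future helper that does not exist at this point):

	void arch_remove_memory(int nid, u64 start, u64 size,
				struct vmem_altmap *altmap)
	{
		unsigned long start_pfn = start >> PAGE_SHIFT;
		unsigned long nr_pages = size >> PAGE_SHIFT;
		struct zone *zone = page_zone(pfn_to_page(start_pfn));

		__remove_pages(zone, start_pfn, nr_pages, altmap);
		/*
		 * Hypothetical future helper: unmap the linear map for
		 * [start, start + size) and free now-empty page table
		 * pages.
		 */
		__remove_pgd_mapping(swapper_pg_dir, __phys_to_virt(start),
				     size);
	}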