From: David Hildenbrand david@redhat.com
mainline inclusion
from mainline-5.3-rc1
commit cec3ebd083d4e8d161d0b18894c78e3311bcd026
category: bugfix
bugzilla: 29418
CVE: NA
-------------------------------------------------
Patch series "mm/memory_hotplug: Factor out memory block device handling", v3.
We only want memory block devices for memory to be onlined/offlined (add/remove from the buddy). This is required so user space can online/offline memory and kdump gets notified about newly onlined memory.
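For context, the user-space onlining mentioned here goes through the sysfs files that these memory block devices back. A minimal user-space sketch (the block number 42 is hypothetical; error handling is reduced to the bare minimum):

    #include <stdio.h>

    int main(void)
    {
            /* Each memory block device exposes a "state" attribute. */
            FILE *f = fopen("/sys/devices/system/memory/memory42/state", "w");

            if (!f)
                    return 1;
            /* Valid values include "online", "online_movable", "offline". */
            if (fputs("online", f) == EOF) {
                    fclose(f);
                    return 1;
            }
            return fclose(f) ? 1 : 0;
    }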
Let's factor out creation/removal of memory block devices. This helps to further cleanup arch_add_memory/arch_remove_memory() and to make implementation of new features easier - especially sub-section memory hot add from Dan.
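Concretely, later patches in the series move this logic behind two helpers; the signatures below are a sketch of the intended interface as it ended up in the mainline series, shown here only for orientation:

    /* Create memory block devices for a hot-added, block-aligned range. */
    int create_memory_block_devices(unsigned long start, unsigned long size);

    /* Remove the memory block devices covering that range again. */
    void remove_memory_block_devices(unsigned long start, unsigned long size);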
Anshuman Khandual is currently working on arch_remove_memory(). I added a temporary solution via "arm64/mm: Add temporary arch_remove_memory() implementation", which is sufficient as a first step in the context of this series. (We already don't clean up page tables in case anything goes wrong.)
Did a quick sanity test with DIMM plug/unplug, making sure all devices and sysfs links properly get added/removed. Compile tested on s390x and x86-64.
This patch (of 11):
By converting start and size to page granularity, we actually ignore unaligned parts within a page instead of properly bailing out with an error.
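To illustrate the problem, here is a stand-alone user-space sketch with hypothetical constants (not kernel code): with 4 KiB pages and a 128 MiB block size, a start address of one block plus one byte survives the old page-granularity check, because PFN_DOWN() truncates the sub-page offset, while the byte-granularity check rejects it:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT       12                      /* 4 KiB pages */
    #define BLOCK_SZ         (128ULL << 20)          /* 128 MiB memory blocks */
    #define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)
    #define PFN_DOWN(x)      ((x) >> PAGE_SHIFT)

    int main(void)
    {
            uint64_t start = BLOCK_SZ + 1;           /* misaligned by one byte */
            uint64_t block_nr_pages = BLOCK_SZ >> PAGE_SHIFT;

            /* Old check: the sub-page byte is silently dropped -> accepted. */
            printf("old check rejects: %d\n",
                   !IS_ALIGNED(PFN_DOWN(start), block_nr_pages));

            /* New check: byte granularity -> correctly rejected. */
            printf("new check rejects: %d\n", !IS_ALIGNED(start, BLOCK_SZ));
            return 0;
    }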
Link: http://lkml.kernel.org/r/20190527111152.16324-2-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Arun KS <arunks@codeaurora.org>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Andrew Banman <andrew.banman@hpe.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chintan Pandya <cpandya@codeaurora.org>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jun Yao <yaojun8558363@gmail.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/memory_hotplug.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8a6ad9b..bfd148d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1059,16 +1059,11 @@ int try_online_node(int nid)
 
 static int check_hotplug_memory_range(u64 start, u64 size)
 {
-	unsigned long block_sz = memory_block_size_bytes();
-	u64 block_nr_pages = block_sz >> PAGE_SHIFT;
-	u64 nr_pages = size >> PAGE_SHIFT;
-	u64 start_pfn = PFN_DOWN(start);
-
-	/* memory range must be block size aligned */
-	if (!nr_pages || !IS_ALIGNED(start_pfn, block_nr_pages) ||
-	    !IS_ALIGNED(nr_pages, block_nr_pages)) {
+	/* memory range must be block size aligned */
+	if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
+	    !IS_ALIGNED(size, memory_block_size_bytes())) {
 		pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx",
-		       block_sz, start, size);
+		       memory_block_size_bytes(), start, size);
 		return -EINVAL;
 	}
 
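For readability, the whole helper after this change reads as follows (reconstructed from the hunk above; the trailing return 0; sits outside the context lines shown):

    static int check_hotplug_memory_range(u64 start, u64 size)
    {
            /* memory range must be block size aligned */
            if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
                !IS_ALIGNED(size, memory_block_size_bytes())) {
                    pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx",
                           memory_block_size_bytes(), start, size);
                    return -EINVAL;
            }
            return 0;
    }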
From: David Hildenbrand david@redhat.com
mainline inclusion
from mainline-5.3-rc1
commit 973de24a78493d115ec157c68fd31bc0a114134e
category: bugfix
bugzilla: 29418
CVE: NA
-------------------------------------------------
ZONE_DEVICE is not yet supported; fail if an altmap is passed, so we don't forget to update arch_add_memory()/arch_remove_memory() when unlocking support.
Link: http://lkml.kernel.org/r/20190527111152.16324-3-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Suggested-by: Dan Williams <dan.j.williams@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Oscar Salvador <osalvador@suse.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Andrew Banman <andrew.banman@hpe.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arun KS <arunks@codeaurora.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chintan Pandya <cpandya@codeaurora.org>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jun Yao <yaojun8558363@gmail.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Qian Cai <cai@lca.pw>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/s390/mm/init.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 3fa3e53..34bd72d 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -229,6 +229,9 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
 	unsigned long size_pages = PFN_DOWN(size);
 	int rc;
 
+	if (WARN_ON_ONCE(altmap))
+		return -EINVAL;
+
 	rc = vmem_add_mapping(start, size);
 	if (rc)
 		return rc;
From: David Hildenbrand david@redhat.com
mainline inclusion
from mainline-5.3-rc1
commit 22eb634632a2359769f8a2a91a41d3c566a0a450
category: bugfix
bugzilla: 29418
CVE: NA
-------------------------------------------------
A proper arch_remove_memory() implementation is on its way, which also cleanly removes page tables in arch_add_memory() in case something goes wrong.
As we want to use arch_remove_memory() in case something goes wrong during memory hotplug after arch_add_memory() finished, let's add a temporary hack that is sufficient until we get a proper implementation that cleans up page table entries.
We will remove CONFIG_MEMORY_HOTREMOVE around this code in follow-up patches.
Link: http://lkml.kernel.org/r/20190527111152.16324-5-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Chintan Pandya <cpandya@codeaurora.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Jun Yao <yaojun8558363@gmail.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Andrew Banman <andrew.banman@hpe.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arun KS <arunks@codeaurora.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Oscar Salvador <osalvador@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Qian Cai <cai@lca.pw>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/mm/mmu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 4626c7e..722b7b0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1051,4 +1051,23 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
 	return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
 			   altmap, want_memblock);
 }
+#ifdef CONFIG_MEMORY_HOTREMOVE
+void arch_remove_memory(int nid, u64 start, u64 size,
+			struct vmem_altmap *altmap)
+{
+	unsigned long start_pfn = start >> PAGE_SHIFT;
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	struct zone *zone;
+
+	/*
+	 * FIXME: Cleanup page tables (also in arch_add_memory() in case
+	 * adding fails). Until then, this function should only be used
+	 * during memory hotplug (adding memory), not for memory
+	 * unplug. ARCH_ENABLE_MEMORY_HOTREMOVE must not be
+	 * unlocked yet.
+	 */
+	zone = page_zone(pfn_to_page(start_pfn));
+	__remove_pages(zone, start_pfn, nr_pages, altmap);
+}
+#endif
 #endif