From: David Hildenbrand <david@redhat.com>
stable inclusion
from stable-5.10.68
commit 49cf30ebb35c50234144dd2a34fe7a6d50b966e2
bugzilla: 182671 https://gitee.com/openeuler/kernel/issues/I4EWUH
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 7cf209ba8a86410939a24cb1aeb279479a7e0ca6 upstream.
Patch series "mm/memory_hotplug: preparatory patches for new online policy and memory"
These are all cleanups and one fix previously sent as part of [1]: [PATCH v1 00/12] mm/memory_hotplug: "auto-movable" online policy and memory groups.
These patches make sense even without the other series, therefore I pulled them out to make the other series easier to digest.
[1] https://lkml.kernel.org/r/20210607195430.48228-1-david@redhat.com
This patch (of 4):
Checkpatch complained on a follow-up patch that we are using "unsigned" here, which defaults to "unsigned int" and checkpatch is correct.
As we will search for a fitting zone using the wrong pfn, we might end up onlining memory to one of the special kernel zones, such as ZONE_DMA, which can end badly as the onlined memory does not satisfy properties of these zones.
Use "unsigned long" instead, just as we do in other places when handling PFNs. This can bite us once we have physical addresses in the range of multiple TB.
Link: https://lkml.kernel.org/r/20210712124052.26491-2-david@redhat.com
Fixes: e5e689302633 ("mm, memory_hotplug: display allowed zones in the preferred ordering")
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@ionos.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Len Brown <lenb@kernel.org>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: virtualization@lists.linux-foundation.org
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Anton Blanchard <anton@ozlabs.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jia He <justin.he@arm.com>
Cc: Joe Perches <joe@perches.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Michel Lespinasse <michel@lespinasse.org>
Cc: Nathan Lynch <nathanl@linux.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Pierre Morel <pmorel@linux.ibm.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: Rich Felker <dalias@libc.org>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Sergei Trofimovich <slyfox@gentoo.org>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
---
 include/linux/memory_hotplug.h | 4 ++--
 mm/memory_hotplug.c            | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 40a49c289184..c60bda5cbb17 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -332,8 +332,8 @@ extern void sparse_remove_section(struct mem_section *ms,
 		unsigned long map_offset, struct vmem_altmap *altmap);
 extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
 					  unsigned long pnum);
-extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
-		unsigned long nr_pages);
+extern struct zone *zone_for_pfn_range(int online_type, int nid,
+		unsigned long start_pfn, unsigned long nr_pages);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
 #endif /* __LINUX_MEMORY_HOTPLUG_H */
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b05b92cdf720..a009b6395b02 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -649,8 +649,8 @@ static inline struct zone *default_zone_for_pfn(int nid, unsigned long start_pfn
 	return movable_node_enabled ? movable_zone : kernel_zone;
 }
 
-struct zone * zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
-		unsigned long nr_pages)
+struct zone *zone_for_pfn_range(int online_type, int nid,
+		unsigned long start_pfn, unsigned long nr_pages)
 {
 	if (online_type == MMOP_ONLINE_KERNEL)
 		return default_kernel_zone_for_pfn(nid, start_pfn, nr_pages);