From: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
stable inclusion
from stable-v5.10.141
commit 38267d266336a7fb9eae9be23567a44776c6e4ca
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I64WCC
CVE: CVE-2022-20566
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
-------------------------------
commit b840304fb46cdf7012722f456bce06f151b3e81b upstream.
This attempts to fix the following errors:
In function 'memcmp',
    inlined from 'bacmp' at ./include/net/bluetooth/bluetooth.h:347:9,
    inlined from 'l2cap_global_chan_by_psm' at
    net/bluetooth/l2cap_core.c:2003:15:
./include/linux/fortify-string.h:44:33: error: '__builtin_memcmp'
specified bound 6 exceeds source size 0 [-Werror=stringop-overread]
   44 | #define __underlying_memcmp     __builtin_memcmp
      |                                 ^
./include/linux/fortify-string.h:420:16: note: in expansion of macro
'__underlying_memcmp'
  420 |         return __underlying_memcmp(p, q, size);
      |                ^~~~~~~~~~~~~~~~~~~
In function 'memcmp',
    inlined from 'bacmp' at ./include/net/bluetooth/bluetooth.h:347:9,
    inlined from 'l2cap_global_chan_by_psm' at
    net/bluetooth/l2cap_core.c:2004:15:
./include/linux/fortify-string.h:44:33: error: '__builtin_memcmp'
specified bound 6 exceeds source size 0 [-Werror=stringop-overread]
   44 | #define __underlying_memcmp     __builtin_memcmp
      |                                 ^
./include/linux/fortify-string.h:420:16: note: in expansion of macro
'__underlying_memcmp'
  420 |         return __underlying_memcmp(p, q, size);
      |                ^~~~~~~~~~~~~~~~~~~
Fixes: 332f1795ca20 ("Bluetooth: L2CAP: Fix l2cap_global_chan_by_psm regression")
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Cc: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lu Jialin <lujialin4@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 net/bluetooth/l2cap_core.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index 70bfd9e8913e..f78ad8f536f7 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -1988,11 +1988,11 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
 		src_match = !bacmp(&c->src, src);
 		dst_match = !bacmp(&c->dst, dst);
 		if (src_match && dst_match) {
-			c = l2cap_chan_hold_unless_zero(c);
-			if (c) {
-				read_unlock(&chan_list_lock);
-				return c;
-			}
+			if (!l2cap_chan_hold_unless_zero(c))
+				continue;
+
+			read_unlock(&chan_list_lock);
+			return c;
 		}

 		/* Closest match */
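The rewritten loop is easier to see in isolation. Below is a hypothetical, userspace-only sketch of the hold-unless-zero pattern (plain integers instead of the kernel's atomics, no locking); `chan_hold_unless_zero()` and `lookup()` are illustrative names, not the real API:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for a refcounted L2CAP channel. */
struct chan {
	int refcount;
};

/* Take a reference only if the count is still positive, mirroring
 * l2cap_chan_hold_unless_zero(): a channel whose last reference is
 * already gone must not be handed out again. */
static int chan_hold_unless_zero(struct chan *c)
{
	if (c->refcount == 0)
		return 0;	/* channel is dying; caller must skip it */
	c->refcount++;
	return 1;
}

/* Lookup loop with the fixed control flow: on failure we 'continue'
 * to the next candidate instead of aborting the whole search. */
static struct chan *lookup(struct chan **list, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (!chan_hold_unless_zero(list[i]))
			continue;	/* skip dying channels */
		return list[i];		/* reference held for the caller */
	}
	return NULL;
}
```

Note how the fixed flow avoids the original bug class: overwriting `c` with the helper's return value made the subsequent `bacmp()` on the next iteration operate on a possibly-NULL pointer, which is what the fortify warning flagged.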
From: Zhengchao Shao <shaozhengchao@huawei.com>
stable inclusion
from stable-v5.10.154
commit d9ec6e2fbd4a565b2345d4852f586b7ae3ab41fd
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I64WCC
CVE: CVE-2022-20566
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
-------------------------------
[ Upstream commit 0d0e2d032811280b927650ff3c15fe5020e82533 ]
When l2cap_recv_frame() is invoked to receive data and the cid is L2CAP_CID_A2MP, a channel is created if one does not already exist. However, no hold operation is performed on the newly created channel, so its reference count is 1. As a result, after hci_error_reset() is triggered, l2cap_conn_del() invokes the A2MP close hook, which releases the channel, and the subsequent l2cap_chan_unlock(chan) triggers a use-after-free (UAF).
The process is as follows:

Receive data:
l2cap_data_channel()
    a2mp_channel_create()	--->channel ref is 2
    l2cap_chan_put()		--->channel ref is 1

Trigger event:
hci_error_reset()
hci_dev_do_close()
...
l2cap_disconn_cfm()
    l2cap_conn_del()
        l2cap_chan_hold()	--->channel ref is 2
        l2cap_chan_del()	--->channel ref is 1
        a2mp_chan_close_cb()	--->channel ref is 0, release channel
    l2cap_chan_unlock()		--->UAF of channel
The detailed Call Trace is as follows:

BUG: KASAN: use-after-free in __mutex_unlock_slowpath+0xa6/0x5e0
Read of size 8 at addr ffff8880160664b8 by task kworker/u11:1/7593
Workqueue: hci0 hci_error_reset
Call Trace:
 <TASK>
 dump_stack_lvl+0xcd/0x134
 print_report.cold+0x2ba/0x719
 kasan_report+0xb1/0x1e0
 kasan_check_range+0x140/0x190
 __mutex_unlock_slowpath+0xa6/0x5e0
 l2cap_conn_del+0x404/0x7b0
 l2cap_disconn_cfm+0x8c/0xc0
 hci_conn_hash_flush+0x11f/0x260
 hci_dev_close_sync+0x5f5/0x11f0
 hci_dev_do_close+0x2d/0x70
 hci_error_reset+0x9e/0x140
 process_one_work+0x98a/0x1620
 worker_thread+0x665/0x1080
 kthread+0x2e4/0x3a0
 ret_from_fork+0x1f/0x30
 </TASK>

Allocated by task 7593:
 kasan_save_stack+0x1e/0x40
 __kasan_kmalloc+0xa9/0xd0
 l2cap_chan_create+0x40/0x930
 amp_mgr_create+0x96/0x990
 a2mp_channel_create+0x7d/0x150
 l2cap_recv_frame+0x51b8/0x9a70
 l2cap_recv_acldata+0xaa3/0xc00
 hci_rx_work+0x702/0x1220
 process_one_work+0x98a/0x1620
 worker_thread+0x665/0x1080
 kthread+0x2e4/0x3a0
 ret_from_fork+0x1f/0x30

Freed by task 7593:
 kasan_save_stack+0x1e/0x40
 kasan_set_track+0x21/0x30
 kasan_set_free_info+0x20/0x30
 ____kasan_slab_free+0x167/0x1c0
 slab_free_freelist_hook+0x89/0x1c0
 kfree+0xe2/0x580
 l2cap_chan_put+0x22a/0x2d0
 l2cap_conn_del+0x3fc/0x7b0
 l2cap_disconn_cfm+0x8c/0xc0
 hci_conn_hash_flush+0x11f/0x260
 hci_dev_close_sync+0x5f5/0x11f0
 hci_dev_do_close+0x2d/0x70
 hci_error_reset+0x9e/0x140
 process_one_work+0x98a/0x1620
 worker_thread+0x665/0x1080
 kthread+0x2e4/0x3a0
 ret_from_fork+0x1f/0x30

Last potentially related work creation:
 kasan_save_stack+0x1e/0x40
 __kasan_record_aux_stack+0xbe/0xd0
 call_rcu+0x99/0x740
 netlink_release+0xe6a/0x1cf0
 __sock_release+0xcd/0x280
 sock_close+0x18/0x20
 __fput+0x27c/0xa90
 task_work_run+0xdd/0x1a0
 exit_to_user_mode_prepare+0x23c/0x250
 syscall_exit_to_user_mode+0x19/0x50
 do_syscall_64+0x42/0x80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Second to last potentially related work creation:
 kasan_save_stack+0x1e/0x40
 __kasan_record_aux_stack+0xbe/0xd0
 call_rcu+0x99/0x740
 netlink_release+0xe6a/0x1cf0
 __sock_release+0xcd/0x280
 sock_close+0x18/0x20
 __fput+0x27c/0xa90
 task_work_run+0xdd/0x1a0
 exit_to_user_mode_prepare+0x23c/0x250
 syscall_exit_to_user_mode+0x19/0x50
 do_syscall_64+0x42/0x80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Fixes: d0be8347c623 ("Bluetooth: L2CAP: Fix use-after-free caused by l2cap_chan_put")
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Lu Jialin <lujialin4@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 net/bluetooth/l2cap_core.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index f78ad8f536f7..377b3719054a 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -7622,6 +7622,7 @@ static void l2cap_data_channel(struct l2cap_conn *conn, u16 cid,
 			return;
 		}

+		l2cap_chan_hold(chan);
 		l2cap_chan_lock(chan);
 	} else {
 		BT_DBG("unknown cid 0x%4.4x", cid);
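The one-line fix can be illustrated with a toy reference counter. This is a hypothetical, single-threaded model (no real locking; `fake_chan` and `use_channel()` are invented names): pinning the channel with an extra hold keeps it alive even when the connection-teardown path drops its own reference concurrently:

```c
#include <assert.h>

/* Toy refcounted channel; 'freed' stands in for kfree(). */
struct fake_chan {
	int ref;
	int freed;
};

static void chan_hold(struct fake_chan *c)
{
	c->ref++;
}

static void chan_put(struct fake_chan *c)
{
	if (--c->ref == 0)
		c->freed = 1;	/* last reference gone: object released */
}

/* Receive path with the fix applied: hold before use, so a concurrent
 * l2cap_conn_del()-style put cannot free the channel under us. */
static int use_channel(struct fake_chan *c)
{
	int still_valid;

	chan_hold(c);			/* the added l2cap_chan_hold() */
	chan_put(c);			/* concurrent teardown drops its ref */
	still_valid = !c->freed;	/* channel must still be alive here */
	chan_put(c);			/* our matching put; may free now */
	return still_valid;
}
```

Without the initial hold, the concurrent put would have driven the count to zero mid-use, which is exactly the UAF KASAN reported above.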
From: Chen Wandun <chenwandun@huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I645JI
-------------------------------
Commit 46d673d7fed7 ("mm/swapfile: fix broken kabi in swap_info_struct") uses KABI_EXTEND to fix the kABI breakage, but that is not the safest way, so use a new approach to fix this problem.
Introduce a new struct swap_extend_info that holds the extended swap information, and use KABI_USE to repurpose the memory space reserved by KABI_RESERVE.
Signed-off-by: Chen Wandun <chenwandun@huawei.com>
Reviewed-by: zhangjialin <zhangjialin11@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 include/linux/swap.h | 11 +++++++----
 mm/swapfile.c        | 33 +++++++++++++++++++++------------
 2 files changed, 28 insertions(+), 16 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index c55f6e410d05..7f49964f27d2 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -246,6 +246,11 @@ struct swap_cluster_list {
 	struct swap_cluster_info tail;
 };

+struct swap_extend_info {
+	struct percpu_ref users;	/* indicate and keep swap device valid. */
+	struct completion comp;		/* seldom referenced */
+};
+
 /*
  * The in-memory structure used to track swap areas.
  */
@@ -293,10 +298,8 @@ struct swap_info_struct {
 					 */
 	struct work_struct discard_work; /* discard worker */
 	struct swap_cluster_list discard_clusters; /* discard clusters list */
-	KABI_RESERVE(1)
+	KABI_USE(1, struct swap_extend_info *sei)
 	KABI_RESERVE(2)
-	KABI_EXTEND(struct percpu_ref users)	/* indicate and keep swap device valid. */
-	KABI_EXTEND(struct completion comp)	/* seldom referenced */
 	struct plist_node avail_lists[];	/*
 					   * entries in swap_avail_heads, one
 					   * entry per node.
@@ -537,7 +540,7 @@ sector_t swap_page_sector(struct page *page);

 static inline void put_swap_device(struct swap_info_struct *si)
 {
-	percpu_ref_put(&si->users);
+	percpu_ref_put(&si->sei->users);
 }

 #else /* CONFIG_SWAP */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 0c560204acf4..4594e808ebc0 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -515,10 +515,10 @@ static void swap_discard_work(struct work_struct *work)

 static void swap_users_ref_free(struct percpu_ref *ref)
 {
-	struct swap_info_struct *si;
+	struct swap_extend_info *sei;

-	si = container_of(ref, struct swap_info_struct, users);
-	complete(&si->comp);
+	sei = container_of(ref, struct swap_extend_info, users);
+	complete(&sei->comp);
 }

 static void alloc_cluster(struct swap_info_struct *si, unsigned long idx)
@@ -1316,7 +1316,7 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 	si = swp_swap_info(entry);
 	if (!si)
 		goto bad_nofile;
-	if (!percpu_ref_tryget_live(&si->users))
+	if (!percpu_ref_tryget_live(&si->sei->users))
 		goto out;
 	/*
 	 * Guarantee the si->users are checked before accessing other
@@ -1336,7 +1336,7 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 out:
 	return NULL;
 put_out:
-	percpu_ref_put(&si->users);
+	percpu_ref_put(&si->sei->users);
 	return NULL;
 }

@@ -2542,7 +2542,7 @@ static void enable_swap_info(struct swap_info_struct *p, int prio,
 	/*
 	 * Finished initializing swap device, now it's safe to reference it.
 	 */
-	percpu_ref_resurrect(&p->users);
+	percpu_ref_resurrect(&p->sei->users);
 	spin_lock(&swap_lock);
 	spin_lock(&p->lock);
 	_enable_swap_info(p);
@@ -2665,9 +2665,9 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	 * We need synchronize_rcu() here to protect the accessing to
 	 * the swap cache data structure.
	 */
-	percpu_ref_kill(&p->users);
+	percpu_ref_kill(&p->sei->users);
 	synchronize_rcu();
-	wait_for_completion(&p->comp);
+	wait_for_completion(&p->sei->comp);

 	flush_work(&p->discard_work);

@@ -2899,8 +2899,15 @@ static struct swap_info_struct *alloc_swap_info(void)
 	if (!p)
 		return ERR_PTR(-ENOMEM);

-	if (percpu_ref_init(&p->users, swap_users_ref_free,
+	p->sei = kvzalloc(sizeof(struct swap_extend_info), GFP_KERNEL);
+	if (!p->sei) {
+		kvfree(p);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	if (percpu_ref_init(&p->sei->users, swap_users_ref_free,
 			    PERCPU_REF_INIT_DEAD, GFP_KERNEL)) {
+		kvfree(p->sei);
 		kvfree(p);
 		return ERR_PTR(-ENOMEM);
 	}
@@ -2912,7 +2919,8 @@ static struct swap_info_struct *alloc_swap_info(void)
 	}
 	if (type >= MAX_SWAPFILES) {
 		spin_unlock(&swap_lock);
-		percpu_ref_exit(&p->users);
+		percpu_ref_exit(&p->sei->users);
+		kvfree(p->sei);
 		kvfree(p);
 		return ERR_PTR(-EPERM);
 	}
@@ -2941,12 +2949,13 @@ static struct swap_info_struct *alloc_swap_info(void)
 	p->flags = SWP_USED;
 	spin_unlock(&swap_lock);
 	if (defer) {
-		percpu_ref_exit(&defer->users);
+		percpu_ref_exit(&defer->sei->users);
+		kvfree(defer->sei);
 		kvfree(defer);
 	}
 	spin_lock_init(&p->lock);
 	spin_lock_init(&p->cont_lock);
-	init_completion(&p->comp);
+	init_completion(&p->sei->comp);

 	return p;
 }
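The kABI trick itself is independent of swap. Below is a hypothetical userspace illustration (plain `calloc` instead of `kvzalloc`; `info`/`extend_info` are invented names): the exported struct keeps its size and field offsets because the new members live behind a pointer stored in a previously reserved slot:

```c
#include <assert.h>
#include <stdlib.h>

/* New members that would have broken kABI if added to the struct
 * in place (offsets after them would shift for external modules). */
struct extend_info {
	long users;	/* stand-in for the percpu_ref */
	int done;	/* stand-in for the completion */
};

/* Exported struct: its layout is frozen, so the extension hangs off a
 * pointer occupying the old reserved slot (KABI_USE in the patch). */
struct info {
	int flags;
	struct extend_info *ext;	/* was: reserved slot */
};

static struct info *alloc_info(void)
{
	struct info *p = calloc(1, sizeof(*p));

	if (!p)
		return NULL;
	p->ext = calloc(1, sizeof(*p->ext));
	if (!p->ext) {
		free(p);	/* mirror the unwinding in alloc_swap_info() */
		return NULL;
	}
	return p;
}

static void free_info(struct info *p)
{
	free(p->ext);	/* the extension must be freed on every error path too */
	free(p);
}
```

The cost of this indirection is the extra allocation and the extra pointer chase on every access (`si->sei->users` instead of `si->users`), which is why each error path in the patch gains a matching `kvfree(p->sei)`.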
From: Jiasheng Jiang <jiasheng@iscas.ac.cn>
mainline inclusion
from mainline-v5.19-rc1
commit ed713e2bc093239ccd380c2ce8ae9e4162f5c037
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I6698Y
CVE: CVE-2022-3114
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
Since kcalloc() may fail, its return value should be checked to avoid dereferencing a NULL pointer.
Fixes: 379c9a24cc23 ("clk: imx: Fix reparenting of UARTs not associated with stdout")
Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn>
Reviewed-by: Abel Vesa <abel.vesa@nxp.com>
Link: https://lore.kernel.org/r/20220310080257.1988412-1-jiasheng@iscas.ac.cn
Signed-off-by: Abel Vesa <abel.vesa@nxp.com>
Signed-off-by: Yi Yang <yiyang13@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 drivers/clk/imx/clk.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/drivers/clk/imx/clk.c b/drivers/clk/imx/clk.c
index 7cc669934253..99249ab361d2 100644
--- a/drivers/clk/imx/clk.c
+++ b/drivers/clk/imx/clk.c
@@ -173,6 +173,8 @@ void imx_register_uart_clocks(unsigned int clk_count)
 	int i;

 	imx_uart_clocks = kcalloc(clk_count, sizeof(struct clk *), GFP_KERNEL);
+	if (!imx_uart_clocks)
+		return;

 	if (!of_stdout)
 		return;
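The fix is the standard allocate-then-check idiom. A hypothetical userspace sketch (using `calloc` for `kcalloc`; `register_clocks` is an invented name):

```c
#include <assert.h>
#include <stdlib.h>

static void **uart_clocks;	/* stand-in for the imx_uart_clocks global */

/* The allocation is checked before the pointer is ever dereferenced;
 * without the check, a failed kcalloc() would leave NULL behind and a
 * later uart_clocks[i] access elsewhere in the driver would crash. */
static int register_clocks(size_t clk_count)
{
	uart_clocks = calloc(clk_count, sizeof(void *));
	if (!uart_clocks)
		return -1;	/* the patch simply returns at this point */

	uart_clocks[0] = NULL;	/* safe: allocation known to have succeeded */
	return 0;
}
```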
From: Pu Wen <puwen@hygon.cn>
mainline inclusion
from mainline-v5.13-rc1
commit 59eca2fa1934de42d8aa44d3bef655c92ea69703
category: bugfix
bugzilla: 188047, https://gitee.com/src-openeuler/kernel/issues/I64FW4
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
Set the maximum DIE per package variable on Hygon using the nodes_per_socket value in order to do per-DIE manipulations for drivers such as powercap.
Signed-off-by: Pu Wen <puwen@hygon.cn>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210302020217.1827-1-puwen@hygon.cn
Signed-off-by: Yuyao Lin <linyuyao1@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 arch/x86/kernel/cpu/hygon.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
index 774ca6bfda9f..1f6de1d5ca84 100644
--- a/arch/x86/kernel/cpu/hygon.c
+++ b/arch/x86/kernel/cpu/hygon.c
@@ -235,12 +235,12 @@ static void bsp_init_hygon(struct cpuinfo_x86 *c)
 		u32 ecx;

 		ecx = cpuid_ecx(0x8000001e);
-		nodes_per_socket = ((ecx >> 8) & 7) + 1;
+		__max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1;
 	} else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
 		u64 value;

 		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		nodes_per_socket = ((value >> 3) & 7) + 1;
+		__max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1;
 	}

 	if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&
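The bitfield decoding this patch touches is easy to check in isolation. A small sketch of the `(count - 1)` encodings used in both branches (the helper names are invented; the bit positions are the ones in the diff above):

```c
#include <assert.h>
#include <stdint.h>

/* CPUID leaf 0x8000001e: ECX bits [10:8] hold the nodes-per-processor
 * count minus one, so a raw field value of 0 means one node. */
static unsigned int nodes_from_cpuid_ecx(uint32_t ecx)
{
	return ((ecx >> 8) & 7) + 1;
}

/* MSR_FAM10H_NODE_ID path: bits [5:3] carry the same minus-one
 * encoding of the node count. */
static unsigned int nodes_from_node_id_msr(uint64_t value)
{
	return (unsigned int)(((value >> 3) & 7) + 1);
}
```

On Hygon, each node corresponds to a die, which is why the same decoded value can seed both `nodes_per_socket` and `__max_die_per_package` in one assignment.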
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I66OCA
CVE: NA
--------------------------------
The percpu pools are cleared by clear_percpu_pools(), which then checks whether all pages are already freed. If some pages are not freed, we first isolate the freed pages and then migrate the used pages. Since we fail to take the percpu_pool lock, used pages can be freed back to the percpu_pool while the isolation is in progress. In that case, the list operations become unreliable.
To fix this problem, take all related locks sequentially and clear the percpu_pool again before isolating the freed pages.
Fixes: cdbeee51d044 ("mm/dynamic_hugetlb: add migration function")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Tong Tiangen <tongtiangen@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 mm/dynamic_hugetlb.c | 58 +++++++++++++++++++++++++++++---------------
 1 file changed, 38 insertions(+), 20 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c
index c6629a8de7c0..ca728450a341 100644
--- a/mm/dynamic_hugetlb.c
+++ b/mm/dynamic_hugetlb.c
@@ -182,25 +182,6 @@ static void reclaim_pages_from_percpu_pool(struct dhugetlb_pool *hpool,
 	}
 }

-static void clear_percpu_pools(struct dhugetlb_pool *hpool)
-{
-	struct percpu_pages_pool *percpu_pool;
-	int i;
-
-	lockdep_assert_held(&hpool->lock);
-
-	spin_unlock(&hpool->lock);
-	for (i = 0; i < NR_PERCPU_POOL; i++)
-		spin_lock(&hpool->percpu_pool[i].lock);
-	spin_lock(&hpool->lock);
-	for (i = 0; i < NR_PERCPU_POOL; i++) {
-		percpu_pool = &hpool->percpu_pool[i];
-		reclaim_pages_from_percpu_pool(hpool, percpu_pool, percpu_pool->free_pages);
-	}
-	for (i = 0; i < NR_PERCPU_POOL; i++)
-		spin_unlock(&hpool->percpu_pool[i].lock);
-}
-
 /* We only try 5 times to reclaim pages */
 #define HPOOL_RECLAIM_RETRIES	5

@@ -210,6 +191,7 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
 	struct split_hugepage *split_page, *split_next;
 	unsigned long nr_pages, block_size;
 	struct page *page, *next, *p;
+	struct percpu_pages_pool *percpu_pool;
 	bool need_migrate = false, need_initial = false;
 	int i, try;
 	LIST_HEAD(wait_page_list);
@@ -241,7 +223,22 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
 	try = 0;

 merge:
-	clear_percpu_pools(hpool);
+	/*
+	 * If we are merging 4K page to 2M page, we need to get
+	 * lock of percpu pool sequentially and clear percpu pool.
+	 */
+	if (hpages_pool_idx == HUGE_PAGES_POOL_2M) {
+		spin_unlock(&hpool->lock);
+		for (i = 0; i < NR_PERCPU_POOL; i++)
+			spin_lock(&hpool->percpu_pool[i].lock);
+		spin_lock(&hpool->lock);
+		for (i = 0; i < NR_PERCPU_POOL; i++) {
+			percpu_pool = &hpool->percpu_pool[i];
+			reclaim_pages_from_percpu_pool(hpool, percpu_pool,
+						       percpu_pool->free_pages);
+		}
+	}
+
 	page = pfn_to_page(split_page->start_pfn);
 	for (i = 0; i < nr_pages; i+= block_size) {
 		p = pfn_to_page(split_page->start_pfn + i);
@@ -252,6 +249,14 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
 			goto migrate;
 		}
 	}
+	if (hpages_pool_idx == HUGE_PAGES_POOL_2M) {
+		/*
+		 * All target 4K page are in src_hpages_pool, we
+		 * can unlock percpu pool.
+		 */
+		for (i = 0; i < NR_PERCPU_POOL; i++)
+			spin_unlock(&hpool->percpu_pool[i].lock);
+	}

 	list_del(&split_page->head_pages);
 	hpages_pool->split_normal_pages--;
@@ -284,8 +289,14 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
 	trace_dynamic_hugetlb_split_merge(hpool, page, DHUGETLB_MERGE, page_size(page));
 	return 0;
 next:
+	if (hpages_pool_idx == HUGE_PAGES_POOL_2M) {
+		/* Unlock percpu pool before try next */
+		for (i = 0; i < NR_PERCPU_POOL; i++)
+			spin_unlock(&hpool->percpu_pool[i].lock);
+	}
 	continue;
 migrate:
+	/* page migration only used for HUGE_PAGES_POOL_2M */
 	if (try++ >= HPOOL_RECLAIM_RETRIES)
 		goto next;

@@ -300,7 +311,10 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
 	}

 	/* Unlock and try migration. */
+	for (i = 0; i < NR_PERCPU_POOL; i++)
+		spin_unlock(&hpool->percpu_pool[i].lock);
 	spin_unlock(&hpool->lock);
+
 	for (i = 0; i < nr_pages; i+= block_size) {
 		p = pfn_to_page(split_page->start_pfn + i);
 		if (PagePool(p))
@@ -312,6 +326,10 @@ static int hpool_merge_page(struct dhugetlb_pool *hpool, int hpages_pool_idx, bo
 	}
 	spin_lock(&hpool->lock);

+	/*
+	 * Move all isolate pages to src_hpages_pool and then try
+	 * merge again.
+	 */
 	list_for_each_entry_safe(page, next, &wait_page_list, lru) {
 		list_move_tail(&page->lru, &src_hpages_pool->hugepage_freelists);
 		src_hpages_pool->free_normal_pages++;
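The locking dance can be modeled with flags. A toy, single-threaded sketch (integers instead of spinlocks; `toy_hpool` and the helpers are invented names): the pool lock is dropped, all per-CPU pool locks are taken in a fixed order, and the pool lock is re-taken, so the lock order stays consistent with the free path and cannot deadlock:

```c
#include <assert.h>

#define NR_POOLS 4

/* 1 = held, 0 = released; plain integers stand in for spinlocks. */
struct toy_hpool {
	int lock;
	int percpu_lock[NR_POOLS];
};

/* Mirrors the sequence open-coded in hpool_merge_page(): never take a
 * percpu pool lock while the pool lock is held, so there is a single
 * global lock order (percpu locks first, then the pool lock). */
static void lock_all_for_merge(struct toy_hpool *hp)
{
	int i;

	hp->lock = 0;			/* spin_unlock(&hpool->lock) */
	for (i = 0; i < NR_POOLS; i++)
		hp->percpu_lock[i] = 1;	/* spin_lock(percpu_pool[i].lock) */
	hp->lock = 1;			/* spin_lock(&hpool->lock) */
}

static void unlock_percpu(struct toy_hpool *hp)
{
	int i;

	for (i = 0; i < NR_POOLS; i++)
		hp->percpu_lock[i] = 0;	/* spin_unlock(percpu_pool[i].lock) */
}
```

Holding all the percpu locks across the isolation step is the actual fix: no page can slip back into a percpu free list while the merge loop walks it.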
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I66OCA
CVE: NA
--------------------------------
Since free_pages_prepare() clears PagePool without the lock held in free_page_to_dhugetlb_pool() and free_page_list_to_dhugetlb_pool(), using PagePool in hpool_merge_page() to check whether a page is freed is unreliable. Move free_pages_prepare() after ClearPagePool(), which guarantees that every allocated page carries the PagePool flag.
Fixes: 71197c63bfe9 ("mm/dynamic_hugetlb: free pages to dhugetlb_pool")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Tong Tiangen <tongtiangen@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 mm/dynamic_hugetlb.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/mm/dynamic_hugetlb.c b/mm/dynamic_hugetlb.c
index ca728450a341..6b615009c3a4 100644
--- a/mm/dynamic_hugetlb.c
+++ b/mm/dynamic_hugetlb.c
@@ -577,6 +577,10 @@ static void __free_page_to_dhugetlb_pool(struct page *page)
 	spin_lock_irqsave(&percpu_pool->lock, flags);

 	ClearPagePool(page);
+	if (!free_pages_prepare(page, 0, true)) {
+		SetPagePool(page);
+		goto out;
+	}
 	list_add(&page->lru, &percpu_pool->head_page);
 	percpu_pool->free_pages++;
 	percpu_pool->used_pages--;
@@ -585,7 +589,7 @@ static void __free_page_to_dhugetlb_pool(struct page *page)
 		reclaim_pages_from_percpu_pool(hpool, percpu_pool, PERCPU_POOL_PAGE_BATCH);
 		spin_unlock(&hpool->lock);
 	}
-
+out:
 	spin_unlock_irqrestore(&percpu_pool->lock, flags);
 	put_hpool(hpool);
 }
@@ -595,8 +599,7 @@ bool free_page_to_dhugetlb_pool(struct page *page)
 	if (!dhugetlb_enabled || !PagePool(page))
 		return false;

-	if (free_pages_prepare(page, 0, true))
-		__free_page_to_dhugetlb_pool(page);
+	__free_page_to_dhugetlb_pool(page);
 	return true;
 }

@@ -610,8 +613,7 @@ void free_page_list_to_dhugetlb_pool(struct list_head *list)
 	list_for_each_entry_safe(page, next, list, lru) {
 		if (PagePool(page)) {
 			list_del(&page->lru);
-			if (free_pages_prepare(page, 0, true))
-				__free_page_to_dhugetlb_pool(page);
+			__free_page_to_dhugetlb_pool(page);
 		}
 	}
 }
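The invariant the patch restores, that a page carries PagePool exactly while it is allocated, can be sketched as a tiny state machine. This is a hypothetical model: `prepare_ok` stands in for the return value of free_pages_prepare(), and there is no real locking here:

```c
#include <assert.h>

/* Toy page: pool_flag models PagePool, on_free_list the percpu list. */
struct toy_page {
	int pool_flag;
	int on_free_list;
};

/* Free path after the fix: the flag is cleared first and restored if
 * prepare fails, so there is no window where a freed page still looks
 * allocated (all of this runs under the percpu pool lock in the real
 * code, which is what makes the flag reliable for hpool_merge_page()). */
static void free_to_pool(struct toy_page *pg, int prepare_ok)
{
	pg->pool_flag = 0;		/* ClearPagePool() */
	if (!prepare_ok) {
		pg->pool_flag = 1;	/* SetPagePool(): keep page allocated */
		return;
	}
	pg->on_free_list = 1;		/* list_add() to the percpu free list */
}
```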