Kernel
[PATCH OLK-5.10] media: dvb-net: fix OOB access in ULE extension header tables
by Liu Kai 25 Apr '26
From: Ariel Silver <arielsilver77(a)gmail.com>
mainline inclusion
from mainline-v7.0-rc3
commit 24d87712727a5017ad142d63940589a36cd25647
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14132
CVE: CVE-2026-31405
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The ule_mandatory_ext_handlers[] and ule_optional_ext_handlers[] tables
in handle_one_ule_extension() are declared with 255 elements (valid
indices 0-254), but the index htype is derived from network-controlled
data as (ule_sndu_type & 0x00FF), giving a range of 0-255. When
htype equals 255, an out-of-bounds read occurs on the function pointer
table, and the OOB value may be called as a function pointer.
Add a bounds check on htype against the array size before either table
is accessed. Out-of-range values now cause the SNDU to be discarded.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Reported-by: Ariel Silver <arielsilver77(a)gmail.com>
Signed-off-by: Ariel Silver <arielsilver77(a)gmail.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei(a)kernel.org>
Signed-off-by: Liu Kai <liukai284(a)huawei.com>
---
drivers/media/dvb-core/dvb_net.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/media/dvb-core/dvb_net.c b/drivers/media/dvb-core/dvb_net.c
index c594b1bdfcaa..c8cbe901bcf0 100644
--- a/drivers/media/dvb-core/dvb_net.c
+++ b/drivers/media/dvb-core/dvb_net.c
@@ -228,6 +228,9 @@ static int handle_one_ule_extension( struct dvb_net_priv *p )
unsigned char hlen = (p->ule_sndu_type & 0x0700) >> 8;
unsigned char htype = p->ule_sndu_type & 0x00FF;
+ if (htype >= ARRAY_SIZE(ule_mandatory_ext_handlers))
+ return -1;
+
/* Discriminate mandatory and optional extension headers. */
if (hlen == 0) {
/* Mandatory extension header */
--
2.34.1
25 Apr '26
From: Hyunwoo Kim <imv4bel(a)gmail.com>
mainline inclusion
from mainline-v7.0-rc6
commit 9bbb19d21ded7d78645506f20d8c44895e3d0fb9
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14217
CVE: CVE-2026-31476
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
When a multichannel session binding request fails (e.g. wrong password),
the error path unconditionally sets sess->state = SMB2_SESSION_EXPIRED.
However, during binding, sess points to the target session looked up via
ksmbd_session_lookup_slowpath() -- which belongs to another connection's
user. This allows a remote attacker to invalidate any active session by
simply sending a binding request with a wrong password (DoS).
Fix this by skipping session expiration when the failed request was
a binding attempt, since the session does not belong to the current
connection. The reference taken by ksmbd_session_lookup_slowpath() is
still correctly released via ksmbd_user_session_put().
Cc: stable(a)vger.kernel.org
Signed-off-by: Hyunwoo Kim <imv4bel(a)gmail.com>
Acked-by: Namjae Jeon <linkinjeon(a)kernel.org>
Signed-off-by: Steve French <stfrench(a)microsoft.com>
Conflicts:
fs/smb/server/smb2pdu.c
[Commit 38c8a9a52082 ("smb: move client and server files to common
directory fs/smb") moved smb2pdu.c from fs/ksmbd to fs/smb/server.]
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/ksmbd/smb2pdu.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
index d75de3035327..176a6a51be6d 100644
--- a/fs/ksmbd/smb2pdu.c
+++ b/fs/ksmbd/smb2pdu.c
@@ -1912,8 +1912,14 @@ int smb2_sess_setup(struct ksmbd_work *work)
if (sess->user && sess->user->flags & KSMBD_USER_FLAG_DELAY_SESSION)
try_delay = true;
- sess->last_active = jiffies;
- sess->state = SMB2_SESSION_EXPIRED;
+ /*
+ * For binding requests, session belongs to another
+ * connection. Do not expire it.
+ */
+ if (!(req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) {
+ sess->last_active = jiffies;
+ sess->state = SMB2_SESSION_EXPIRED;
+ }
if (try_delay) {
ksmbd_conn_set_need_reconnect(conn);
ssleep(5);
--
2.52.0
From: Keith Busch <kbusch(a)kernel.org>
stable inclusion
from stable-v5.10.253
commit 965e2c943f065122f14282a88d70a8a92e12a4da
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14256
CVE: CVE-2026-31523
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 166e31d7dbf6aa44829b98aa446bda5c9580f12a ]
A user can change the polled queue count at run time. There's a brief
window during a reset where a hipri task may try to poll that queue
before the block layer has updated the queue maps, which would race with
the now interrupt driven queue and may cause double completions.
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Kanchan Joshi <joshi.k(a)samsung.com>
Signed-off-by: Keith Busch <kbusch(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
drivers/nvme/host/pci.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 5591ecff6ee8..8be07d3c99a5 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1102,7 +1102,8 @@ static int nvme_poll(struct blk_mq_hw_ctx *hctx)
struct nvme_queue *nvmeq = hctx->driver_data;
bool found;
- if (!nvme_cqe_pending(nvmeq))
+ if (!test_bit(NVMEQ_POLLED, &nvmeq->flags) ||
+ !nvme_cqe_pending(nvmeq))
return 0;
spin_lock(&nvmeq->cq_poll_lock);
--
2.52.0
From: Keith Busch <kbusch(a)kernel.org>
stable inclusion
from stable-v6.6.131
commit 6f12734c4b619f923a4df0b1a46b8098b187d324
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14256
CVE: CVE-2026-31523
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 166e31d7dbf6aa44829b98aa446bda5c9580f12a ]
A user can change the polled queue count at run time. There's a brief
window during a reset where a hipri task may try to poll that queue
before the block layer has updated the queue maps, which would race with
the now interrupt driven queue and may cause double completions.
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Kanchan Joshi <joshi.k(a)samsung.com>
Signed-off-by: Keith Busch <kbusch(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
drivers/nvme/host/pci.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index d01712c3b144..da4bb0da4f27 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1131,7 +1131,8 @@ static int nvme_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
struct nvme_queue *nvmeq = hctx->driver_data;
bool found;
- if (!nvme_cqe_pending(nvmeq))
+ if (!test_bit(NVMEQ_POLLED, &nvmeq->flags) ||
+ !nvme_cqe_pending(nvmeq))
return 0;
spin_lock(&nvmeq->cq_poll_lock);
--
2.52.0
25 Apr '26
From: Edward Adam Davis <eadavis(a)qq.com>
stable inclusion
from stable-v6.6.131
commit ecc50bfca9b5c2ee6aeef998181689b80477367b
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14191
CVE: CVE-2026-31448
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 5422fe71d26d42af6c454ca9527faaad4e677d6c upstream.
On the mkdir/mknod path, when mapping logical blocks to physical blocks,
if inserting a new extent into the extent tree fails (in this example,
because the file system disabled the huge file feature when marking the
inode as dirty), ext4_ext_map_blocks() only calls ext4_free_blocks() to
reclaim the physical block without deleting the corresponding data in
the extent tree. This causes subsequent mkdir operations to reference
the previously reclaimed physical block number again, even though this
physical block is already being used by the xattr block. Therefore, a
situation arises where both the directory and xattr are using the same
buffer head block in memory simultaneously.
The above causes ext4_xattr_block_set() to enter an infinite loop around
"inserted" and never release the inode lock, ultimately leading to the
143s blocking problem mentioned in [1].
If the metadata is corrupted, then trying to remove some extent space
can do even more harm. Also, in case EXT4_GET_BLOCKS_DELALLOC_RESERVE
was passed, removing space would wrongly update quota information.
Jan Kara suggests distinguishing between two cases:
1) The error is ENOSPC or EDQUOT - in this case the filesystem is fully
consistent and we must maintain its consistency including all the
accounting. However these errors can happen only early before we've
inserted the extent into the extent tree. So current code works correctly
for this case.
2) Some other error - this means metadata is corrupted. We should strive to
do as few modifications as possible to limit damage. So I'd just skip
freeing of allocated blocks.
[1]
INFO: task syz.0.17:5995 blocked for more than 143 seconds.
Call Trace:
inode_lock_nested include/linux/fs.h:1073 [inline]
__start_dirop fs/namei.c:2923 [inline]
start_dirop fs/namei.c:2934 [inline]
Reported-by: syzbot+512459401510e2a9a39f(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=1659aaaaa8d9d11265d7
Tested-by: syzbot+1659aaaaa8d9d11265d7(a)syzkaller.appspotmail.com
Reported-by: syzbot+1659aaaaa8d9d11265d7(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=512459401510e2a9a39f
Tested-by: syzbot+1659aaaaa8d9d11265d7(a)syzkaller.appspotmail.com
Signed-off-by: Edward Adam Davis <eadavis(a)qq.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Tested-by: syzbot+512459401510e2a9a39f(a)syzkaller.appspotmail.com
Link: https://patch.msgid.link/tencent_43696283A68450B761D76866C6F360E36705@qq.com
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Cc: stable(a)kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
fs/ext4/extents.c
[Commit fb138df7d886 ("ext4: get rid of ppath in
ext4_ext_insert_extent()") changed the way the result of
ext4_ext_insert_extent() is checked in ext4_ext_map_blocks().]
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/ext4/extents.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 4c77284be84d..7045784f9340 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4373,9 +4373,13 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
err = ext4_ext_insert_extent(handle, inode, &path, &newex, flags);
if (err) {
- if (allocated_clusters) {
+ /*
+ * Gracefully handle out of space conditions. If the filesystem
+ * is inconsistent, we'll just leak allocated blocks to avoid
+ * causing even more damage.
+ */
+ if (allocated_clusters && (err == -EDQUOT || err == -ENOSPC)) {
int fb_flags = 0;
-
/*
* free data blocks we just allocated.
* not a good idea to call discard here directly,
--
2.52.0
25 Apr '26
From: Edward Adam Davis <eadavis(a)qq.com>
stable inclusion
from stable-v6.6.131
commit ecc50bfca9b5c2ee6aeef998181689b80477367b
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14191
CVE: CVE-2026-31448
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 5422fe71d26d42af6c454ca9527faaad4e677d6c upstream.
On the mkdir/mknod path, when mapping logical blocks to physical blocks,
if inserting a new extent into the extent tree fails (in this example,
because the file system disabled the huge file feature when marking the
inode as dirty), ext4_ext_map_blocks() only calls ext4_free_blocks() to
reclaim the physical block without deleting the corresponding data in
the extent tree. This causes subsequent mkdir operations to reference
the previously reclaimed physical block number again, even though this
physical block is already being used by the xattr block. Therefore, a
situation arises where both the directory and xattr are using the same
buffer head block in memory simultaneously.
The above causes ext4_xattr_block_set() to enter an infinite loop around
"inserted" and never release the inode lock, ultimately leading to the
143s blocking problem mentioned in [1].
If the metadata is corrupted, then trying to remove some extent space
can do even more harm. Also, in case EXT4_GET_BLOCKS_DELALLOC_RESERVE
was passed, removing space would wrongly update quota information.
Jan Kara suggests distinguishing between two cases:
1) The error is ENOSPC or EDQUOT - in this case the filesystem is fully
consistent and we must maintain its consistency including all the
accounting. However these errors can happen only early before we've
inserted the extent into the extent tree. So current code works correctly
for this case.
2) Some other error - this means metadata is corrupted. We should strive to
do as few modifications as possible to limit damage. So I'd just skip
freeing of allocated blocks.
[1]
INFO: task syz.0.17:5995 blocked for more than 143 seconds.
Call Trace:
inode_lock_nested include/linux/fs.h:1073 [inline]
__start_dirop fs/namei.c:2923 [inline]
start_dirop fs/namei.c:2934 [inline]
Reported-by: syzbot+512459401510e2a9a39f(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=1659aaaaa8d9d11265d7
Tested-by: syzbot+1659aaaaa8d9d11265d7(a)syzkaller.appspotmail.com
Reported-by: syzbot+1659aaaaa8d9d11265d7(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=512459401510e2a9a39f
Tested-by: syzbot+1659aaaaa8d9d11265d7(a)syzkaller.appspotmail.com
Signed-off-by: Edward Adam Davis <eadavis(a)qq.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Tested-by: syzbot+512459401510e2a9a39f(a)syzkaller.appspotmail.com
Link: https://patch.msgid.link/tencent_43696283A68450B761D76866C6F360E36705@qq.com
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Cc: stable(a)kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
fs/ext4/extents.c
[Commit fb138df7d886 ("ext4: get rid of ppath in
ext4_ext_insert_extent()") changed the way the result of
ext4_ext_insert_extent() is checked in ext4_ext_map_blocks().]
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/ext4/extents.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index dbc2154f7d4e..997c437685a0 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4381,9 +4381,13 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
err = ext4_ext_insert_extent(handle, inode, &path, &newex, flags);
if (err) {
- if (allocated_clusters) {
+ /*
+ * Gracefully handle out of space conditions. If the filesystem
+ * is inconsistent, we'll just leak allocated blocks to avoid
+ * causing even more damage.
+ */
+ if (allocated_clusters && (err == -EDQUOT || err == -ENOSPC)) {
int fb_flags = 0;
-
/*
* free data blocks we just allocated.
* not a good idea to call discard here directly,
--
2.52.0
24 Apr '26
From: Asim Viladi Oglu Manizada <manizada(a)pm.me>
mainline inclusion
from mainline-v7.0-rc7
commit fda9522ed6afaec45cabc198d8492270c394c7bc
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14175
CVE: CVE-2026-31402
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
When a compound request such as READ + QUERY_INFO(Security) is received,
and the first command (READ) consumes most of the response buffer,
ksmbd could write beyond the allocated buffer while building a security
descriptor.
The root cause was that smb2_get_info_sec() checked buffer space using
ppntsd_size from xattr, while build_sec_desc() often synthesized a
significantly larger descriptor from POSIX ACLs.
This patch introduces smb_acl_sec_desc_scratch_len() to accurately
compute the final descriptor size beforehand, performs proper buffer
checking with smb2_calc_max_out_buf_len(), and uses exact-sized
allocation + iov pinning.
Cc: stable(a)vger.kernel.org
Fixes: e2b76ab8b5c9 ("ksmbd: add support for read compound")
Signed-off-by: Asim Viladi Oglu Manizada <manizada(a)pm.me>
Signed-off-by: Namjae Jeon <linkinjeon(a)kernel.org>
Signed-off-by: Steve French <stfrench(a)microsoft.com>
Conflicts:
fs/smb/server/smb2pdu.c
[Commit 0066f623bce8 ("ksmbd: use __GFP_RETRY_MAYFAIL") changed the way
smb_ntsd is allocated.]
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/smb/server/smb2pdu.c | 121 +++++++++++++++++++++++++++++-----------
fs/smb/server/smbacl.c | 43 ++++++++++++++
fs/smb/server/smbacl.h | 2 +
3 files changed, 134 insertions(+), 32 deletions(-)
diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
index 3681006c7ac8..ea7d467bd5cf 100644
--- a/fs/smb/server/smb2pdu.c
+++ b/fs/smb/server/smb2pdu.c
@@ -3369,20 +3369,24 @@ int smb2_open(struct ksmbd_work *work)
KSMBD_SHARE_FLAG_ACL_XATTR)) {
struct smb_fattr fattr;
struct smb_ntsd *pntsd;
- int pntsd_size, ace_num = 0;
+ int pntsd_size;
+ size_t scratch_len;
ksmbd_acls_fattr(&fattr, idmap, inode);
- if (fattr.cf_acls)
- ace_num = fattr.cf_acls->a_count;
- if (fattr.cf_dacls)
- ace_num += fattr.cf_dacls->a_count;
-
- pntsd = kmalloc(sizeof(struct smb_ntsd) +
- sizeof(struct smb_sid) * 3 +
- sizeof(struct smb_acl) +
- sizeof(struct smb_ace) * ace_num * 2,
- GFP_KERNEL);
+ scratch_len = smb_acl_sec_desc_scratch_len(&fattr,
+ NULL, 0,
+ OWNER_SECINFO | GROUP_SECINFO |
+ DACL_SECINFO);
+ if (!scratch_len || scratch_len == SIZE_MAX) {
+ rc = -EFBIG;
+ posix_acl_release(fattr.cf_acls);
+ posix_acl_release(fattr.cf_dacls);
+ goto err_out;
+ }
+
+ pntsd = kvzalloc(scratch_len, GFP_KERNEL);
if (!pntsd) {
+ rc = -ENOMEM;
posix_acl_release(fattr.cf_acls);
posix_acl_release(fattr.cf_dacls);
goto err_out;
@@ -3397,7 +3401,7 @@ int smb2_open(struct ksmbd_work *work)
posix_acl_release(fattr.cf_acls);
posix_acl_release(fattr.cf_dacls);
if (rc) {
- kfree(pntsd);
+ kvfree(pntsd);
goto err_out;
}
@@ -3407,7 +3411,7 @@ int smb2_open(struct ksmbd_work *work)
pntsd,
pntsd_size,
false);
- kfree(pntsd);
+ kvfree(pntsd);
if (rc)
pr_err("failed to store ntacl in xattr : %d\n",
rc);
@@ -5301,8 +5305,9 @@ static int smb2_get_info_file(struct ksmbd_work *work,
if (test_share_config_flag(work->tcon->share_conf,
KSMBD_SHARE_FLAG_PIPE)) {
/* smb2 info file called for pipe */
- return smb2_get_info_file_pipe(work->sess, req, rsp,
+ rc = smb2_get_info_file_pipe(work->sess, req, rsp,
work->response_buf);
+ goto iov_pin_out;
}
if (work->next_smb2_rcv_hdr_off) {
@@ -5402,6 +5407,12 @@ static int smb2_get_info_file(struct ksmbd_work *work,
rc = buffer_check_err(le32_to_cpu(req->OutputBufferLength),
rsp, work->response_buf);
ksmbd_fd_put(work, fp);
+
+iov_pin_out:
+ if (!rc)
+ rc = ksmbd_iov_pin_rsp(work, (void *)rsp,
+ offsetof(struct smb2_query_info_rsp, Buffer) +
+ le32_to_cpu(rsp->OutputBufferLength));
return rc;
}
@@ -5621,6 +5632,11 @@ static int smb2_get_info_filesystem(struct ksmbd_work *work,
rc = buffer_check_err(le32_to_cpu(req->OutputBufferLength),
rsp, work->response_buf);
path_put(&path);
+
+ if (!rc)
+ rc = ksmbd_iov_pin_rsp(work, (void *)rsp,
+ offsetof(struct smb2_query_info_rsp, Buffer) +
+ le32_to_cpu(rsp->OutputBufferLength));
return rc;
}
@@ -5630,13 +5646,14 @@ static int smb2_get_info_sec(struct ksmbd_work *work,
{
struct ksmbd_file *fp;
struct mnt_idmap *idmap;
- struct smb_ntsd *pntsd = (struct smb_ntsd *)rsp->Buffer, *ppntsd = NULL;
+ struct smb_ntsd *pntsd = NULL, *ppntsd = NULL;
struct smb_fattr fattr = {{0}};
struct inode *inode;
__u32 secdesclen = 0;
unsigned int id = KSMBD_NO_FID, pid = KSMBD_NO_FID;
int addition_info = le32_to_cpu(req->AdditionalInformation);
- int rc = 0, ppntsd_size = 0;
+ int rc = 0, ppntsd_size = 0, max_len;
+ size_t scratch_len = 0;
if (addition_info & ~(OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO |
PROTECTED_DACL_SECINFO |
@@ -5644,6 +5661,11 @@ static int smb2_get_info_sec(struct ksmbd_work *work,
ksmbd_debug(SMB, "Unsupported addition info: 0x%x)\n",
addition_info);
+ pntsd = kzalloc(ALIGN(sizeof(struct smb_ntsd), 8),
+ GFP_KERNEL);
+ if (!pntsd)
+ return -ENOMEM;
+
pntsd->revision = cpu_to_le16(1);
pntsd->type = cpu_to_le16(SELF_RELATIVE | DACL_PROTECTED);
pntsd->osidoffset = 0;
@@ -5652,9 +5674,7 @@ static int smb2_get_info_sec(struct ksmbd_work *work,
pntsd->dacloffset = 0;
secdesclen = sizeof(struct smb_ntsd);
- rsp->OutputBufferLength = cpu_to_le32(secdesclen);
-
- return 0;
+ goto iov_pin;
}
if (work->next_smb2_rcv_hdr_off) {
@@ -5686,18 +5706,58 @@ static int smb2_get_info_sec(struct ksmbd_work *work,
&ppntsd);
/* Check if sd buffer size exceeds response buffer size */
- if (smb2_resp_buf_len(work, 8) > ppntsd_size)
- rc = build_sec_desc(idmap, pntsd, ppntsd, ppntsd_size,
- addition_info, &secdesclen, &fattr);
+ max_len = smb2_calc_max_out_buf_len(work,
+ offsetof(struct smb2_query_info_rsp, Buffer),
+ le32_to_cpu(req->OutputBufferLength));
+ if (max_len < 0) {
+ rc = -EINVAL;
+ goto release_acl;
+ }
+
+ scratch_len = smb_acl_sec_desc_scratch_len(&fattr, ppntsd,
+ ppntsd_size, addition_info);
+ if (!scratch_len || scratch_len == SIZE_MAX) {
+ rc = -EFBIG;
+ goto release_acl;
+ }
+
+ pntsd = kvzalloc(scratch_len, GFP_KERNEL);
+ if (!pntsd) {
+ rc = -ENOMEM;
+ goto release_acl;
+ }
+
+ rc = build_sec_desc(idmap, pntsd, ppntsd, ppntsd_size,
+ addition_info, &secdesclen, &fattr);
+
+release_acl:
posix_acl_release(fattr.cf_acls);
posix_acl_release(fattr.cf_dacls);
kfree(ppntsd);
ksmbd_fd_put(work, fp);
+
+ if (!rc && ALIGN(secdesclen, 8) > scratch_len)
+ rc = -EFBIG;
if (rc)
- return rc;
+ goto err_out;
+iov_pin:
rsp->OutputBufferLength = cpu_to_le32(secdesclen);
- return 0;
+ rc = buffer_check_err(le32_to_cpu(req->OutputBufferLength),
+ rsp, work->response_buf);
+ if (rc)
+ goto err_out;
+
+ rc = ksmbd_iov_pin_rsp_read(work, (void *)rsp,
+ offsetof(struct smb2_query_info_rsp, Buffer),
+ pntsd, secdesclen);
+err_out:
+ if (rc) {
+ rsp->OutputBufferLength = 0;
+ kvfree(pntsd);
+ }
+
+ return rc;
}
/**
@@ -5721,6 +5781,9 @@ int smb2_query_info(struct ksmbd_work *work)
goto err_out;
}
+ rsp->StructureSize = cpu_to_le16(9);
+ rsp->OutputBufferOffset = cpu_to_le16(72);
+
switch (req->InfoType) {
case SMB2_O_INFO_FILE:
ksmbd_debug(SMB, "GOT SMB2_O_INFO_FILE\n");
@@ -5741,14 +5804,6 @@ int smb2_query_info(struct ksmbd_work *work)
}
ksmbd_revert_fsids(work);
- if (!rc) {
- rsp->StructureSize = cpu_to_le16(9);
- rsp->OutputBufferOffset = cpu_to_le16(72);
- rc = ksmbd_iov_pin_rsp(work, (void *)rsp,
- offsetof(struct smb2_query_info_rsp, Buffer) +
- le32_to_cpu(rsp->OutputBufferLength));
- }
-
err_out:
if (rc < 0) {
if (rc == -EACCES)
@@ -5759,6 +5814,8 @@ int smb2_query_info(struct ksmbd_work *work)
rsp->hdr.Status = STATUS_UNEXPECTED_IO_ERROR;
else if (rc == -ENOMEM)
rsp->hdr.Status = STATUS_INSUFFICIENT_RESOURCES;
+ else if (rc == -EINVAL && rsp->hdr.Status == 0)
+ rsp->hdr.Status = STATUS_INVALID_PARAMETER;
else if (rc == -EOPNOTSUPP || rsp->hdr.Status == 0)
rsp->hdr.Status = STATUS_INVALID_INFO_CLASS;
smb2_set_err_rsp(work);
diff --git a/fs/smb/server/smbacl.c b/fs/smb/server/smbacl.c
index 07d31dee99c0..b7818241e295 100644
--- a/fs/smb/server/smbacl.c
+++ b/fs/smb/server/smbacl.c
@@ -915,6 +915,49 @@ int parse_sec_desc(struct mnt_idmap *idmap, struct smb_ntsd *pntsd,
return 0;
}
+size_t smb_acl_sec_desc_scratch_len(struct smb_fattr *fattr,
+ struct smb_ntsd *ppntsd, int ppntsd_size, int addition_info)
+{
+ size_t len = sizeof(struct smb_ntsd);
+ size_t tmp;
+
+ if (addition_info & OWNER_SECINFO)
+ len += sizeof(struct smb_sid);
+ if (addition_info & GROUP_SECINFO)
+ len += sizeof(struct smb_sid);
+ if (!(addition_info & DACL_SECINFO))
+ return len;
+
+ len += sizeof(struct smb_acl);
+ if (ppntsd && ppntsd_size > 0) {
+ unsigned int dacl_offset = le32_to_cpu(ppntsd->dacloffset);
+
+ if (dacl_offset < ppntsd_size &&
+ check_add_overflow(len, ppntsd_size - dacl_offset, &len))
+ return 0;
+ }
+
+ if (fattr->cf_acls) {
+ if (check_mul_overflow((size_t)fattr->cf_acls->a_count,
+ 2 * sizeof(struct smb_ace), &tmp) ||
+ check_add_overflow(len, tmp, &len))
+ return 0;
+ } else {
+ /* default/minimum DACL */
+ if (check_add_overflow(len, 5 * sizeof(struct smb_ace), &len))
+ return 0;
+ }
+
+ if (fattr->cf_dacls) {
+ if (check_mul_overflow((size_t)fattr->cf_dacls->a_count,
+ sizeof(struct smb_ace), &tmp) ||
+ check_add_overflow(len, tmp, &len))
+ return 0;
+ }
+
+ return len;
+}
+
/* Convert permission bits from mode to equivalent CIFS ACL */
int build_sec_desc(struct mnt_idmap *idmap,
struct smb_ntsd *pntsd, struct smb_ntsd *ppntsd,
diff --git a/fs/smb/server/smbacl.h b/fs/smb/server/smbacl.h
index 2b52861707d8..64e6f8ebb68e 100644
--- a/fs/smb/server/smbacl.h
+++ b/fs/smb/server/smbacl.h
@@ -210,6 +210,8 @@ int set_info_sec(struct ksmbd_conn *conn, struct ksmbd_tree_connect *tcon,
bool type_check, bool get_write);
void id_to_sid(unsigned int cid, uint sidtype, struct smb_sid *ssid);
void ksmbd_init_domain(u32 *sub_auth);
+size_t smb_acl_sec_desc_scratch_len(struct smb_fattr *fattr,
+ struct smb_ntsd *ppntsd, int ppntsd_size, int addition_info);
static inline uid_t posix_acl_uid_translate(struct mnt_idmap *idmap,
struct posix_acl_entry *pace)
--
2.52.0
[PATCH OLK-6.6] sched/fair: Clear rel_deadline when initializing forked entities
by Zicheng Qu 24 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14279
CVE: NA
--------------------------------
A yield-triggered crash can happen when a newly forked sched_entity
enters the fair class with se->rel_deadline unexpectedly set.
The failing sequence is:
1. A task is forked while se->rel_deadline is still set.
2. __sched_fork() initializes vruntime, vlag and other sched_entity
state, but does not clear rel_deadline.
3. On the first enqueue, enqueue_entity() calls place_entity().
4. Because se->rel_deadline is set, place_entity() treats se->deadline
as a relative deadline and converts it to an absolute deadline by
adding the current vruntime.
5. However, the forked entity's deadline is not a valid inherited
relative deadline for this new scheduling instance, so the conversion
produces an abnormally large deadline.
6. If the task later calls sched_yield(), yield_task_fair() advances
se->vruntime to se->deadline.
7. The inflated vruntime is then used by the following enqueue path,
where the vruntime-derived key can overflow when multiplied by the
entity weight.
8. This corrupts cfs_rq->sum_w_vruntime, breaks EEVDF eligibility
calculation, and can eventually make all entities appear ineligible.
pick_next_entity() may then return NULL unexpectedly, leading to a
later NULL dereference.
A captured trace shows the effect clearly. Before yield, the entity's
vruntime was around:
9834017729983308
After yield_task_fair() executed:
se->vruntime = se->deadline
the vruntime jumped to:
19668035460670230
and the deadline was later advanced further to:
19668035463470230
This shows that the deadline had already become abnormally large before
yield_task_fair() copied it into vruntime.
rel_deadline is only meaningful when se->deadline really carries a
relative deadline that still needs to be placed against vruntime. A
freshly forked sched_entity should not inherit or retain this state.
Clear se->rel_deadline in __sched_fork(), together with the other
sched_entity runtime state, so that the first enqueue does not interpret
the new entity's deadline as a stale relative deadline.
Fixes: 82e9d0456e06 ("sched/fair: Avoid re-setting virtual deadline on 'migrations'")
Signed-off-by: Zicheng Qu <quzicheng(a)huawei.com>
---
kernel/sched/core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b833c69d000e..cc0ea2b06f2e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4543,6 +4543,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
p->se.vruntime = 0;
p->se.vlag = 0;
p->se.slice = sysctl_sched_base_slice;
+ p->se.rel_deadline = 0;
INIT_LIST_HEAD(&p->se.group_node);
#ifdef CONFIG_FAIR_GROUP_SCHED
--
2.34.1
[PATCH OLK-6.6] perf/x86/intel: Add missing branch counters constraint apply
by Luo Gengkun 24 Apr '26
From: Dapeng Mi <dapeng1.mi(a)linux.intel.com>
mainline inclusion
from mainline-v7.0-rc5
commit 1d07bbd7ea36ea0b8dfa8068dbe67eb3a32d9590
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14277
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
----------------------------------------------------------------------
When running the command:
'perf record -e "{instructions,instructions:p}" -j any,counter sleep 1',
a "shift-out-of-bounds" warning is reported on CWF.
UBSAN: shift-out-of-bounds in /kbuild/src/consumer/arch/x86/events/intel/lbr.c:970:15
shift exponent 64 is too large for 64-bit type 'long long unsigned int'
......
intel_pmu_lbr_counters_reorder.isra.0.cold+0x2a/0xa7
intel_pmu_lbr_save_brstack+0xc0/0x4c0
setup_arch_pebs_sample_data+0x114b/0x2400
The warning occurs because the second "instructions:p" event, which
involves branch counters sampling, is incorrectly programmed to fixed
counter 0 instead of the general-purpose (GP) counters 0-3 that support
branch counters sampling. Currently only GP counters 0-3 support branch
counters sampling on CWF, so any event involving branch counters sampling
must be programmed on GP counters 0-3. Since the counter index of fixed
counter 0 is 32, the "src" value in the code below is right-shifted by
64 bits, triggering the "shift-out-of-bounds" warning.
cnt = (src >> (order[j] * LBR_INFO_BR_CNTR_BITS)) & LBR_INFO_BR_CNTR_MASK;
The root cause is the loss of the branch counters constraint for the
new event in the branch counters sampling event group, since it isn't
yet part of the sibling list. This results in the second
"instructions:p" event being programmed on fixed counter 0 incorrectly
instead of the appropriate GP counters 0-3.
To address this, we apply the missing branch counters constraint for
the last event in the group. Additionally, we introduce a new function,
`intel_set_branch_counter_constr()`, to apply the branch counters
constraint and avoid code duplication.
Fixes: 33744916196b ("perf/x86/intel: Support branch counters logging")
Reported-by: Xudong Hao <xudong.hao(a)intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi(a)linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Link: https://patch.msgid.link/20260228053320.140406-2-dapeng1.mi@linux.intel.com
Cc: stable(a)vger.kernel.org
Conflicts:
arch/x86/events/intel/core.c
[Resolved conflicts caused by context differences.]
Signed-off-by: Luo Gengkun <luogengkun2(a)huawei.com>
---
arch/x86/events/intel/core.c | 31 +++++++++++++++++++++----------
1 file changed, 21 insertions(+), 10 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 5c6673124520..1844d4917bbb 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4210,6 +4210,19 @@ static inline void intel_pmu_set_acr_caused_constr(struct perf_event *event,
event->hw_ext->dyn_constraint &= hybrid(event->pmu, acr_cause_mask64);
}
+static inline int intel_set_branch_counter_constr(struct perf_event *event,
+ int *num)
+{
+ if (branch_sample_call_stack(event))
+ return -EINVAL;
+ if (branch_sample_counters(event)) {
+ (*num)++;
+ event->hw_ext->dyn_constraint &= x86_pmu.lbr_counters;
+ }
+
+ return 0;
+}
+
static int intel_pmu_hw_config(struct perf_event *event)
{
int ret = x86_pmu_hw_config(event);
@@ -4265,21 +4278,19 @@ static int intel_pmu_hw_config(struct perf_event *event)
* group, which requires the extra space to store the counters.
*/
leader = event->group_leader;
- if (branch_sample_call_stack(leader))
+ if (intel_set_branch_counter_constr(leader, &num))
return -EINVAL;
- if (branch_sample_counters(leader)) {
- num++;
- leader->hw_ext->dyn_constraint &= x86_pmu.lbr_counters;
- }
leader->hw.flags |= PERF_X86_EVENT_BRANCH_COUNTERS;
for_each_sibling_event(sibling, leader) {
- if (branch_sample_call_stack(sibling))
+ if (intel_set_branch_counter_constr(sibling, &num))
+ return -EINVAL;
+ }
+
+ /* event isn't installed as a sibling yet. */
+ if (event != leader) {
+ if (intel_set_branch_counter_constr(event, &num))
return -EINVAL;
- if (branch_sample_counters(sibling)) {
- num++;
- sibling->hw_ext->dyn_constraint &= x86_pmu.lbr_counters;
- }
}
if (num > fls(x86_pmu.lbr_counters))
--
2.34.1
Luo Gengkun (1):
perf: Fix kabi breakage of perf_output_handle
Peter Zijlstra (2):
perf: Extract a few helpers
perf: Make sure to use pmu_ctx->pmu for groups
Peter Zijlstra (Intel) (1):
perf: Avoid the read if the count is already updated
include/linux/perf_event.h | 9 +++-
kernel/events/core.c | 91 +++++++++++++++++++------------------
kernel/events/ring_buffer.c | 1 +
3 files changed, 55 insertions(+), 46 deletions(-)
--
2.34.1
[PATCH OLK-6.6] bpf: Fix undefined behavior in interpreter sdiv/smod for INT_MIN
by Pu Lehui 24 Apr '26
From: Jenny Guanni Qu <qguanni(a)gmail.com>
stable inclusion
from stable-v6.6.131
commit 694ea55f1b1c74f9942d91ec366ae9e822422e42
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14258
CVE: CVE-2026-31525
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit c77b30bd1dcb61f66c640ff7d2757816210c7cb0 ]
The BPF interpreter's signed 32-bit division and modulo handlers use
the kernel abs() macro on s32 operands. The abs() macro documentation
(include/linux/math.h) explicitly states the result is undefined when
the input is the type minimum. When DST contains S32_MIN (0x80000000),
abs((s32)DST) triggers undefined behavior and returns S32_MIN unchanged
on arm64/x86. This value is then sign-extended to u64 as
0xFFFFFFFF80000000, causing do_div() to compute the wrong result.
The verifier's abstract interpretation (scalar32_min_max_sdiv) computes
the mathematically correct result for range tracking, creating a
verifier/interpreter mismatch that can be exploited for out-of-bounds
map value access.
Introduce abs_s32() which handles S32_MIN correctly by casting to u32
before negating, avoiding signed overflow entirely. Replace all 8
abs((s32)...) call sites in the interpreter's sdiv32/smod32 handlers.
s32 is the only affected case -- the s64 division/modulo handlers do
not use abs().
Fixes: ec0e2da95f72 ("bpf: Support new signed div/mod instructions.")
Acked-by: Yonghong Song <yonghong.song(a)linux.dev>
Acked-by: Mykyta Yatsenko <yatsenko(a)meta.com>
Signed-off-by: Jenny Guanni Qu <qguanni(a)gmail.com>
Link: https://lore.kernel.org/r/20260311011116.2108005-2-qguanni@gmail.com
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
kernel/bpf/core.c | 22 ++++++++++++++--------
1 file changed, 14 insertions(+), 8 deletions(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 72a7503a59d0..e9ca3e36783e 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1668,6 +1668,12 @@ bool bpf_opcode_in_insntable(u8 code)
}
#ifndef CONFIG_BPF_JIT_ALWAYS_ON
+/* Absolute value of s32 without undefined behavior for S32_MIN */
+static u32 abs_s32(s32 x)
+{
+ return x >= 0 ? (u32)x : -(u32)x;
+}
+
/**
* ___bpf_prog_run - run eBPF program on a given context
* @regs: is the array of MAX_BPF_EXT_REG eBPF pseudo-registers
@@ -1832,8 +1838,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
DST = do_div(AX, (u32) SRC);
break;
case 1:
- AX = abs((s32)DST);
- AX = do_div(AX, abs((s32)SRC));
+ AX = abs_s32((s32)DST);
+ AX = do_div(AX, abs_s32((s32)SRC));
if ((s32)DST < 0)
DST = (u32)-AX;
else
@@ -1860,8 +1866,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
DST = do_div(AX, (u32) IMM);
break;
case 1:
- AX = abs((s32)DST);
- AX = do_div(AX, abs((s32)IMM));
+ AX = abs_s32((s32)DST);
+ AX = do_div(AX, abs_s32((s32)IMM));
if ((s32)DST < 0)
DST = (u32)-AX;
else
@@ -1887,8 +1893,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
DST = (u32) AX;
break;
case 1:
- AX = abs((s32)DST);
- do_div(AX, abs((s32)SRC));
+ AX = abs_s32((s32)DST);
+ do_div(AX, abs_s32((s32)SRC));
if (((s32)DST < 0) == ((s32)SRC < 0))
DST = (u32)AX;
else
@@ -1914,8 +1920,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
DST = (u32) AX;
break;
case 1:
- AX = abs((s32)DST);
- do_div(AX, abs((s32)IMM));
+ AX = abs_s32((s32)DST);
+ do_div(AX, abs_s32((s32)IMM));
if (((s32)DST < 0) == ((s32)IMM < 0))
DST = (u32)AX;
else
--
2.34.1
Peter Zijlstra (2):
perf: Extract a few helpers
perf: Make sure to use pmu_ctx->pmu for groups
Peter Zijlstra (Intel) (1):
perf: Avoid the read if the count is already updated
include/linux/perf_event.h | 8 +++-
kernel/events/core.c | 91 +++++++++++++++++++------------------
kernel/events/ring_buffer.c | 1 +
3 files changed, 54 insertions(+), 46 deletions(-)
--
2.34.1
Fix CVE-2026-31467.
Jiucheng Xu (1):
erofs: add GFP_NOIO in the bio completion if needed
fs/erofs/zdata.c | 3 +++
1 file changed, 3 insertions(+)
--
2.34.1
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/9010
----------------------------------------
The current bpf_sched_set_task_prefer_nid() passes the full node mask
directly to set_prefer_cpus_ptr(), which may result in preferred CPUs
being outside of task->cpus_ptr.
In such cases, the scheduler will ignore the preferred CPUs completely,
causing the preferred node setting to NOT work at all.
Fix this by computing the intersection between task->cpus_ptr and
cpumask_of_node(nid), ensuring the resulting preferred mask is a
subset of the task's allowed CPUs.
Fixes: 4e47cb331dc0 ("sched/ebpf: Add kfunc to set the preferred NUMA node for the task")
Signed-off-by: Hui Tang <tanghui20(a)huawei.com>
---
kernel/sched/bpf_sched.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/bpf_sched.c b/kernel/sched/bpf_sched.c
index 6849a58439b2..ffe13eed601c 100644
--- a/kernel/sched/bpf_sched.c
+++ b/kernel/sched/bpf_sched.c
@@ -212,13 +212,16 @@ __bpf_kfunc long bpf_sched_tag_of_entity(struct sched_entity *se)
__bpf_kfunc int bpf_sched_set_task_prefer_nid(struct task_struct *task, int nid)
{
+ cpumask_t mask;
+
if (!task)
return -EINVAL;
if ((unsigned int)nid >= nr_node_ids)
return -EINVAL;
- return set_prefer_cpus_ptr(task, cpumask_of_node(nid));
+ cpumask_and(&mask, task->cpus_ptr, cpumask_of_node(nid));
+ return set_prefer_cpus_ptr(task, &mask);
}
BTF_SET8_START(sched_task_kfunc_btf_ids)
--
2.34.1
Fix CVE-2026-31398.
David Hildenbrand (4):
mm: convert FPB_IGNORE_* into FPB_RESPECT_*
mm: smaller folio_pte_batch() improvements
mm: split folio_pte_batch() into folio_pte_batch() and
folio_pte_batch_flags()
mm: remove boolean output parameters from folio_pte_batch_ext()
Dev Jain (2):
mm: introduce FPB_RESPECT_WRITE for PTE batching infrastructure
mm/rmap: fix incorrect pte restoration for lazyfree folios
Petr Vaněk (1):
mm: fix folio_pte_batch() on XEN PV
mm/internal.h | 151 +++++++++++++++++++++++++++----------------------
mm/madvise.c | 27 ++-------
mm/memory.c | 21 +++----
mm/mempolicy.c | 5 +-
mm/mremap.c | 4 +-
mm/rmap.c | 11 +++-
mm/util.c | 29 ++++++++++
7 files changed, 135 insertions(+), 113 deletions(-)
--
2.43.0
Daniel Borkmann (1):
bpf: Fix incorrect pruning due to atomic fetch precision tracking
Qi Tang (1):
bpf: reject direct access to nullable PTR_TO_BUF pointers
Varun R Mallya (1):
bpf: Reject sleepable kprobe_multi programs at attach time
kernel/bpf/verifier.c | 28 ++++++++++++++++++++++++----
kernel/trace/bpf_trace.c | 4 ++++
2 files changed, 28 insertions(+), 4 deletions(-)
--
2.34.1
From: Varun R Mallya <varunrmallya(a)gmail.com>
mainline inclusion
from mainline-v7.0-rc7
commit eb7024bfcc5f68ed11ed9dd4891a3073c15f04a8
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/9004
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
kprobe_multi programs run in atomic/RCU context and cannot sleep.
However, bpf_kprobe_multi_link_attach() did not validate whether the
program being attached had the sleepable flag set, allowing sleepable
helpers such as bpf_copy_from_user() to be invoked from a non-sleepable
context.
This causes a "sleeping function called from invalid context" splat:
BUG: sleeping function called from invalid context at ./include/linux/uaccess.h:169
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1787, name: sudo
preempt_count: 1, expected: 0
RCU nest depth: 2, expected: 0
Fix this by rejecting sleepable programs early in
bpf_kprobe_multi_link_attach(), before any further processing.
Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
Signed-off-by: Varun R Mallya <varunrmallya(a)gmail.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor(a)gmail.com>
Acked-by: Leon Hwang <leon.hwang(a)linux.dev>
Acked-by: Jiri Olsa <jolsa(a)kernel.org>
Link: https://lore.kernel.org/r/20260401191126.440683-1-varunrmallya@gmail.com
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Conflicts:
kernel/trace/bpf_trace.c
[ctx conflicts]
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
kernel/trace/bpf_trace.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 50556b239023..1cdb832b9be5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2901,6 +2901,10 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
if (sizeof(u64) != sizeof(void *))
return -EOPNOTSUPP;
+ /* kprobe_multi is not allowed to be sleepable. */
+ if (prog->sleepable)
+ return -EINVAL;
+
if (prog->expected_attach_type != BPF_TRACE_KPROBE_MULTI)
return -EINVAL;
--
2.34.1
Jinqian Yang (4):
arm64: add check for GICv4.1 before enabling vtimer
KVM: arm64: Fix mbigen vtimer interrupt loss bug
irqchip/gic-v3-its: Read ITS_VERSION with vendor checking
mbigen: fix soft pg bug on HIP12
Zhou Wang (1):
irqchip: add hip12 support for vtimer irq bypass
arch/arm64/kvm/arch_timer.c | 23 +++-
arch/arm64/kvm/vgic/vgic-v3.c | 2 +-
drivers/irqchip/irq-gic-v3-its.c | 2 +-
drivers/irqchip/irq-gic-v3.c | 8 ++
drivers/irqchip/irq-mbigen.c | 202 +++++++++++++++++++++++++------
5 files changed, 195 insertions(+), 42 deletions(-)
--
2.33.0
[PATCH OLK-6.6] net: usb: cdc_ncm: add ndpoffset to NDP32 nframes bounds check
by Quanmin Yan 21 Apr '26
From: Tobi Gaertner <tob.gaertner(a)me.com>
stable inclusion
from stable-v6.6.130
commit 125f932a76a97904ef8a555f1dd53e5d0e288c54
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14087
CVE: CVE-2026-23447
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 77914255155e68a20aa41175edeecf8121dac391 ]
The same bounds-check bug fixed for NDP16 in the previous patch also
exists in cdc_ncm_rx_verify_ndp32(). The DPE array size is validated
against the total skb length without accounting for ndpoffset, allowing
out-of-bounds reads when the NDP32 is placed near the end of the NTB.
Add ndpoffset to the nframes bounds check and use struct_size_t() to
express the NDP-plus-DPE-array size more clearly.
Compile-tested only.
Fixes: 0fa81b304a79 ("cdc_ncm: Implement the 32-bit version of NCM Transfer Block")
Signed-off-by: Tobi Gaertner <tob.gaertner(a)me.com>
Link: https://patch.msgid.link/20260314054640.2895026-3-tob.gaertner@me.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1(a)huawei.com>
---
drivers/net/usb/cdc_ncm.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
index ae7a2829fe49..56dfd4cd2aa4 100644
--- a/drivers/net/usb/cdc_ncm.c
+++ b/drivers/net/usb/cdc_ncm.c
@@ -1693,6 +1693,7 @@ int cdc_ncm_rx_verify_ndp32(struct sk_buff *skb_in, int ndpoffset)
struct usbnet *dev = netdev_priv(skb_in->dev);
struct usb_cdc_ncm_ndp32 *ndp32;
int ret = -EINVAL;
+ size_t ndp_len;
if ((ndpoffset + sizeof(struct usb_cdc_ncm_ndp32)) > skb_in->len) {
netif_dbg(dev, rx_err, dev->net, "invalid NDP offset <%u>\n",
@@ -1712,8 +1713,8 @@ int cdc_ncm_rx_verify_ndp32(struct sk_buff *skb_in, int ndpoffset)
sizeof(struct usb_cdc_ncm_dpe32));
ret--; /* we process NDP entries except for the last one */
- if ((sizeof(struct usb_cdc_ncm_ndp32) +
- ret * (sizeof(struct usb_cdc_ncm_dpe32))) > skb_in->len) {
+ ndp_len = struct_size_t(struct usb_cdc_ncm_ndp32, dpe32, ret);
+ if (ndpoffset + ndp_len > skb_in->len) {
netif_dbg(dev, rx_err, dev->net, "Invalid nframes = %d\n", ret);
ret = -EINVAL;
}
--
2.43.0
From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
stable inclusion
from stable-v6.6.130
commit 72f90f481c6a059680b9b976695d4cfb04fba1f3
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13969
CVE: CVE-2026-23312
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 4b063c002ca759d1b299988ee23f564c9609c875 upstream.
The kaweth driver should validate that the device it is probing has the
proper number and types of USB endpoints it is expecting before it binds
to it. If a malicious device does not have the expected endpoints, the
driver will crash later on when it blindly accesses them.
Cc: stable <stable(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Reviewed-by: Simon Horman <horms(a)kernel.org>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Link: https://patch.msgid.link/2026022305-substance-virtual-c728@gregkh
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
drivers/net/usb/kaweth.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/net/usb/kaweth.c b/drivers/net/usb/kaweth.c
index c9efb7df892e..a8c3ecf7d810 100644
--- a/drivers/net/usb/kaweth.c
+++ b/drivers/net/usb/kaweth.c
@@ -885,6 +885,13 @@ static int kaweth_probe(
const eth_addr_t bcast_addr = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
int result = 0;
int rv = -EIO;
+ static const u8 bulk_ep_addr[] = {
+ 1 | USB_DIR_IN,
+ 2 | USB_DIR_OUT,
+ 0};
+ static const u8 int_ep_addr[] = {
+ 3 | USB_DIR_IN,
+ 0};
dev_dbg(dev,
"Kawasaki Device Probe (Device number:%d): 0x%4.4x:0x%4.4x:0x%4.4x\n",
@@ -898,6 +905,12 @@ static int kaweth_probe(
(int)udev->descriptor.bLength,
(int)udev->descriptor.bDescriptorType);
+ if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
+ !usb_check_int_endpoints(intf, int_ep_addr)) {
+ dev_err(dev, "couldn't find required endpoints\n");
+ return -ENODEV;
+ }
+
netdev = alloc_etherdev(sizeof(*kaweth));
if (!netdev)
return -ENOMEM;
--
2.43.0
[PATCH OLK-6.6] perf/core: Fix race between perf_event_exit_task and perf_pending_task
by Luo Gengkun 21 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14165
--------------------------------
A race condition exists between perf_event_exit_task() and
perf_pending_task().
During begin_new_exec(), perf_event_exit_task() may be called while the
PF_EXITING flag is not yet set on the task, so perf_sigtrap() continues
to execute and triggers WARN_ON_ONCE(event->ctx->task != current).
To fix this problem, also check if the event->ctx->task is TASK_TOMBSTONE.
Fixes: 97ba62b27867 ("perf: Add support for SIGTRAP on perf events")
Signed-off-by: Luo Gengkun <luogengkun2(a)huawei.com>
---
kernel/events/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index cdd34d6e3dd4..7657ddf008ce 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6782,7 +6782,7 @@ static void perf_sigtrap(struct perf_event *event)
* Both perf_pending_task() and perf_pending_irq() can race with the
* task exiting.
*/
- if (current->flags & PF_EXITING)
+ if (current->flags & PF_EXITING || event->ctx->task == TASK_TOMBSTONE)
return;
/*
--
2.34.1
[PATCH OLK-6.6] arm64: mm: fix pass user prot to ioremap_prot in generic_access_phys
by Jinjiang Tu 20 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8982
----------------------------------------
Here is a syzkaller error log:
[0000000020ffc000] pgd=080000010598d403, p4d=080000010598d403, pud=0800000125ddb403,
pmd=080000007833c403, pte=01608000007fcfcf
Unable to handle kernel read from unreadable memory at virtual address ffff80008ea89000
KASAN: probably user-memory-access in range [0x0000000475448000-0x0000000475448007]
Mem abort info:
ESR = 0x000000009600000f
EC = 0x25: DABT (current EL), IL = 32 bits
SET = 0, FnV = 0
EA = 0, S1PTW = 0
FSC = 0x0f: level 3 permission fault
Data abort info:
ISV = 0, ISS = 0x0000000f, ISS2 = 0x00000000
CM = 0, WnR = 0, TnD = 0, TagAccess = 0
GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000001244aa000
[ffff80008ea89000] pgd=100000013ffff403, p4d=100000013ffff403, pud=100000013fffe403,
pmd=100000010a453403, pte=01608000007fcfcf
Internal error: Oops: 000000009600000f [#1] SMP
Modules linked in: team
CPU: 1 PID: 10840 Comm: syz.9.83 Kdump: loaded Tainted: G
Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
pstate: 20400005 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : __memcpy_fromio+0x80/0xf8
lr : generic_access_phys+0x20c/0x2b8
sp : ffff8000a0507960
x29: ffff8000a0507960 x28: 1ffff000140a0f44 x27: ffff00003833cfe0
x26: 0000000000000000 x25: 0000000000001000 x24: 0010000000000001
x23: ffff80008ea89000 x22: ffff00004ea63000 x21: 0000000000001000
x20: ffff80008ea89000 x19: ffff00004ea62000 x18: 0000000000000000
x17: 0000000000000000 x16: 0000000000000000 x15: ffff8000806f1e3c
x14: ffff8000806f1d44 x13: 0000000041b58ab3 x12: ffff7000140a0f23
x11: 1ffff000140a0f22 x10: ffff7000140a0f22 x9 : ffff800080579d24
x8 : 0000000000000004 x7 : 0000000000000003 x6 : 0000000000000001
x5 : ffff8000a0507910 x4 : ffff7000140a0f22 x3 : dfff800000000000
x2 : 0000000000001000 x1 : ffff80008ea89000 x0 : ffff00004ea62000
Call trace:
__memcpy_fromio+0x80/0xf8
generic_access_phys+0x20c/0x2b8
__access_remote_vm+0x46c/0x5b8
access_remote_vm+0x18/0x30
environ_read+0x238/0x3e8
vfs_read+0xe4/0x2b0
ksys_read+0xcc/0x178
__arm64_sys_read+0x4c/0x68
invoke_syscall+0x68/0x1a0
el0_svc_common.constprop.0+0x11c/0x150
do_el0_svc+0x38/0x50
el0_svc+0x50/0x258
el0t_64_sync_handler+0xc0/0xc8
el0t_64_sync+0x1a4/0x1a8
Code: 91002339 aa1403f7 8b190276 d503201f (f94002f8)
The local syzkaller reproducer first maps an I/O address from /dev/mem
into userspace, overriding the stack vma with the MAP_FIXED flag. As a
result, when /proc/$pid/environ is read, generic_access_phys() is
called to access the region, which triggers a PAN permission-check
fault and causes a kernel access fault.
The root cause is that generic_access_phys() passes a user pte value to
ioremap_prot(); the user pte has the PTE_USER and PTE_NG bits set.
Consequently, any subsequent kernel-mode access to the remapped address
raises a fault.
To fix it, define arch_mk_kernel_prot() for arm64 to convert the user
prot to a kernel prot, and call it in generic_access_phys(), so that a
kernel prot is passed to ioremap_prot().
Mainline kernel fixes this issue by commit f6bf47ab32e0 ("arm64: io:
Rename ioremap_prot() to __ioremap_prot()") and commit 8f098037139b
("arm64: io: Extract user memory type in ioremap_prot()"), which breaks
KABI. So don't fix with the mainline kernel approach.
Fixes: 893dea9ccd08 ("arm64: Add HAVE_IOREMAP_PROT support")
Signed-off-by: Zeng Heng <zengheng4(a)huawei.com>
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
arch/arm64/include/asm/io.h | 11 +++++++++++
mm/memory.c | 14 ++++++++++++--
2 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index e9d6bcfddf35..227c16c8fc1d 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -146,6 +146,17 @@ int arm64_ioremap_prot_hook_register(const ioremap_prot_hook_t hook);
#define ioremap_prot ioremap_prot
+#define arch_mk_kernel_prot arch_mk_kernel_prot
+static inline unsigned long arch_mk_kernel_prot(unsigned long user_prot)
+{
+ unsigned long kernel_prot_val;
+
+ kernel_prot_val = _PAGE_KERNEL & ~PTE_ATTRINDX_MASK;
+ kernel_prot_val |= user_prot & PTE_ATTRINDX_MASK;
+
+ return kernel_prot_val;
+}
+
#define _PAGE_IOREMAP PROT_DEVICE_nGnRE
#define ioremap_wc(addr, size) \
diff --git a/mm/memory.c b/mm/memory.c
index 794233ef55eb..28de289fe973 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6536,6 +6536,14 @@ int follow_phys(struct vm_area_struct *vma,
return ret;
}
+#ifndef arch_mk_kernel_prot
+#define arch_mk_kernel_prot arch_mk_kernel_prot
+static inline unsigned long arch_mk_kernel_prot(unsigned long user_prot)
+{
+ return user_prot;
+}
+#endif
+
/**
* generic_access_phys - generic implementation for iomem mmap access
* @vma: the vma to access
@@ -6552,7 +6560,8 @@ int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
void *buf, int len, int write)
{
resource_size_t phys_addr;
- unsigned long prot = 0;
+ unsigned long user_prot = 0;
+ unsigned long prot;
void __iomem *maddr;
pte_t *ptep, pte;
spinlock_t *ptl;
@@ -6568,12 +6577,13 @@ int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
pte = ptep_get(ptep);
pte_unmap_unlock(ptep, ptl);
- prot = pgprot_val(pte_pgprot(pte));
+ user_prot = pgprot_val(pte_pgprot(pte));
phys_addr = (resource_size_t)pte_pfn(pte) << PAGE_SHIFT;
if ((write & FOLL_WRITE) && !pte_write(pte))
return -EINVAL;
+ prot = arch_mk_kernel_prot(user_prot);
maddr = ioremap_prot(phys_addr, PAGE_ALIGN(len + offset), prot);
if (!maddr)
return -ENOMEM;
--
2.43.0
[PATCH OLK-6.6] drm/logicvc: Fix device node reference leak in logicvc_drm_config_parse()
by Jiacheng Yu 20 Apr '26
From: Felix Gu <ustc.gu(a)gmail.com>
stable inclusion
from stable-v6.6.130
commit 0bd326dffd9e103335d77d9c31275c0d5a7979eb
category: bugfix
bugzilla: http://atomgit.com/src-openeuler/kernel/issues/14066
CVE: CVE-2026-23426
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit fef0e649f8b42bdffe4a916dd46e1b1e9ad2f207 ]
The logicvc_drm_config_parse() function calls of_get_child_by_name() to
find the "layers" node but fails to release the reference, leading to a
device node reference leak.
Fix this by using the __free(device_node) cleanup attribute to
automatically release the reference when the variable goes out of scope.
Fixes: efeeaefe9be5 ("drm: Add support for the LogiCVC display controller")
Signed-off-by: Felix Gu <ustc.gu(a)gmail.com>
Reviewed-by: Luca Ceresoli <luca.ceresoli(a)bootlin.com>
Reviewed-by: Kory Maincent <kory.maincent(a)bootlin.com>
Link: https://patch.msgid.link/20260130-logicvc_drm-v1-1-04366463750c@gmail.com
Signed-off-by: Luca Ceresoli <luca.ceresoli(a)bootlin.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Jiacheng Yu <yujiacheng3(a)huawei.com>
---
drivers/gpu/drm/logicvc/logicvc_drm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/logicvc/logicvc_drm.c b/drivers/gpu/drm/logicvc/logicvc_drm.c
index 749debd3d6a57..df74572e6d2ea 100644
--- a/drivers/gpu/drm/logicvc/logicvc_drm.c
+++ b/drivers/gpu/drm/logicvc/logicvc_drm.c
@@ -90,7 +90,6 @@ static int logicvc_drm_config_parse(struct logicvc_drm *logicvc)
struct device *dev = drm_dev->dev;
struct device_node *of_node = dev->of_node;
struct logicvc_drm_config *config = &logicvc->config;
- struct device_node *layers_node;
int ret;
logicvc_of_property_parse_bool(of_node, LOGICVC_OF_PROPERTY_DITHERING,
@@ -126,7 +125,8 @@ static int logicvc_drm_config_parse(struct logicvc_drm *logicvc)
if (ret)
return ret;
- layers_node = of_get_child_by_name(of_node, "layers");
+ struct device_node *layers_node __free(device_node) =
+ of_get_child_by_name(of_node, "layers");
if (!layers_node) {
drm_err(drm_dev, "Missing non-optional layers node\n");
return -EINVAL;
--
2.43.0
[PATCH OLK-6.6 0/2] ice: change XDP RxQ frag_size from DMA write length to xdp.frame_sz
by Pan Taixi 20 Apr '26
Backport "ice: change XDP RxQ frag_size from DMA write length to
xdp.frame_sz" to OLK-6.6.
Also include "xsk: introduce helper to determine rxq->frag_size", which
introduces the helper function xsk_pool_get_rx_frag_step().
Larysa Zaremba (2):
xsk: introduce helper to determine rxq->frag_size
ice: change XDP RxQ frag_size from DMA write length to xdp.frame_sz
drivers/net/ethernet/intel/ice/ice_base.c | 7 ++++---
include/net/xdp_sock_drv.h | 10 ++++++++++
2 files changed, 14 insertions(+), 3 deletions(-)
--
2.34.1
[PATCH OLK-6.6 2/2] ice: change XDP RxQ frag_size from DMA write length to xdp.frame_sz
by Pan Taixi 20 Apr '26
From: Larysa Zaremba <larysa.zaremba(a)intel.com>
mainline inclusion
from mainline-v7.0-rc3
commit e142dc4ef0f451b7ef99d09aaa84e9389af629d7
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14020/
CVE: CVE-2026-23377
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The only user of the frag_size field in XDP RxQ info is
bpf_xdp_frags_increase_tail(). It clearly expects the whole buffer size
instead of the DMA write size. Different assumptions in the ice driver
configuration lead to negative tailroom.
This makes it possible to trigger a kernel panic when running the
XDP_ADJUST_TAIL_GROW_MULTI_BUFF xskxceiver test with the packet size
changed to 6912 and the requested offset set to a huge value, e.g.
XSK_UMEM__MAX_FRAME_SIZE * 100.
Due to other quirks of the ZC configuration in ice, the panic is not
observed in ZC mode, but growing the tailroom still fails when it should
not.
Use the fill queue buffer truesize instead of the DMA write size in XDP
RxQ info. Fix ZC mode too by using the new helper.
Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side")
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov(a)intel.com>
Signed-off-by: Larysa Zaremba <larysa.zaremba(a)intel.com>
Link: https://patch.msgid.link/20260305111253.2317394-5-larysa.zaremba@intel.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Conflicts:
drivers/net/ethernet/intel/ice/ice_base.c
[Adapted calculation of frag size. truesize is introduced into mainline in patch
93f53db9f9dc ("ice: switch to Page Pool"), which is not merged. Use frame_sz as
the truesize of underlying buffer.]
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
drivers/net/ethernet/intel/ice/ice_base.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 9a0682b05c4f..1408089a6549 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -557,7 +557,7 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->q_index,
ring->q_vector->napi.napi_id,
- ring->rx_buf_len);
+ ice_get_frame_sz(ring));
if (err)
return err;
}
@@ -568,10 +568,11 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
ring->rx_buf_len =
xsk_pool_get_rx_frame_size(ring->xsk_pool);
+ u32 frag_size = xsk_pool_get_rx_frag_step(ring->xsk_pool);
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->q_index,
ring->q_vector->napi.napi_id,
- ring->rx_buf_len);
+ frag_size);
if (err)
return err;
err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
@@ -588,7 +589,7 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->q_index,
ring->q_vector->napi.napi_id,
- ring->rx_buf_len);
+ ice_get_frame_sz(ring));
if (err)
return err;
}
--
2.34.1
20 Apr '26
From: Larysa Zaremba <larysa.zaremba(a)intel.com>
stable inclusion
from stable-v6.6.130
commit 183f940bdf9074f4fd7d32badeb0d73c93dc2070
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14020/
CVE: CVE-2026-23377
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 16394d80539937d348dd3b9ea32415c54e67a81b ]
rxq->frag_size is basically the step between consecutive strictly aligned
frames. In ZC mode, the chunk size fits exactly, but if chunks are
unaligned, there is no safe way to determine the accessible space to grow
the tailroom.
Report frag_size as zero if chunks are unaligned, and chunk_size otherwise.
Fixes: 24ea50127ecf ("xsk: support mbuf on ZC RX")
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov(a)intel.com>
Signed-off-by: Larysa Zaremba <larysa.zaremba(a)intel.com>
Link: https://patch.msgid.link/20260305111253.2317394-3-larysa.zaremba@intel.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
include/net/xdp_sock_drv.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 5425f7ad5ebd..e1e46de7a875 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -41,6 +41,11 @@ static inline u32 xsk_pool_get_rx_frame_size(struct xsk_buff_pool *pool)
return xsk_pool_get_chunk_size(pool) - xsk_pool_get_headroom(pool);
}
+static inline u32 xsk_pool_get_rx_frag_step(struct xsk_buff_pool *pool)
+{
+ return pool->unaligned ? 0 : xsk_pool_get_chunk_size(pool);
+}
+
static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
struct xdp_rxq_info *rxq)
{
@@ -263,6 +268,11 @@ static inline u32 xsk_pool_get_rx_frame_size(struct xsk_buff_pool *pool)
return 0;
}
+static inline u32 xsk_pool_get_rx_frag_step(struct xsk_buff_pool *pool)
+{
+ return 0;
+}
+
static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
struct xdp_rxq_info *rxq)
{
--
2.34.1
[PATCH OLK-6.6] mm/numa_remote: set default distance between remote nodes to 255
by Jinjiang Tu 20 Apr '26
hulk inclusion
category: other
bugzilla: https://atomgit.com/openeuler/kernel/issues/8974
----------------------------------------
Set default distance between remote nodes to 255, indicating unreachable.
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
drivers/base/numa_remote.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/base/numa_remote.c b/drivers/base/numa_remote.c
index 19f16d265820..74eb87ac8b26 100644
--- a/drivers/base/numa_remote.c
+++ b/drivers/base/numa_remote.c
@@ -16,8 +16,8 @@
/* The default distance between local node and remote node */
#define REMOTE_TO_LOCAL_DISTANCE 100
-/* The default distance between two remtoe node */
-#define REMOTE_TO_REMOTE_DISTANCE 254
+/* The default distance between two remote node */
+#define REMOTE_TO_REMOTE_DISTANCE 255
bool numa_remote_enabled __ro_after_init;
static bool numa_remote_nofallback_mode __ro_after_init;
--
2.43.0
[PATCH OLK-6.6 v8 0/2] kvm: arm64: Transition from CPU Type to MIDR Register for Virtualization Feature Detection
by liqiqi 20 Apr '26
Currently, there are two methods for determining whether a chip supports
specific virtualization features:
1. Reading the chip's CPU type from BIOS
2. Reading the value of the MIDR register
The issue with the first method is that each time a new chip is introduced,
a new CPU type must be defined, which leads to poor code portability and
maintainability.
Therefore, the second method has been adopted to replace the first. This
approach eliminates the dependency on CPU type by using the MIDR register.
liqiqi (2):
kvm: arm64: Add MIDR definitions and use MIDR to determine whether
features are supported
kvm: arm64: Remove cpu_type definition and it's related interfaces
arch/arm64/include/asm/cputype.h | 4 +
arch/arm64/kvm/arm.c | 1 -
arch/arm64/kvm/hisilicon/hisi_virt.c | 110 +++------------------------
arch/arm64/kvm/hisilicon/hisi_virt.h | 12 ---
4 files changed, 14 insertions(+), 113 deletions(-)
--
2.43.0
[PATCH OLK-5.10] ACPI: EC: clean up handlers on probe failure in acpi_ec_setup()
by Xinyu Zheng 20 Apr '26
From: Weiming Shi <bestswngs(a)gmail.com>
stable inclusion
from stable-v6.6.131
commit 9c886e63b69658959633937e3acb7ca8addf7499
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14159
CVE: CVE-2026-31426
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
When ec_install_handlers() returns -EPROBE_DEFER on reduced-hardware
platforms, it has already started the EC and installed the address
space handler with the struct acpi_ec pointer as handler context.
However, acpi_ec_setup() propagates the error without any cleanup.
The caller acpi_ec_add() then frees the struct acpi_ec for non-boot
instances, leaving a dangling handler context in ACPICA.
Any subsequent AML evaluation that accesses an EC OpRegion field
dispatches into acpi_ec_space_handler() with the freed pointer,
causing a use-after-free:
BUG: KASAN: slab-use-after-free in mutex_lock (kernel/locking/mutex.c:289)
Write of size 8 at addr ffff88800721de38 by task init/1
Call Trace:
<TASK>
mutex_lock (kernel/locking/mutex.c:289)
acpi_ec_space_handler (drivers/acpi/ec.c:1362)
acpi_ev_address_space_dispatch (drivers/acpi/acpica/evregion.c:293)
acpi_ex_access_region (drivers/acpi/acpica/exfldio.c:246)
acpi_ex_field_datum_io (drivers/acpi/acpica/exfldio.c:509)
acpi_ex_extract_from_field (drivers/acpi/acpica/exfldio.c:700)
acpi_ex_read_data_from_field (drivers/acpi/acpica/exfield.c:327)
acpi_ex_resolve_node_to_value (drivers/acpi/acpica/exresolv.c:392)
</TASK>
Allocated by task 1:
acpi_ec_alloc (drivers/acpi/ec.c:1424)
acpi_ec_add (drivers/acpi/ec.c:1692)
Freed by task 1:
kfree (mm/slub.c:6876)
acpi_ec_add (drivers/acpi/ec.c:1751)
The bug triggers on reduced-hardware EC platforms (ec->gpe < 0)
when the GPIO IRQ provider defers probing. Once the stale handler
exists, any unprivileged sysfs read that causes AML to touch an
EC OpRegion (battery, thermal, backlight) exercises the dangling
pointer.
Fix this by calling ec_remove_handlers() in the error path of
acpi_ec_setup() before clearing first_ec. ec_remove_handlers()
checks each EC_FLAGS_* bit before acting, so it is safe to call
regardless of how far ec_install_handlers() progressed:
-ENODEV (handler not installed): only calls acpi_ec_stop()
-EPROBE_DEFER (handler installed): removes handler, stops EC
Fixes: 03e9a0e05739 ("ACPI: EC: Consolidate event handler installation code")
Reported-by: Xiang Mei <xmei5(a)asu.edu>
Signed-off-by: Weiming Shi <bestswngs(a)gmail.com>
Link: https://patch.msgid.link/20260324165458.1337233-2-bestswngs@gmail.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Conflicts:
drivers/acpi/ec.c
[context conflict]
Signed-off-by: Xinyu Zheng <zhengxinyu6(a)huawei.com>
---
drivers/acpi/ec.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index b20206316fbe..79b4dd9b733b 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -1618,8 +1618,10 @@ static int acpi_ec_setup(struct acpi_ec *ec, struct acpi_device *device)
int ret;
ret = ec_install_handlers(ec, device);
- if (ret)
+ if (ret) {
+ ec_remove_handlers(ec);
return ret;
+ }
/* First EC capable of handling transactions */
if (!first_ec)
--
2.34.1
[PATCH OLK-6.6] jfs: add dmapctl integrity check to prevent invalid operations
by Li Lingfeng 20 Apr '26
From: Yun Zhou <yun.zhou(a)windriver.com>
mainline inclusion
from mainline-v7.1-rc1
commit cce219b203c4b9cb445e910c7090d1f58af847c5
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8972
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
Add check_dmapctl() to validate dmapctl structure integrity, focusing on
preventing invalid operations caused by on-disk corruption.
Key checks:
- nleafs bounded by [0, LPERCTL] (maximum leaf nodes per dmapctl).
- l2nleafs bounded by [0, L2LPERCTL] and consistent with nleafs
(nleafs must be 2^l2nleafs).
- leafidx must be exactly CTLLEAFIND (expected leaf index position).
- height bounded by [0, L2LPERCTL >> 1] (valid tree height range).
- budmin validity: NOFREE only if nleafs=0; otherwise >= BUDMIN.
- Leaf nodes fit within stree array (leafidx + nleafs <= CTLTREESIZE).
- Leaf node values are either non-negative or NOFREE.
Invoked in dbAllocAG(), dbFindCtl(), dbAdjCtl() and dbExtendFS() when
accessing dmapctl pages, catching corruption early before dmap operations
trigger invalid memory access or logic errors.
This fixes the following UBSAN warning.
[58245.668090][T14017] ------------[ cut here ]------------
[58245.668103][T14017] UBSAN: shift-out-of-bounds in fs/jfs/jfs_dmap.c:2641:11
[58245.668119][T14017] shift exponent 110 is too large for 32-bit type 'int'
[58245.668137][T14017] CPU: 0 UID: 0 PID: 14017 Comm: 4c1966e88c28fa9 Tainted: G E 6.18.0-rc4-00253-g21ce5d4ba045-dirty #124 PREEMPT_{RT,(full)}
[58245.668174][T14017] Tainted: [E]=UNSIGNED_MODULE
[58245.668176][T14017] Hardware name: QEMU Ubuntu 25.04 PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[58245.668184][T14017] Call Trace:
[58245.668200][T14017] <TASK>
[58245.668208][T14017] dump_stack_lvl+0x189/0x250
[58245.668288][T14017] ? __pfx_dump_stack_lvl+0x10/0x10
[58245.668301][T14017] ? __pfx__printk+0x10/0x10
[58245.668315][T14017] ? lock_metapage+0x303/0x400 [jfs]
[58245.668406][T14017] ubsan_epilogue+0xa/0x40
[58245.668422][T14017] __ubsan_handle_shift_out_of_bounds+0x386/0x410
[58245.668462][T14017] dbSplit+0x1f8/0x200 [jfs]
[58245.668543][T14017] dbAdjCtl+0x34c/0xa20 [jfs]
[58245.668628][T14017] dbAllocNear+0x2ee/0x3d0 [jfs]
[58245.668710][T14017] dbAlloc+0x933/0xba0 [jfs]
[58245.668797][T14017] ea_write+0x374/0xdd0 [jfs]
[58245.668888][T14017] ? __pfx_ea_write+0x10/0x10 [jfs]
[58245.668966][T14017] ? __jfs_setxattr+0x76e/0x1120 [jfs]
[58245.669046][T14017] __jfs_setxattr+0xa01/0x1120 [jfs]
[58245.669135][T14017] ? __pfx___jfs_setxattr+0x10/0x10 [jfs]
[58245.669216][T14017] ? mutex_lock_nested+0x154/0x1d0
[58245.669252][T14017] ? __jfs_xattr_set+0xb9/0x170 [jfs]
[58245.669333][T14017] __jfs_xattr_set+0xda/0x170 [jfs]
[58245.669430][T14017] ? __pfx___jfs_xattr_set+0x10/0x10 [jfs]
[58245.669509][T14017] ? xattr_full_name+0x6f/0x90
[58245.669546][T14017] ? jfs_xattr_set+0x33/0x60 [jfs]
[58245.669636][T14017] ? __pfx_jfs_xattr_set+0x10/0x10 [jfs]
[58245.669726][T14017] __vfs_setxattr+0x43c/0x480
[58245.669743][T14017] __vfs_setxattr_noperm+0x12d/0x660
[58245.669756][T14017] vfs_setxattr+0x16b/0x2f0
[58245.669768][T14017] ? __pfx_vfs_setxattr+0x10/0x10
[58245.669782][T14017] filename_setxattr+0x274/0x600
[58245.669795][T14017] ? __pfx_filename_setxattr+0x10/0x10
[58245.669806][T14017] ? getname_flags+0x1e5/0x540
[58245.669829][T14017] path_setxattrat+0x364/0x3a0
[58245.669840][T14017] ? __pfx_path_setxattrat+0x10/0x10
[58245.669859][T14017] ? __se_sys_chdir+0x1b9/0x280
[58245.669876][T14017] __x64_sys_lsetxattr+0xbf/0xe0
[58245.669888][T14017] do_syscall_64+0xfa/0xfa0
[58245.669901][T14017] ? lockdep_hardirqs_on+0x9c/0x150
[58245.669913][T14017] ? entry_SYSCALL_64_after_hwframe+0x77/0x7f
[58245.669927][T14017] ? exc_page_fault+0xab/0x100
[58245.669937][T14017] entry_SYSCALL_64_after_hwframe+0x77/0x7f
Reported-by: syzbot+4c1966e88c28fa96e053(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=4c1966e88c28fa96e053
Signed-off-by: Yun Zhou <yun.zhou(a)windriver.com>
Signed-off-by: Dave Kleikamp <dave.kleikamp(a)oracle.com>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/jfs/jfs_dmap.c | 114 ++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 111 insertions(+), 3 deletions(-)
diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
index 7c010ef6edb0..3369b951bcc9 100644
--- a/fs/jfs/jfs_dmap.c
+++ b/fs/jfs/jfs_dmap.c
@@ -133,6 +133,93 @@ static const s8 budtab[256] = {
2, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, -1
};
+/*
+ * check_dmapctl - Validate integrity of a dmapctl structure
+ * @dcp: Pointer to the dmapctl structure to check
+ *
+ * Return: true if valid, false if corrupted
+ */
+static bool check_dmapctl(struct dmapctl *dcp)
+{
+ s8 budmin = dcp->budmin;
+ u32 nleafs, l2nleafs, leafidx, height;
+ int i;
+
+ nleafs = le32_to_cpu(dcp->nleafs);
+ /* Check basic field ranges */
+ if (unlikely(nleafs > LPERCTL)) {
+ jfs_err("dmapctl: invalid nleafs %u (max %u)",
+ nleafs, LPERCTL);
+ return false;
+ }
+
+ l2nleafs = le32_to_cpu(dcp->l2nleafs);
+ if (unlikely(l2nleafs > L2LPERCTL)) {
+ jfs_err("dmapctl: invalid l2nleafs %u (max %u)",
+ l2nleafs, L2LPERCTL);
+ return false;
+ }
+
+ /* Verify nleafs matches l2nleafs (must be power of two) */
+ if (unlikely((1U << l2nleafs) != nleafs)) {
+ jfs_err("dmapctl: nleafs %u != 2^%u",
+ nleafs, l2nleafs);
+ return false;
+ }
+
+ leafidx = le32_to_cpu(dcp->leafidx);
+ /* Check leaf index matches expected position */
+ if (unlikely(leafidx != CTLLEAFIND)) {
+ jfs_err("dmapctl: invalid leafidx %u (expected %u)",
+ leafidx, CTLLEAFIND);
+ return false;
+ }
+
+ height = le32_to_cpu(dcp->height);
+ /* Check tree height is within valid range */
+ if (unlikely(height > (L2LPERCTL >> 1))) {
+ jfs_err("dmapctl: invalid height %u (max %u)",
+ height, L2LPERCTL >> 1);
+ return false;
+ }
+
+ /* Check budmin is valid (cannot be NOFREE for non-empty tree) */
+ if (budmin == NOFREE) {
+ if (unlikely(nleafs > 0)) {
+ jfs_err("dmapctl: budmin is NOFREE but nleafs %u",
+ nleafs);
+ return false;
+ }
+ } else if (unlikely(budmin < BUDMIN)) {
+ jfs_err("dmapctl: invalid budmin %d (min %d)",
+ budmin, BUDMIN);
+ return false;
+ }
+
+ /* Check leaf nodes fit within stree array */
+ if (unlikely(leafidx + nleafs > CTLTREESIZE)) {
+ jfs_err("dmapctl: leaf range exceeds stree size (end %u > %u)",
+ leafidx + nleafs, CTLTREESIZE);
+ return false;
+ }
+
+ /* Check leaf nodes have valid values */
+ for (i = leafidx; i < leafidx + nleafs; i++) {
+ s8 val = dcp->stree[i];
+
+ if (unlikely(val < NOFREE)) {
+ jfs_err("dmapctl: invalid leaf value %d at index %d",
+ val, i);
+ return false;
+ } else if (unlikely(val > 31)) {
+ jfs_err("dmapctl: leaf value %d too large at index %d", val, i);
+ return false;
+ }
+ }
+
+ return true;
+}
+
/*
* NAME: dbMount()
*
@@ -1376,7 +1463,7 @@ dbAllocAG(struct bmap * bmp, int agno, s64 nblocks, int l2nb, s64 * results)
dcp = (struct dmapctl *) mp->data;
budmin = dcp->budmin;
- if (dcp->leafidx != cpu_to_le32(CTLLEAFIND)) {
+ if (unlikely(!check_dmapctl(dcp))) {
jfs_error(bmp->db_ipbmap->i_sb, "Corrupt dmapctl page\n");
release_metapage(mp);
return -EIO;
@@ -1706,7 +1793,7 @@ static int dbFindCtl(struct bmap * bmp, int l2nb, int level, s64 * blkno)
dcp = (struct dmapctl *) mp->data;
budmin = dcp->budmin;
- if (dcp->leafidx != cpu_to_le32(CTLLEAFIND)) {
+ if (unlikely(!check_dmapctl(dcp))) {
jfs_error(bmp->db_ipbmap->i_sb,
"Corrupt dmapctl page\n");
release_metapage(mp);
@@ -2489,7 +2576,7 @@ dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
return -EIO;
dcp = (struct dmapctl *) mp->data;
- if (dcp->leafidx != cpu_to_le32(CTLLEAFIND)) {
+ if (unlikely(!check_dmapctl(dcp))) {
jfs_error(bmp->db_ipbmap->i_sb, "Corrupt dmapctl page\n");
release_metapage(mp);
return -EIO;
@@ -3458,6 +3545,11 @@ int dbExtendFS(struct inode *ipbmap, s64 blkno, s64 nblocks)
return -EIO;
}
l2dcp = (struct dmapctl *) l2mp->data;
+ if (unlikely(!check_dmapctl(l2dcp))) {
+ jfs_error(ipbmap->i_sb, "Corrupt dmapctl page\n");
+ release_metapage(l2mp);
+ return -EIO;
+ }
/* compute start L1 */
k = blkno >> L2MAXL1SIZE;
@@ -3475,6 +3567,10 @@ int dbExtendFS(struct inode *ipbmap, s64 blkno, s64 nblocks)
if (l1mp == NULL)
goto errout;
l1dcp = (struct dmapctl *) l1mp->data;
+ if (unlikely(!check_dmapctl(l1dcp))) {
+ jfs_error(ipbmap->i_sb, "Corrupt dmapctl page\n");
+ goto errout;
+ }
/* compute start L0 */
j = (blkno & (MAXL1SIZE - 1)) >> L2MAXL0SIZE;
@@ -3488,6 +3584,10 @@ int dbExtendFS(struct inode *ipbmap, s64 blkno, s64 nblocks)
goto errout;
l1dcp = (struct dmapctl *) l1mp->data;
+ if (unlikely(!check_dmapctl(l1dcp))) {
+ jfs_error(ipbmap->i_sb, "Corrupt dmapctl page\n");
+ goto errout;
+ }
/* compute start L0 */
j = 0;
@@ -3507,6 +3607,10 @@ int dbExtendFS(struct inode *ipbmap, s64 blkno, s64 nblocks)
if (l0mp == NULL)
goto errout;
l0dcp = (struct dmapctl *) l0mp->data;
+ if (unlikely(!check_dmapctl(l0dcp))) {
+ jfs_error(ipbmap->i_sb, "Corrupt dmapctl page\n");
+ goto errout;
+ }
/* compute start dmap */
i = (blkno & (MAXL0SIZE - 1)) >>
@@ -3522,6 +3626,10 @@ int dbExtendFS(struct inode *ipbmap, s64 blkno, s64 nblocks)
goto errout;
l0dcp = (struct dmapctl *) l0mp->data;
+ if (unlikely(!check_dmapctl(l0dcp))) {
+ jfs_error(ipbmap->i_sb, "Corrupt dmapctl page\n");
+ goto errout;
+ }
/* compute start dmap */
i = 0;
--
2.52.0
[PATCH OLK-6.6] ACPI: APEI: GHES: Don't send SIGBUS to kernel threads on ARM HW errors
by Wupeng Ma 20 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8386
------------------------------------------
On arm64, when a Synchronous External Abort (SEA) occurs in kernel
context (e.g., during memory compaction via kcompactd), the error
follows this path:
do_sea()
-> do_apei_claim_sea()
-> apei_claim_sea()
-> ghes_notify_sea()
-> ghes_in_nmi_spool_from_list()
-> irq_work_queue(&ghes_proc_irq_work)
-> ghes_proc_in_irq() [IRQ context]
-> ghes_do_proc()
-> ghes_handle_arm_hw_error()
When ghes_handle_arm_hw_error() returns false (error not recoverable),
ghes_do_proc() sends SIGBUS to the current task via force_sig(SIGBUS).
However, kernel threads (e.g., kcompactd) have current->mm == NULL.
Sending SIGBUS to a kernel thread is meaningless and may cause
unexpected behavior. The SIGBUS signal should only be delivered to
user-space processes.
Fix by adding a check for current->mm before sending the signal.
This ensures that only tasks with a valid userspace memory mapping
receive the SIGBUS signal when a hardware error cannot be recovered.
Fixes: 9c72f69e011e ("arm64: add support for ARCH_HAS_COPY_MC")
Signed-off-by: Wupeng Ma <mawupeng1(a)huawei.com>
---
drivers/acpi/apei/ghes.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 51cd04307ee4..faf521a45dbd 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -818,7 +818,7 @@ static void ghes_do_proc(struct ghes *ghes,
* If no memory failure work is queued for abnormal synchronous
* errors, do a force kill.
*/
- if (sync && !queued) {
+ if (sync && !queued && current->mm) {
pr_err("Sending SIGBUS to current task due to memory error not recovered");
force_sig(SIGBUS);
}
--
2.43.0
FIX CVE-2026-23473
Jens Axboe (2):
io_uring/poll: improve readability of poll reference decrementing
io_uring/poll: fix multishot recv missing EOF on wakeup race
io_uring/poll.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
--
2.39.2
From: Paulo Alcantara <pc(a)manguebit.org>
stable inclusion
from stable-v6.1.167
commit fd4547830720647d4af02ee50f883c4b1cca06e4
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14118
CVE: CVE-2026-31392
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 12b4c5d98cd7ca46d5035a57bcd995df614c14e1 upstream.
A customer reported that some of their krb5 mounts were failing against
a single server because the client was trying to mount the shares with
the wrong credentials. It turned out the client was reusing the SMB
session from the first mount for the other shares, even though a
different username= option had been specified for the other mounts.
Using the username= mount option along with sec=krb5 to search for
principals in the keytab has been supported by cifs.upcall(8) since
cifs-utils-4.8. So fix this by matching the username= mount option in
match_session() even with Kerberos.
For example, the second mount below should fail with -ENOKEY as there
is no 'foobar' principal in keytab (/etc/krb5.keytab). The client
ends up reusing SMB session from first mount to perform the second
one, which is wrong.
```
$ ktutil
ktutil: add_entry -password -p testuser -k 1 -e aes256-cts
Password for testuser(a)ZELDA.TEST:
ktutil: write_kt /etc/krb5.keytab
ktutil: quit
$ klist -ke
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- ----------------------------------------------------------------
1 testuser(a)ZELDA.TEST (aes256-cts-hmac-sha1-96)
$ mount.cifs //w22-root2/scratch /mnt/1 -o sec=krb5,username=testuser
$ mount.cifs //w22-root2/scratch /mnt/2 -o sec=krb5,username=foobar
$ mount -t cifs | grep -Po 'username=\K\w+'
testuser
testuser
```
Reported-by: Oscar Santos <ossantos(a)redhat.com>
Signed-off-by: Paulo Alcantara (Red Hat) <pc(a)manguebit.org>
Cc: David Howells <dhowells(a)redhat.com>
Cc: linux-cifs(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Signed-off-by: Steve French <stfrench(a)microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
fs/cifs/connect.c
fs/smb/client/connect.c
[Mainline has moved cifs to fs/smb/client.]
Signed-off-by: Zizhi Wo <wozizhi(a)huawei.com>
---
fs/cifs/connect.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index 287230b70e51..a69f6f1f31bc 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -2737,10 +2737,14 @@ static int match_session(struct cifs_ses *ses, struct smb_vol *vol)
switch (ses->sectype) {
case Kerberos:
if (!uid_eq(vol->cred_uid, ses->cred_uid))
return 0;
+ if (strncmp(ses->user_name ?: "",
+ vol->username ?: "",
+ CIFS_MAX_USERNAME_LEN))
+ return 0;
break;
default:
/* NULL username means anonymous session */
if (ses->user_name == NULL) {
if (!vol->nullauth)
--
2.39.2
Introduce xSched dmem
Alexander Pavlenko (2):
xSched/dmem: introduce xsched_dmem_alloc()
xSched/dmem: introduce xsched_dmem_free()
Liu Kai (2):
xSched/dmem: add support for XPU devices to register/unregister dmem
regions
xSched/dmem: introduce xsched_dmem_cleanup()
include/linux/cgroup_dmem.h | 2 -
include/linux/xsched.h | 14 ++
include/uapi/linux/xsched/xcu_vstream.h | 9 +
kernel/xsched/Makefile | 1 +
kernel/xsched/core.c | 3 +
kernel/xsched/dmem.c | 263 ++++++++++++++++++++++++
kernel/xsched/vstream.c | 114 ++++++++++
7 files changed, 404 insertions(+), 2 deletions(-)
create mode 100644 kernel/xsched/dmem.c
--
2.34.1
Alan Maguire (16):
kbuild,bpf: Switch to using --btf_features for pahole v1.26 and later
kbuild, bpf: Use test-ge check for v1.25-only pahole
libbpf: Add btf__distill_base() creating split BTF with distilled base
BTF
selftests/bpf: Test distilled base, split BTF generation
libbpf: Split BTF relocation
selftests/bpf: Extend distilled BTF tests to cover BTF relocation
resolve_btfids: Handle presence of .BTF.base section
libbpf: BTF relocation followup fixing naming, loop logic
module, bpf: Store BTF base pointer in struct module
libbpf: Split field iter code into its own file kernel
libbpf,bpf: Share BTF relocate-related code with kernel
kbuild,bpf: Add module-specific pahole flags for distilled base BTF
selftests/bpf: Add kfunc_call test for simple dtor in bpf_testmod
bpf: fix build when CONFIG_DEBUG_INFO_BTF[_MODULES] is undefined
libbpf: Fix error handling in btf__distill_base()
libbpf: Fix license for btf_relocate.c
Alexander Lobakin (1):
bitops: make BYTES_TO_BITS() treewide-available
Alexei Starovoitov (2):
s390/bpf: Fix indirect trampoline generation
bpf: Introduce "volatile compare" macros
Andrii Nakryiko (13):
bpf: Emit global subprog name in verifier logs
bpf: Validate global subprogs lazily
selftests/bpf: Add lazy global subprog validation tests
libbpf: Add btf__new_split() API that was declared but not implemented
bpf: move sleepable flag from bpf_prog_aux to bpf_prog
libbpf: Add BTF field iterator
libbpf: Make use of BTF field iterator in BPF linker code
libbpf: Make use of BTF field iterator in BTF handling code
bpftool: Use BTF field iterator in btfgen
libbpf: Remove callback-based type/string BTF field visitor helpers
bpf: extract iterator argument type and name validation logic
bpf: allow passing struct bpf_iter_<type> as kfunc arguments
selftests/bpf: test passing iterator to a kfunc
Benjamin Tissoires (1):
bpf: introduce in_sleepable() helper
Christophe Leroy (2):
bpf: Remove arch_unprotect_bpf_trampoline()
bpf: Check return from set_memory_rox()
Chuyi Zhou (13):
cgroup: Prepare for using css_task_iter_*() in BPF
bpf: Introduce css_task open-coded iterator kfuncs
bpf: Introduce task open coded iterator kfuncs
bpf: Introduce css open-coded iterator kfuncs
bpf: teach the verifier to enforce css_iter and task_iter in RCU CS
bpf: Let bpf_iter_task_new accept null task ptr
selftests/bpf: rename bpf_iter_task.c to bpf_iter_tasks.c
selftests/bpf: Add tests for open-coded task and css iter
bpf: Relax allowlist for css_task iter
selftests/bpf: Add tests for css_task iter combining with cgroup iter
selftests/bpf: Add test for using css_task iter in sleepable progs
bpf: Let verifier consider {task,cgroup} is trusted in bpf_iter_reg
selftests/bpf: get trusted cgrp from bpf_iter__cgroup directly
Daniel Xu (3):
bpf: btf: Support flags for BTF_SET8 sets
bpf: btf: Add BTF_KFUNCS_START/END macro pair
bpf: treewide: Annotate BPF kfuncs in BTF
Dave Marchevsky (6):
bpf: Don't explicitly emit BTF for struct btf_iter_num
selftests/bpf: Rename bpf_iter_task_vma.c to bpf_iter_task_vmas.c
bpf: Introduce task_vma open-coded iterator kfuncs
selftests/bpf: Add tests for open-coded task_vma iter
bpf: Add __bpf_kfunc_{start,end}_defs macros
bpf: Add __bpf_hook_{start,end} macros
David Vernet (2):
bpf: Add ability to pin bpf timer to calling CPU
selftests/bpf: Test pinning bpf timer to a core
Eduard Zingerman (2):
libbpf: Make btf_parse_elf process .BTF.base transparently
selftests/bpf: Check if distilled base inherits source endianness
Geliang Tang (1):
bpf, btf: Check btf for register_bpf_struct_ops
Hou Tao (6):
bpf: Free dynamically allocated bits in bpf_iter_bits_destroy()
bpf: Add bpf_mem_alloc_check_size() helper
bpf: Check the validity of nr_words in bpf_iter_bits_new()
bpf: Use __u64 to save the bits in bits iterator
selftests/bpf: Add three test cases for bits_iter
selftests/bpf: Use -4095 as the bad address for bits iterator
Kui-Feng Lee (29):
bpf: refactory struct_ops type initialization to a function.
bpf: get type information with BTF_ID_LIST
bpf, net: introduce bpf_struct_ops_desc.
bpf: add struct_ops_tab to btf.
bpf: make struct_ops_map support btfs other than btf_vmlinux.
bpf: pass btf object id in bpf_map_info.
bpf: lookup struct_ops types from a given module BTF.
bpf: pass attached BTF to the bpf_struct_ops subsystem
bpf: hold module refcnt in bpf_struct_ops map creation and prog
verification.
bpf: validate value_type
bpf, net: switch to dynamic registration
libbpf: Find correct module BTFs for struct_ops maps and progs.
bpf: export btf_ctx_access to modules.
selftests/bpf: test case for register_bpf_struct_ops().
bpf: Fix error checks against bpf_get_btf_vmlinux().
bpf: Remove an unnecessary check.
selftests/bpf: Suppress warning message of an unused variable.
bpf: add btf pointer to struct bpf_ctx_arg_aux.
bpf: Move __kfunc_param_match_suffix() to btf.c.
bpf: Create argument information for nullable arguments.
selftests/bpf: Test PTR_MAYBE_NULL arguments of struct_ops operators.
libbpf: Set btf_value_type_id of struct bpf_map for struct_ops.
libbpf: Convert st_ops->data to shadow type.
bpftool: Generated shadow variables for struct_ops maps.
bpftool: Add an example for struct_ops map and shadow type.
selftests/bpf: Test if shadow types work correctly.
bpf, net: validate struct_ops when updating value.
bpf: struct_ops supports more than one page for trampolines.
selftests/bpf: Test struct_ops maps with a large number of struct_ops
program.
Kumar Kartikeya Dwivedi (4):
bpf: Allow calling static subprogs while holding a bpf_spin_lock
selftests/bpf: Add test for static subprog call in lock cs
bpf: Transfer RCU lock state between subprog calls
selftests/bpf: Add tests for RCU lock transfer between subprogs
Luo Gengkun (7):
bpf: Fix kabi-breakage for bpf_func_info_aux
bpf: Fix kabi-breakage for bpf_tramp_image
bpf: Fix kabi for bpf_attr
bpf_verifier: Fix kabi for bpf_verifier_env
bpf: Fix kabi for bpf_ctx_arg_aux
bpf: Fix kabi for bpf_prog_aux and bpf_prog
selftests/bpf: modify test_loader that didn't support running
bpf_prog_type_syscall programs
Martin KaFai Lau (4):
libbpf: Ensure undefined bpf_attr field stays 0
bpf: Remove unnecessary err < 0 check in
bpf_struct_ops_map_update_elem
bpf: Fix a crash when btf_parse_base() returns an error pointer
bpf: Reject struct_ops registration that uses module ptr and the
module btf_id is missing
Masahiro Yamada (1):
kbuild: avoid too many execution of scripts/pahole-flags.sh
Matthieu Baerts (1):
bpf: fix compilation error without CGROUPS
Peter Zijlstra (6):
cfi: Flip headers
x86/cfi,bpf: Fix BPF JIT call
x86/cfi,bpf: Fix bpf_callback_t CFI
x86/cfi,bpf: Fix bpf_struct_ops CFI
cfi: Add CFI_NOSEAL()
bpf: Fix dtor CFI
Pu Lehui (8):
riscv, bpf: Fix unpredictable kernel crash about RV64 struct_ops
bpf: Fix kabi breakage in struct module
riscv, bpf: Fix out-of-bounds issue when preparing trampoline image
selftests/bpf: Fix btf leak on new btf alloc failure in btf_distill
test
libbpf: Fix return zero when elf_begin failed
libbpf: Fix incorrect traversal end type ID when marking
BTF_IS_EMBEDDED
selftests/bpf: Add distilled BTF test about marking BTF_IS_EMBEDDED
selftests/bpf: Add file_read_pattern to gitignore
Song Liu (8):
bpf: Charge modmem for struct_ops trampoline
bpf: Let bpf_prog_pack_free handle any pointer
bpf: Adjust argument names of arch_prepare_bpf_trampoline()
bpf: Add helpers for trampoline image management
bpf, x86: Adjust arch_prepare_bpf_trampoline return value
bpf: Add arch_bpf_trampoline_size()
bpf: Use arch_bpf_trampoline_size
x86, bpf: Use bpf_prog_pack for bpf trampoline
T.J. Mercier (1):
bpf, docs: Fix broken link to renamed bpf_iter_task_vmas.c
Tony Ambardar (1):
libbpf: Ensure new BTF objects inherit input endianness
Yafang Shao (2):
bpf: Add bits iterator
selftests/bpf: Add selftest for bits iter
Documentation/bpf/bpf_iterators.rst | 2 +-
Documentation/bpf/kfuncs.rst | 14 +-
MAINTAINERS | 2 +-
Makefile | 4 +-
arch/arm64/kernel/bpf-rvi.c | 4 +-
arch/arm64/net/bpf_jit_comp.c | 55 +-
arch/riscv/include/asm/cfi.h | 3 +-
arch/riscv/kernel/cfi.c | 2 +-
arch/riscv/net/bpf_jit_comp64.c | 48 +-
arch/s390/net/bpf_jit_comp.c | 59 +-
arch/x86/include/asm/cfi.h | 126 ++-
arch/x86/kernel/alternative.c | 87 +-
arch/x86/kernel/cfi.c | 4 +-
arch/x86/net/bpf_jit_comp.c | 261 ++++--
block/blk-cgroup.c | 4 +-
drivers/hid/bpf/hid_bpf_dispatch.c | 12 +-
fs/proc/stat.c | 4 +-
include/asm-generic/Kbuild | 1 +
include/asm-generic/cfi.h | 5 +
include/linux/bitops.h | 2 +
include/linux/bpf.h | 130 ++-
include/linux/bpf_mem_alloc.h | 3 +
include/linux/bpf_verifier.h | 21 +-
include/linux/btf.h | 105 +++
include/linux/btf_ids.h | 21 +-
include/linux/cfi.h | 12 +
include/linux/cgroup.h | 12 +-
include/linux/filter.h | 2 +-
include/linux/module.h | 5 +
include/uapi/linux/bpf.h | 16 +-
kernel/bpf-rvi/common_kfuncs.c | 4 +-
kernel/bpf/Makefile | 8 +-
kernel/bpf/bpf_iter.c | 12 +-
kernel/bpf/bpf_struct_ops.c | 748 ++++++++++++------
kernel/bpf/bpf_struct_ops_types.h | 12 -
kernel/bpf/btf.c | 431 ++++++++--
kernel/bpf/cgroup_iter.c | 65 +-
kernel/bpf/core.c | 76 +-
kernel/bpf/cpumask.c | 18 +-
kernel/bpf/dispatcher.c | 7 +-
kernel/bpf/helpers.c | 202 ++++-
kernel/bpf/map_iter.c | 10 +-
kernel/bpf/memalloc.c | 14 +-
kernel/bpf/syscall.c | 12 +-
kernel/bpf/task_iter.c | 242 +++++-
kernel/bpf/trampoline.c | 99 ++-
kernel/bpf/verifier.c | 317 ++++++--
kernel/cgroup/cgroup.c | 18 +-
kernel/cgroup/cpuset.c | 4 +-
kernel/cgroup/rstat.c | 13 +-
kernel/events/core.c | 2 +-
kernel/module/main.c | 5 +-
kernel/sched/bpf_sched.c | 8 +-
kernel/sched/cpuacct.c | 4 +-
kernel/trace/bpf_trace.c | 12 +-
kernel/trace/trace_probe.c | 2 -
net/bpf/bpf_dummy_struct_ops.c | 72 +-
net/bpf/test_run.c | 30 +-
net/core/filter.c | 33 +-
net/core/xdp.c | 10 +-
net/ipv4/bpf_tcp_ca.c | 93 ++-
net/ipv4/fou_bpf.c | 10 +-
net/ipv4/tcp_bbr.c | 4 +-
net/ipv4/tcp_cong.c | 6 +-
net/ipv4/tcp_cubic.c | 4 +-
net/ipv4/tcp_dctcp.c | 4 +-
net/netfilter/nf_conntrack_bpf.c | 10 +-
net/netfilter/nf_nat_bpf.c | 10 +-
net/socket.c | 8 +-
net/xfrm/xfrm_interface_bpf.c | 10 +-
scripts/Makefile.btf | 33 +
scripts/Makefile.modfinal | 2 +-
scripts/pahole-flags.sh | 30 -
.../bpf/bpftool/Documentation/bpftool-gen.rst | 58 +-
tools/bpf/bpftool/gen.c | 253 +++++-
tools/bpf/resolve_btfids/main.c | 8 +
tools/include/linux/bitops.h | 2 +
tools/include/uapi/linux/bpf.h | 14 +-
tools/lib/bpf/Build | 2 +-
tools/lib/bpf/bpf.c | 4 +-
tools/lib/bpf/bpf.h | 4 +-
tools/lib/bpf/btf.c | 704 ++++++++++++-----
tools/lib/bpf/btf.h | 36 +
tools/lib/bpf/btf_iter.c | 177 +++++
tools/lib/bpf/btf_relocate.c | 519 ++++++++++++
tools/lib/bpf/libbpf.c | 86 +-
tools/lib/bpf/libbpf.map | 4 +-
tools/lib/bpf/libbpf_internal.h | 29 +-
tools/lib/bpf/libbpf_probes.c | 1 +
tools/lib/bpf/linker.c | 58 +-
tools/perf/util/probe-finder.c | 4 +-
tools/testing/selftests/bpf/.gitignore | 1 +
.../testing/selftests/bpf/bpf_experimental.h | 96 +++
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 160 +++-
.../selftests/bpf/bpf_testmod/bpf_testmod.h | 61 ++
.../bpf/bpf_testmod/bpf_testmod_kfunc.h | 9 +
.../selftests/bpf/prog_tests/bpf_iter.c | 44 +-
.../selftests/bpf/prog_tests/btf_distill.c | 692 ++++++++++++++++
.../selftests/bpf/prog_tests/cgroup_iter.c | 33 +
.../testing/selftests/bpf/prog_tests/iters.c | 209 +++++
.../selftests/bpf/prog_tests/kfunc_call.c | 1 +
.../selftests/bpf/prog_tests/rcu_read_lock.c | 6 +
.../selftests/bpf/prog_tests/spin_lock.c | 2 +
.../prog_tests/test_struct_ops_maybe_null.c | 46 ++
.../bpf/prog_tests/test_struct_ops_module.c | 86 ++
.../prog_tests/test_struct_ops_multi_pages.c | 30 +
.../testing/selftests/bpf/prog_tests/timer.c | 4 +
.../selftests/bpf/prog_tests/verifier.c | 4 +
...f_iter_task_vma.c => bpf_iter_task_vmas.c} | 0
.../{bpf_iter_task.c => bpf_iter_tasks.c} | 0
tools/testing/selftests/bpf/progs/iters_css.c | 72 ++
.../selftests/bpf/progs/iters_css_task.c | 102 +++
.../testing/selftests/bpf/progs/iters_task.c | 41 +
.../selftests/bpf/progs/iters_task_failure.c | 105 +++
.../selftests/bpf/progs/iters_task_vma.c | 43 +
.../selftests/bpf/progs/iters_testmod_seq.c | 50 ++
.../selftests/bpf/progs/kfunc_call_test.c | 37 +
.../selftests/bpf/progs/rcu_read_lock.c | 120 +++
.../bpf/progs/struct_ops_maybe_null.c | 29 +
.../bpf/progs/struct_ops_maybe_null_fail.c | 24 +
.../selftests/bpf/progs/struct_ops_module.c | 37 +
.../bpf/progs/struct_ops_multi_pages.c | 102 +++
.../selftests/bpf/progs/test_global_func12.c | 4 +-
.../selftests/bpf/progs/test_spin_lock.c | 65 ++
.../selftests/bpf/progs/test_spin_lock_fail.c | 44 ++
tools/testing/selftests/bpf/progs/timer.c | 63 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 232 ++++++
.../bpf/progs/verifier_global_subprogs.c | 92 +++
.../selftests/bpf/progs/verifier_spin_lock.c | 2 +-
.../bpf/progs/verifier_subprog_precision.c | 4 +-
tools/testing/selftests/bpf/test_loader.c | 10 +-
131 files changed, 7326 insertions(+), 1139 deletions(-)
create mode 100644 include/asm-generic/cfi.h
delete mode 100644 kernel/bpf/bpf_struct_ops_types.h
create mode 100644 scripts/Makefile.btf
delete mode 100755 scripts/pahole-flags.sh
create mode 100644 tools/lib/bpf/btf_iter.c
create mode 100644 tools/lib/bpf/btf_relocate.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/btf_distill.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_multi_pages.c
rename tools/testing/selftests/bpf/progs/{bpf_iter_task_vma.c => bpf_iter_task_vmas.c} (100%)
rename tools/testing/selftests/bpf/progs/{bpf_iter_task.c => bpf_iter_tasks.c} (100%)
create mode 100644 tools/testing/selftests/bpf/progs/iters_css.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_css_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_failure.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_vma.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_multi_pages.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_bits_iter.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
--
2.34.1
This patch set adds support for ext4 hardware atomic writes.
Alan Adamson (1):
nvme: Atomic write support
John Garry (10):
block: Pass blk_queue_get_max_sectors() a request pointer
block: Generalize chunk_sectors support as boundary support
block: Add core atomic write support
block: Call blkdev_dio_unaligned() from blkdev_direct_IO()
block: Add fops atomic write support
block/fs: Pass an iocb to generic_atomic_write_valid()
fs/block: Check for IOCB_DIRECT in generic_atomic_write_valid()
block: Add bdev atomic write limits helpers
fs: Export generic_atomic_write_valid()
fs: iomap: Atomic write support
Long Li (3):
block: fix kabi broken in struct queue_limits
fs: fix kabi broken in struct kstat
block: fix kabi broken in enum req_flag_bits
Prasad Singamsetty (2):
fs: Initial atomic write support
fs: Add initial atomic write support info to statx
Ritesh Harjani (IBM) (12):
ext4: Add statx support for atomic writes
ext4: Check for atomic writes support in write iter
ext4: Support setting FMODE_CAN_ATOMIC_WRITE
ext4: Do not fallback to buffered-io for DIO atomic write
ext4: Document an edge case for overwrites
ext4: Check if inode uses extents in ext4_inode_can_atomic_write()
ext4: Make ext4_meta_trans_blocks() non-static for later use
ext4: Add support for EXT4_GET_BLOCKS_QUERY_LEAF_BLOCKS
ext4: Add multi-fsblock atomic write support with bigalloc
ext4: Enable support for ext4 multi-fsblock atomic write using
bigalloc
ext4: Add atomic block write documentation
iomap: Lift blocksize restriction on atomic writes
Documentation/ABI/stable/sysfs-block | 53 +++
.../filesystems/ext4/atomic_writes.rst | 225 ++++++++++++
Documentation/filesystems/ext4/overview.rst | 1 +
block/blk-core.c | 19 +
block/blk-merge.c | 67 +++-
block/blk-mq.c | 2 +-
block/blk-settings.c | 87 +++++
block/blk-sysfs.c | 33 ++
block/blk.h | 9 +-
block/fops.c | 51 ++-
drivers/md/dm.c | 2 +-
drivers/nvme/host/core.c | 54 ++-
fs/aio.c | 8 +-
fs/btrfs/ioctl.c | 2 +-
fs/ext4/ext4.h | 35 +-
fs/ext4/extents.c | 99 +++++
fs/ext4/file.c | 31 +-
fs/ext4/inode.c | 346 ++++++++++++++++--
fs/ext4/super.c | 34 ++
fs/iomap/direct-io.c | 38 +-
fs/iomap/trace.h | 3 +-
fs/read_write.c | 22 +-
fs/stat.c | 34 ++
include/linux/blk_types.h | 8 +-
include/linux/blkdev.h | 84 ++++-
include/linux/fs.h | 18 +-
include/linux/iomap.h | 1 +
include/linux/stat.h | 5 +-
include/uapi/linux/fs.h | 5 +-
include/uapi/linux/stat.h | 11 +-
io_uring/rw.c | 8 +-
31 files changed, 1308 insertions(+), 87 deletions(-)
create mode 100644 Documentation/filesystems/ext4/atomic_writes.rst
--
2.39.2
[PATCH v3 OLK-6.6 0/5] arm64/ras: Firmware-first SEI error handling with ESB synchronization
by Wupeng Ma 15 Apr '26
This series introduces firmware-first RAS (Reliability, Availability, and
Serviceability) error handling for ARM64 SEI (Synchronous External Interrupt)
errors, with support for ESB (Error Synchronization Barrier) synchronization.
The implementation includes:
- Boot parameter to enable/disable ESB synchronization for SEI
- Entry code patching to use ESB instruction for SEI handling
- Vendor-specific SEI error handling via APEI/GHES
- Sysctl interface for runtime control of vendor SEI handling
Change log since v2:
- Fix memory leaks in hisi_sei_kill_task if task_work_add fails
Change log since v1:
- Fix memory leaks in err_pool_alloc and err_pa=0 paths in hisi_sei_kill_task
Liao Chang (1):
arm64/entry: Add support to synchronize SEI at the exception boundary
Wupeng Ma (3):
ACPI/APEI/arm64: add vendor SEI handling for firmware-first RAS
ACPI: APEI: add runtime switch for HiSilicon vendor SEI handling
arm64: openeuler_defconfig: enable CONFIG_ARM64_SYNC_SEI by default
Zheng Chuan (1):
arm64: Add boot parameter to control ESB for SEI synchronization
.../admin-guide/kernel-parameters.txt | 8 +
Documentation/admin-guide/sysctl/kernel.rst | 27 +++
arch/arm64/Kconfig | 14 ++
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/include/asm/acpi.h | 2 +
arch/arm64/include/asm/setup.h | 9 +
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/arm64_sync_sei.c | 45 +++++
arch/arm64/kernel/entry.S | 44 ++++
arch/arm64/kernel/traps.c | 15 ++
arch/arm64/kernel/xcall/entry.S | 2 +
drivers/acpi/apei/apei-internal.h | 2 +
drivers/acpi/apei/ghes-vendor-info.c | 191 +++++++++++++++++-
drivers/acpi/apei/ghes.c | 5 +
14 files changed, 365 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/arm64_sync_sei.c
--
2.43.0
Hello!
Kernel invites you to a WeLink meeting (auto-recorded) to be held on 2026-04-17 14:00.
Subject: openEuler Kernel SIG biweekly meeting
Organizer: LiaoTao_Wave
Agenda:
1. Progress updates
2. Topic collection is open; new topics can be added directly to the meeting-minutes board.
Meeting link: https://meeting.huaweicloud.com:36443/#/j/964248867
Minutes & sign-in: https://etherpad.openeuler.org/p/Kernel-meetings
More information: https://www.openeuler.org/en/
[PATCH v2 OLK-6.6 0/5] arm64/ras: Firmware-first SEI error handling with ESB synchronization
by Wupeng Ma 15 Apr '26
This series introduces firmware-first RAS (Reliability, Availability, and
Serviceability) error handling for ARM64 SEI (Synchronous External Interrupt)
errors, with support for ESB (Error Synchronization Barrier) synchronization.
The implementation includes:
- Boot parameter to enable/disable ESB synchronization for SEI
- Entry code patching to use ESB instruction for SEI handling
- Vendor-specific SEI error handling via APEI/GHES
- Sysctl interface for runtime control of vendor SEI handling
Change log since v1:
- Fix memory leaks in err_pool_alloc and err_pa=0 paths in hisi_sei_kill_task
Liao Chang (1):
arm64/entry: Add support to synchronize SEI at the exception boundary
Wupeng Ma (3):
ACPI/APEI/arm64: add vendor SEI handling for firmware-first RAS
ACPI: APEI: add runtime switch for HiSilicon vendor SEI handling
arm64: openeuler_defconfig: enable CONFIG_ARM64_SYNC_SEI by default
Zheng Chuan (1):
arm64: Add boot parameter to control ESB for SEI synchronization
.../admin-guide/kernel-parameters.txt | 8 +
Documentation/admin-guide/sysctl/kernel.rst | 27 +++
arch/arm64/Kconfig | 14 ++
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/include/asm/acpi.h | 2 +
arch/arm64/include/asm/setup.h | 9 +
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/arm64_sync_sei.c | 45 +++++
arch/arm64/kernel/entry.S | 44 +++++
arch/arm64/kernel/traps.c | 15 ++
arch/arm64/kernel/xcall/entry.S | 2 +
drivers/acpi/apei/apei-internal.h | 2 +
drivers/acpi/apei/ghes-vendor-info.c | 187 +++++++++++++++++-
drivers/acpi/apei/ghes.c | 5 +
14 files changed, 361 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/arm64_sync_sei.c
--
2.43.0
Luo Gengkun (1):
perf: Fix Wdeclaration-after-statement
Peter Zijlstra (1):
perf: Fix __perf_event_overflow() vs perf_remove_from_context() race
kernel/events/core.c | 57 ++++++++++++++++++++++++++++++++++++++++----
1 file changed, 52 insertions(+), 5 deletions(-)
--
2.34.1
14 Apr '26
From: Jeff Layton <jlayton(a)kernel.org>
mainline inclusion
from mainline-v7.0-rc5
commit 5133b61aaf437e5f25b1b396b14242a6bb0508e2
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14128
CVE: CVE-2026-31402
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The NFSv4.0 replay cache uses a fixed 112-byte inline buffer
(rp_ibuf[NFSD4_REPLAY_ISIZE]) to store encoded operation responses.
This size was calculated based on OPEN responses and does not account
for LOCK denied responses, which include the conflicting lock owner as
a variable-length field up to 1024 bytes (NFS4_OPAQUE_LIMIT).
When a LOCK operation is denied due to a conflict with an existing lock
that has a large owner, nfsd4_encode_operation() copies the full encoded
response into the undersized replay buffer via read_bytes_from_xdr_buf()
with no bounds check. This results in a slab-out-of-bounds write of up
to 944 bytes past the end of the buffer, corrupting adjacent heap memory.
This can be triggered remotely by an unauthenticated attacker with two
cooperating NFSv4.0 clients: one sets a lock with a large owner string,
then the other requests a conflicting lock to provoke the denial.
We could fix this by increasing NFSD4_REPLAY_ISIZE to allow for a full
opaque, but that would increase the size of every stateowner, when most
lockowners are not that large.
Instead, fix this by checking the encoded response length against
NFSD4_REPLAY_ISIZE before copying into the replay buffer. If the
response is too large, set rp_buflen to 0 to skip caching the replay
payload. The status is still cached, and the client already received the
correct response on the original request.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable(a)kernel.org
Reported-by: Nicholas Carlini <npc(a)anthropic.com>
Tested-by: Nicholas Carlini <npc(a)anthropic.com>
Signed-off-by: Jeff Layton <jlayton(a)kernel.org>
Signed-off-by: Chuck Lever <chuck.lever(a)oracle.com>
Conflicts:
fs/nfsd/nfs4xdr.c
[Commit 438f81e0e92a ("nfsd: move error choice for incorrect object types
to version-specific code.") adds the mapping for status.]
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/nfsd/nfs4xdr.c | 9 +++++++--
fs/nfsd/state.h | 17 ++++++++++++-----
2 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 821c0ad5baa0..b8976c2f17fe 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -5233,9 +5233,14 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
int len = xdr->buf->len - post_err_offset;
so->so_replay.rp_status = op->status;
- so->so_replay.rp_buflen = len;
- read_bytes_from_xdr_buf(xdr->buf, post_err_offset,
+ if (len <= NFSD4_REPLAY_ISIZE) {
+ so->so_replay.rp_buflen = len;
+ read_bytes_from_xdr_buf(xdr->buf,
+ post_err_offset,
so->so_replay.rp_buf, len);
+ } else {
+ so->so_replay.rp_buflen = 0;
+ }
}
status:
/* Note that op->status is already in network byte order: */
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 0e4f98b41364..f398effabeef 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -395,11 +395,18 @@ struct nfs4_client_reclaim {
struct xdr_netobj cr_princhash;
};
-/* A reasonable value for REPLAY_ISIZE was estimated as follows:
- * The OPEN response, typically the largest, requires
- * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) + 8(verifier) +
- * 4(deleg. type) + 8(deleg. stateid) + 4(deleg. recall flag) +
- * 20(deleg. space limit) + ~32(deleg. ace) = 112 bytes
+/*
+ * REPLAY_ISIZE is sized for an OPEN response with delegation:
+ * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) +
+ * 8(verifier) + 4(deleg. type) + 8(deleg. stateid) +
+ * 4(deleg. recall flag) + 20(deleg. space limit) +
+ * ~32(deleg. ace) = 112 bytes
+ *
+ * Some responses can exceed this. A LOCK denial includes the conflicting
+ * lock owner, which can be up to 1024 bytes (NFS4_OPAQUE_LIMIT). Responses
+ * larger than REPLAY_ISIZE are not cached in rp_ibuf; only rp_status is
+ * saved. Enlarging this constant increases the size of every
+ * nfs4_stateowner.
*/
#define NFSD4_REPLAY_ISIZE 112
--
2.52.0
Luo Gengkun (1):
perf: Fix Wdeclaration-after-statement
Peter Zijlstra (1):
perf: Fix __perf_event_overflow() vs perf_remove_from_context() race
kernel/events/core.c | 57 ++++++++++++++++++++++++++++++++++++++++----
1 file changed, 52 insertions(+), 5 deletions(-)
--
2.34.1
[PATCH OLK-6.6 0/5] arm64/ras: Firmware-first SEI error handling with ESB synchronization
by Wupeng Ma 14 Apr '26
This series introduces firmware-first RAS (Reliability, Availability, and
Serviceability) error handling for ARM64 SEI (Synchronous External Interrupt)
errors, with support for ESB (Error Synchronization Barrier) synchronization.
The implementation includes:
- Boot parameter to enable/disable ESB synchronization for SEI
- Entry code patching to use ESB instruction for SEI handling
- Vendor-specific SEI error handling via APEI/GHES
- Sysctl interface for runtime control of vendor SEI handling
Liao Chang (1):
arm64/entry: Add support to synchronize SEI at the exception boundary
Wupeng Ma (3):
ACPI/APEI/arm64: add vendor SEI handling for firmware-first RAS
ACPI: APEI: add runtime switch for HiSilicon vendor SEI handling
arm64: openeuler_defconfig: enable CONFIG_ARM64_SYNC_SEI by default
Zheng Chuan (1):
arm64: Add boot parameter to control ESB for SEI synchronization
.../admin-guide/kernel-parameters.txt | 8 +
Documentation/admin-guide/sysctl/kernel.rst | 27 +++
arch/arm64/Kconfig | 14 ++
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/include/asm/acpi.h | 2 +
arch/arm64/include/asm/setup.h | 9 +
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/arm64_sync_sei.c | 45 +++++
arch/arm64/kernel/entry.S | 44 +++++
arch/arm64/kernel/traps.c | 15 ++
arch/arm64/kernel/xcall/entry.S | 2 +
drivers/acpi/apei/apei-internal.h | 2 +
drivers/acpi/apei/ghes-vendor-info.c | 186 +++++++++++++++++-
drivers/acpi/apei/ghes.c | 5 +
14 files changed, 360 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/arm64_sync_sei.c
--
2.43.0
Introduce xSched dmem
Alexander Pavlenko (2):
xSched/dmem: introduce xsched_dmem_alloc()
xSched/dmem: introduce xsched_dmem_free()
Liu Kai (2):
xSched/dmem: add support for XPU devices to register/unregister dmem
regions
xSched/dmem: introduce xsched_dmem_cleanup()
include/linux/cgroup_dmem.h | 2 -
include/linux/xsched.h | 14 ++
include/uapi/linux/xsched/xcu_vstream.h | 9 +
kernel/xsched/Makefile | 1 +
kernel/xsched/core.c | 3 +
kernel/xsched/dmem.c | 226 ++++++++++++++++++++++++
kernel/xsched/vstream.c | 99 +++++++++++
7 files changed, 352 insertions(+), 2 deletions(-)
create mode 100644 kernel/xsched/dmem.c
--
2.34.1
[PATCH OLK-6.6] fs: aio: set VMA_DONTCOPY_BIT in mmap to fix NULL-pointer-dereference error
by Zizhi Wo 14 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8926
--------------------------------
[BUG]
Recently, our internal syzkaller testing uncovered a null pointer
dereference issue:
BUG: kernel NULL pointer dereference, address: 0000000000000000
...
[ 51.111664] filemap_read_folio+0x25/0xe0
[ 51.112410] filemap_fault+0xad7/0x1250
[ 51.113112] __do_fault+0x4b/0x460
[ 51.113699] do_pte_missing+0x5bc/0x1db0
[ 51.114250] ? __pte_offset_map+0x23/0x170
[ 51.114822] __handle_mm_fault+0x9f8/0x1680
[ 51.115408] handle_mm_fault+0x24c/0x570
[ 51.115958] do_user_addr_fault+0x226/0xa50
...
Crash analysis showed the file involved was an AIO ring file.
[CAUSE]
      PARENT process                    CHILD process
t=0   io_setup(1, &ctx)
      [access ctx addr]
      fork()
      io_destroy
        vm_munmap // does not affect the child's vma
        percpu_ref_put
          ...
          put_aio_ring_file
t=1                                     [access ctx addr] // pagefault
                                          ...
                                          __do_fault
                                            filemap_fault
                                              max_idx = DIV_ROUND_UP
t=2         truncate_setsize
            truncate_pagecache
t=3                                           filemap_get_folio // no folio
                                              __filemap_get_folio(FGP_CREAT, ...)
                                              filemap_read_folio(xxx->read_folio)
At t=0, the parent process calls io_setup() and touches the ctx address,
then calls fork(). The child process gets its own VMA but without any
PTEs. The parent then calls io_destroy(). Before i_size is truncated to
0, at t=1 the child accesses the AIO ctx address and triggers a page
fault. After the max_idx check passes, at t=2 the parent calls
truncate_setsize() and truncate_pagecache(). At t=3 the child fails to
obtain the folio, falls into the "page_not_uptodate" path, and hits the
NULL pointer dereference because AIO does not implement ->read_folio().
[Fix]
Fix this by marking the AIO ring buffer VMA with VM_DONTCOPY so that
fork()'s dup_mmap() skips it entirely. This is the correct semantics
because:
1) The child's ioctx_table is already reset to NULL by mm_init_aio() during
fork(), so the child has no AIO context and no way to perform any AIO
operations on this mapping.
2) The AIO ring VMA is only meaningful in conjunction with its associated
kioctx, which is never inherited across fork(). So a child process with no
AIO context has no legitimate reason to access the ring buffer. Delivering
SIGSEGV on such an erroneous access is preferable to a kernel crash.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Zizhi Wo <wozizhi(a)huawei.com>
---
fs/aio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/aio.c b/fs/aio.c
index c3614193d749..8b27c02db8a6 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -392,11 +392,11 @@ static const struct vm_operations_struct aio_ring_vm_ops = {
#endif
};
static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma)
{
- vm_flags_set(vma, VM_DONTEXPAND);
+ vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY);
vma->vm_ops = &aio_ring_vm_ops;
return 0;
}
static const struct file_operations aio_ring_fops = {
--
2.39.2
[PATCH OLK-5.10] fs: aio: set VM_DONTCOPY in mmap to fix NULL-pointer-dereference error
by Zizhi Wo 14 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8927
--------------------------------
[BUG]
Recently, our internal syzkaller testing uncovered a null pointer
dereference issue:
BUG: kernel NULL pointer dereference, address: 0000000000000000
...
[ 51.111664] filemap_read_folio+0x25/0xe0
[ 51.112410] filemap_fault+0xad7/0x1250
[ 51.113112] __do_fault+0x4b/0x460
[ 51.113699] do_pte_missing+0x5bc/0x1db0
[ 51.114250] ? __pte_offset_map+0x23/0x170
[ 51.114822] __handle_mm_fault+0x9f8/0x1680
[ 51.115408] handle_mm_fault+0x24c/0x570
[ 51.115958] do_user_addr_fault+0x226/0xa50
...
Crash analysis showed the file involved was an AIO ring file.
[CAUSE]
      PARENT process                    CHILD process
t=0   io_setup(1, &ctx)
      [access ctx addr]
      fork()
      io_destroy
        vm_munmap // does not affect the child's vma
        percpu_ref_put
          ...
          put_aio_ring_file
t=1                                     [access ctx addr] // pagefault
                                          ...
                                          __do_fault
                                            filemap_fault
                                              max_idx = DIV_ROUND_UP
t=2         truncate_setsize
            truncate_pagecache
t=3                                           filemap_get_folio // no folio
                                              __filemap_get_folio(FGP_CREAT, ...)
                                              filemap_read_folio(xxx->read_folio)
At t=0, the parent process calls io_setup() and touches the ctx address,
then calls fork(). The child process gets its own VMA but without any
PTEs. The parent then calls io_destroy(). Before i_size is truncated to
0, at t=1 the child accesses the AIO ctx address and triggers a page
fault. After the max_idx check passes, at t=2 the parent calls
truncate_setsize() and truncate_pagecache(). At t=3 the child fails to
obtain the folio, falls into the "page_not_uptodate" path, and hits the
NULL pointer dereference because AIO does not implement ->read_folio().
[Fix]
Fix this by marking the AIO ring buffer VMA with VM_DONTCOPY so that
fork()'s dup_mmap() skips it entirely. This is the correct semantics
because:
1) The child's ioctx_table is already reset to NULL by mm_init_aio() during
fork(), so the child has no AIO context and no way to perform any AIO
operations on this mapping.
2) The AIO ring VMA is only meaningful in conjunction with its associated
kioctx, which is never inherited across fork(). So a child process with no
AIO context has no legitimate reason to access the ring buffer. Delivering
SIGSEGV on such an erroneous access is preferable to a kernel crash.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Zizhi Wo <wozizhi(a)huawei.com>
---
fs/aio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/aio.c b/fs/aio.c
index 78aaeaf35436..4ab74699cd3d 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -366,11 +366,11 @@ static const struct vm_operations_struct aio_ring_vm_ops = {
#endif
};
static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma)
{
- vma->vm_flags |= VM_DONTEXPAND;
+ vma->vm_flags |= VM_DONTEXPAND | VM_DONTCOPY;
vma->vm_ops = &aio_ring_vm_ops;
return 0;
}
static const struct file_operations aio_ring_fops = {
--
2.39.2
Luo Gengkun (1):
perf: Fix Wdeclaration-after-statement
Peter Zijlstra (1):
perf: Fix __perf_event_overflow() vs perf_remove_from_context() race
kernel/events/core.c | 57 ++++++++++++++++++++++++++++++++++++++++----
1 file changed, 52 insertions(+), 5 deletions(-)
--
2.34.1
[PATCH OLK-6.6] Bluetooth: L2CAP: Fix use-after-free in l2cap_unregister_user
by Xia Fukun 14 Apr '26
From: Shaurya Rane <ssrane_b23(a)ee.vjti.ac.in>
stable inclusion
from stable-v6.6.130
commit 11a87dd5df428a4b79a84d2790cac7f3c73f1f0d
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14101
CVE: CVE-2026-23461
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 752a6c9596dd25efd6978a73ff21f3b592668f4a ]
After commit ab4eedb790ca ("Bluetooth: L2CAP: Fix corrupted list in
hci_chan_del"), l2cap_conn_del() uses conn->lock to protect access to
conn->users. However, l2cap_register_user() and l2cap_unregister_user()
don't use conn->lock, creating a race condition where these functions can
access conn->users and conn->hchan concurrently with l2cap_conn_del().
This can lead to use-after-free and list corruption bugs, as reported
by syzbot.
Fix this by changing l2cap_register_user() and l2cap_unregister_user()
to use conn->lock instead of hci_dev_lock(), ensuring consistent locking
for the l2cap_conn structure.
Reported-by: syzbot+14b6d57fb728e27ce23c(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=14b6d57fb728e27ce23c
Fixes: ab4eedb790ca ("Bluetooth: L2CAP: Fix corrupted list in hci_chan_del")
Signed-off-by: Shaurya Rane <ssrane_b23(a)ee.vjti.ac.in>
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz(a)intel.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Xia Fukun <xiafukun(a)huawei.com>
---
net/bluetooth/l2cap_core.c | 20 ++++++++------------
1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index 76c52d18b9b2..09ef1ccc6960 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -1686,17 +1686,15 @@ static void l2cap_info_timeout(struct work_struct *work)
int l2cap_register_user(struct l2cap_conn *conn, struct l2cap_user *user)
{
- struct hci_dev *hdev = conn->hcon->hdev;
int ret;
/* We need to check whether l2cap_conn is registered. If it is not, we
- * must not register the l2cap_user. l2cap_conn_del() is unregisters
- * l2cap_conn objects, but doesn't provide its own locking. Instead, it
- * relies on the parent hci_conn object to be locked. This itself relies
- * on the hci_dev object to be locked. So we must lock the hci device
- * here, too. */
+ * must not register the l2cap_user. l2cap_conn_del() unregisters
+ * l2cap_conn objects under conn->lock, and we use the same lock here
+ * to protect access to conn->users and conn->hchan.
+ */
- hci_dev_lock(hdev);
+ mutex_lock(&conn->lock);
if (!list_empty(&user->list)) {
ret = -EINVAL;
@@ -1717,16 +1715,14 @@ int l2cap_register_user(struct l2cap_conn *conn, struct l2cap_user *user)
ret = 0;
out_unlock:
- hci_dev_unlock(hdev);
+ mutex_unlock(&conn->lock);
return ret;
}
EXPORT_SYMBOL(l2cap_register_user);
void l2cap_unregister_user(struct l2cap_conn *conn, struct l2cap_user *user)
{
- struct hci_dev *hdev = conn->hcon->hdev;
-
- hci_dev_lock(hdev);
+ mutex_lock(&conn->lock);
if (list_empty(&user->list))
goto out_unlock;
@@ -1735,7 +1731,7 @@ void l2cap_unregister_user(struct l2cap_conn *conn, struct l2cap_user *user)
user->remove(conn, user);
out_unlock:
- hci_dev_unlock(hdev);
+ mutex_unlock(&conn->lock);
}
EXPORT_SYMBOL(l2cap_unregister_user);
--
2.34.1
xSched dmem
Liu Kai (2):
xSched/cgroup: prevent to init xse when xcu cgroup is disabled
xSched: remove redundant lookup in alloc_ctx_from_vstream to enforce
allocation semantics
include/linux/xsched.h | 2 ++
kernel/xsched/cgroup.c | 2 +-
kernel/xsched/core.c | 3 +++
kernel/xsched/vstream.c | 4 ----
4 files changed, 6 insertions(+), 5 deletions(-)
--
2.34.1
14 Apr '26
From: Jeff Layton <jlayton(a)kernel.org>
stable inclusion
from stable-v6.6.130
commit 8afb437ea1f70cacb4bbdf11771fb5c4d720b965
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14128
CVE: CVE-2026-31402
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 5133b61aaf437e5f25b1b396b14242a6bb0508e2 ]
The NFSv4.0 replay cache uses a fixed 112-byte inline buffer
(rp_ibuf[NFSD4_REPLAY_ISIZE]) to store encoded operation responses.
This size was calculated based on OPEN responses and does not account
for LOCK denied responses, which include the conflicting lock owner as
a variable-length field up to 1024 bytes (NFS4_OPAQUE_LIMIT).
When a LOCK operation is denied due to a conflict with an existing lock
that has a large owner, nfsd4_encode_operation() copies the full encoded
response into the undersized replay buffer via read_bytes_from_xdr_buf()
with no bounds check. This results in a slab-out-of-bounds write of up
to 944 bytes past the end of the buffer, corrupting adjacent heap memory.
This can be triggered remotely by an unauthenticated attacker with two
cooperating NFSv4.0 clients: one sets a lock with a large owner string,
then the other requests a conflicting lock to provoke the denial.
We could fix this by increasing NFSD4_REPLAY_ISIZE to allow for a full
opaque, but that would increase the size of every stateowner, even though
most lockowners are not that large.
Instead, fix this by checking the encoded response length against
NFSD4_REPLAY_ISIZE before copying into the replay buffer. If the
response is too large, set rp_buflen to 0 to skip caching the replay
payload. The status is still cached, and the client already received the
correct response on the original request.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable(a)kernel.org
Reported-by: Nicholas Carlini <npc(a)anthropic.com>
Tested-by: Nicholas Carlini <npc(a)anthropic.com>
Signed-off-by: Jeff Layton <jlayton(a)kernel.org>
Signed-off-by: Chuck Lever <chuck.lever(a)oracle.com>
[ replaced `op_status_offset + XDR_UNIT` with existing `post_err_offset` variable ]
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/nfsd/nfs4xdr.c | 9 +++++++--
fs/nfsd/state.h | 17 ++++++++++++-----
2 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 3eff780fd8da..3c9d19321df5 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -5420,9 +5420,14 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
int len = xdr->buf->len - post_err_offset;
so->so_replay.rp_status = op->status;
- so->so_replay.rp_buflen = len;
- read_bytes_from_xdr_buf(xdr->buf, post_err_offset,
+ if (len <= NFSD4_REPLAY_ISIZE) {
+ so->so_replay.rp_buflen = len;
+ read_bytes_from_xdr_buf(xdr->buf,
+ post_err_offset,
so->so_replay.rp_buf, len);
+ } else {
+ so->so_replay.rp_buflen = 0;
+ }
}
status:
*p = op->status;
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 5da7785609b0..d1450fe7e2b9 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -430,11 +430,18 @@ struct nfs4_client_reclaim {
struct xdr_netobj cr_princhash;
};
-/* A reasonable value for REPLAY_ISIZE was estimated as follows:
- * The OPEN response, typically the largest, requires
- * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) + 8(verifier) +
- * 4(deleg. type) + 8(deleg. stateid) + 4(deleg. recall flag) +
- * 20(deleg. space limit) + ~32(deleg. ace) = 112 bytes
+/*
+ * REPLAY_ISIZE is sized for an OPEN response with delegation:
+ * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) +
+ * 8(verifier) + 4(deleg. type) + 8(deleg. stateid) +
+ * 4(deleg. recall flag) + 20(deleg. space limit) +
+ * ~32(deleg. ace) = 112 bytes
+ *
+ * Some responses can exceed this. A LOCK denial includes the conflicting
+ * lock owner, which can be up to 1024 bytes (NFS4_OPAQUE_LIMIT). Responses
+ * larger than REPLAY_ISIZE are not cached in rp_ibuf; only rp_status is
+ * saved. Enlarging this constant increases the size of every
+ * nfs4_stateowner.
*/
#define NFSD4_REPLAY_ISIZE 112
--
2.52.0
[PATCH OLK-6.6] NFSD: Hold net reference for the lifetime of /proc/fs/nfs/exports fd
by Li Lingfeng 13 Apr '26
From: Chuck Lever <chuck.lever(a)oracle.com>
stable inclusion
from stable-v6.6.130
commit d1a19217995df9c7e4118f5a2820c5032fef2945
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14129
CVE: CVE-2026-31403
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit e7fcf179b82d3a3730fd8615da01b087cc654d0b upstream.
The /proc/fs/nfs/exports proc entry is created at module init
and persists for the module's lifetime. exports_proc_open()
captures the caller's current network namespace and stores
its svc_export_cache in seq->private, but takes no reference
on the namespace. If the namespace is subsequently torn down
(e.g. container destruction after the opener does setns() to a
different namespace), nfsd_net_exit() calls nfsd_export_shutdown()
which frees the cache. Subsequent reads on the still-open fd
dereference the freed cache_detail, walking a freed hash table.
Hold a reference on the struct net for the lifetime of the open
file descriptor. This prevents nfsd_net_exit() from running --
and thus prevents nfsd_export_shutdown() from freeing the cache
-- while any exports fd is open. cache_detail already stores
its net pointer (cd->net, set by cache_create_net()), so
exports_release() can retrieve it without additional per-file
storage.
Reported-by: Misbah Anjum N <misanjum(a)linux.ibm.com>
Closes: https://lore.kernel.org/linux-nfs/dcd371d3a95815a84ba7de52cef447b8@linux.ib…
Fixes: 96d851c4d28d ("nfsd: use proper net while reading "exports" file")
Cc: stable(a)vger.kernel.org
Reviewed-by: Jeff Layton <jlayton(a)kernel.org>
Reviewed-by: NeilBrown <neil(a)brown.name>
Tested-by: Olga Kornievskaia <okorniev(a)redhat.com>
Signed-off-by: Chuck Lever <chuck.lever(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/nfsd/nfsctl.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index b821c50b062f..f80cbd734988 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -151,9 +151,19 @@ static int exports_net_open(struct net *net, struct file *file)
seq = file->private_data;
seq->private = nn->svc_export_cache;
+ get_net(net);
return 0;
}
+static int exports_release(struct inode *inode, struct file *file)
+{
+ struct seq_file *seq = file->private_data;
+ struct cache_detail *cd = seq->private;
+
+ put_net(cd->net);
+ return seq_release(inode, file);
+}
+
static int exports_nfsd_open(struct inode *inode, struct file *file)
{
return exports_net_open(inode->i_sb->s_fs_info, file);
@@ -163,7 +173,7 @@ static const struct file_operations exports_nfsd_operations = {
.open = exports_nfsd_open,
.read = seq_read,
.llseek = seq_lseek,
- .release = seq_release,
+ .release = exports_release,
};
static int export_features_show(struct seq_file *m, void *v)
@@ -1476,7 +1486,7 @@ static const struct proc_ops exports_proc_ops = {
.proc_open = exports_proc_open,
.proc_read = seq_read,
.proc_lseek = seq_lseek,
- .proc_release = seq_release,
+ .proc_release = exports_release,
};
static int create_proc_exports_entry(void)
--
2.52.0
[PATCH OLK-5.10] perf: Fix __perf_event_overflow() vs perf_remove_from_context() race
by Luo Gengkun 13 Apr '26
From: Peter Zijlstra <peterz(a)infradead.org>
mainline inclusion
from mainline-v7.0-rc2
commit c9bc1753b3cc41d0e01fbca7f035258b5f4db0ae
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13907
CVE: CVE-2026-23271
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
----------------------------------------------------------------------
Make sure that __perf_event_overflow() runs with IRQs disabled for all
possible callchains. Specifically, software events can end up running it
with only preemption disabled.
This opens up a race vs perf_event_exit_event() and friends that will go
and free various things the overflow path expects to be present, like
the BPF program.
Fixes: 592903cdcbf6 ("perf_counter: add an event_list")
Reported-by: Simond Hu <cmdhh1767(a)gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Tested-by: Simond Hu <cmdhh1767(a)gmail.com>
Link: https://patch.msgid.link/20260224122909.GV1395416@noisy.programming.kicks-a…
Conflicts:
kernel/events/core.c
[Fix ctx conflicts and guard conflict.]
Signed-off-by: Luo Gengkun <luogengkun2(a)huawei.com>
---
kernel/events/core.c | 56 ++++++++++++++++++++++++++++++++++++++++----
1 file changed, 52 insertions(+), 4 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index d1ffd5fb9c6f..a0ac222caca0 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8972,6 +8972,13 @@ int perf_event_overflow(struct perf_event *event,
struct perf_sample_data *data,
struct pt_regs *regs)
{
+ /*
+ * Entry point from hardware PMI, interrupts should be disabled here.
+ * This serializes us against perf_event_remove_from_context() in
+ * things like perf_event_release_kernel().
+ */
+ lockdep_assert_irqs_disabled();
+
return __perf_event_overflow(event, 1, data, regs);
}
@@ -9051,7 +9058,21 @@ static void perf_swevent_event(struct perf_event *event, u64 nr,
struct pt_regs *regs)
{
struct hw_perf_event *hwc = &event->hw;
+ unsigned long flags;
+ /*
+ * This is:
+ * - software preempt
+ * - tracepoint preempt
+ * - tp_target_task irq (ctx->lock)
+ * - uprobes preempt/irq
+ * - kprobes preempt/irq
+ * - hw_breakpoint irq
+ *
+ * Any of these are sufficient to hold off RCU and thus ensure @event
+ * exists.
+ */
+ lockdep_assert_preemption_disabled();
local64_add(nr, &event->count);
if (!regs)
@@ -9060,19 +9081,36 @@ static void perf_swevent_event(struct perf_event *event, u64 nr,
if (!is_sampling_event(event))
return;
+ /*
+ * Serialize against event_function_call() IPIs like normal overflow
+ * event handling. Specifically, must not allow
+ * perf_event_release_kernel() -> perf_remove_from_context() to make
+ * progress and 'release' the event from under us.
+ */
+ local_irq_save(flags);
+ if (event->state != PERF_EVENT_STATE_ACTIVE) {
+ local_irq_restore(flags);
+ return;
+ }
+
if ((event->attr.sample_type & PERF_SAMPLE_PERIOD) && !event->attr.freq) {
data->period = nr;
+ local_irq_restore(flags);
return perf_swevent_overflow(event, 1, data, regs);
} else
data->period = event->hw.last_period;
- if (nr == 1 && hwc->sample_period == 1 && !event->attr.freq)
+ if (nr == 1 && hwc->sample_period == 1 && !event->attr.freq) {
+ local_irq_restore(flags);
return perf_swevent_overflow(event, 1, data, regs);
+ }
- if (local64_add_negative(nr, &hwc->period_left))
+ if (local64_add_negative(nr, &hwc->period_left)) {
+ local_irq_restore(flags);
return;
-
+ }
perf_swevent_overflow(event, 0, data, regs);
+ local_irq_restore(flags);
}
static int perf_exclude_event(struct perf_event *event,
@@ -9476,6 +9514,11 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
struct perf_sample_data data;
struct perf_event *event;
+ /*
+ * Per being a tracepoint, this runs with preemption disabled.
+ */
+ lockdep_assert_preemption_disabled();
+
struct perf_raw_record raw = {
.frag = {
.size = entry_size,
@@ -9908,6 +9951,11 @@ void perf_bp_event(struct perf_event *bp, void *data)
struct perf_sample_data sample;
struct pt_regs *regs = data;
+ /*
+ * Exception context, will have interrupts disabled.
+ */
+ lockdep_assert_irqs_disabled();
+
perf_sample_data_init(&sample, bp->attr.bp_addr, 0);
if (!bp->hw.state && !perf_exclude_event(bp, regs))
@@ -10361,7 +10409,7 @@ static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer)
if (regs && !perf_exclude_event(event, regs)) {
if (!(event->attr.exclude_idle && is_idle_task(current)))
- if (__perf_event_overflow(event, 1, &data, regs))
+ if (perf_event_overflow(event, &data, regs))
ret = HRTIMER_NORESTART;
}
--
2.34.1
Alan Maguire (16):
kbuild,bpf: Switch to using --btf_features for pahole v1.26 and later
kbuild, bpf: Use test-ge check for v1.25-only pahole
libbpf: Add btf__distill_base() creating split BTF with distilled base
BTF
selftests/bpf: Test distilled base, split BTF generation
libbpf: Split BTF relocation
selftests/bpf: Extend distilled BTF tests to cover BTF relocation
resolve_btfids: Handle presence of .BTF.base section
libbpf: BTF relocation followup fixing naming, loop logic
module, bpf: Store BTF base pointer in struct module
libbpf: Split field iter code into its own file kernel
libbpf,bpf: Share BTF relocate-related code with kernel
kbuild,bpf: Add module-specific pahole flags for distilled base BTF
selftests/bpf: Add kfunc_call test for simple dtor in bpf_testmod
bpf: fix build when CONFIG_DEBUG_INFO_BTF[_MODULES] is undefined
libbpf: Fix error handling in btf__distill_base()
libbpf: Fix license for btf_relocate.c
Alexander Lobakin (1):
bitops: make BYTES_TO_BITS() treewide-available
Alexei Starovoitov (2):
s390/bpf: Fix indirect trampoline generation
bpf: Introduce "volatile compare" macros
Andrii Nakryiko (13):
bpf: Emit global subprog name in verifier logs
bpf: Validate global subprogs lazily
selftests/bpf: Add lazy global subprog validation tests
libbpf: Add btf__new_split() API that was declared but not implemented
bpf: move sleepable flag from bpf_prog_aux to bpf_prog
libbpf: Add BTF field iterator
libbpf: Make use of BTF field iterator in BPF linker code
libbpf: Make use of BTF field iterator in BTF handling code
bpftool: Use BTF field iterator in btfgen
libbpf: Remove callback-based type/string BTF field visitor helpers
bpf: extract iterator argument type and name validation logic
bpf: allow passing struct bpf_iter_<type> as kfunc arguments
selftests/bpf: test passing iterator to a kfunc
Benjamin Tissoires (1):
bpf: introduce in_sleepable() helper
Christophe Leroy (2):
bpf: Remove arch_unprotect_bpf_trampoline()
bpf: Check return from set_memory_rox()
Chuyi Zhou (13):
cgroup: Prepare for using css_task_iter_*() in BPF
bpf: Introduce css_task open-coded iterator kfuncs
bpf: Introduce task open coded iterator kfuncs
bpf: Introduce css open-coded iterator kfuncs
bpf: teach the verifier to enforce css_iter and task_iter in RCU CS
bpf: Let bpf_iter_task_new accept null task ptr
selftests/bpf: rename bpf_iter_task.c to bpf_iter_tasks.c
selftests/bpf: Add tests for open-coded task and css iter
bpf: Relax allowlist for css_task iter
selftests/bpf: Add tests for css_task iter combining with cgroup iter
selftests/bpf: Add test for using css_task iter in sleepable progs
bpf: Let verifier consider {task,cgroup} is trusted in bpf_iter_reg
selftests/bpf: get trusted cgrp from bpf_iter__cgroup directly
Daniel Xu (3):
bpf: btf: Support flags for BTF_SET8 sets
bpf: btf: Add BTF_KFUNCS_START/END macro pair
bpf: treewide: Annotate BPF kfuncs in BTF
Dave Marchevsky (6):
bpf: Don't explicitly emit BTF for struct btf_iter_num
selftests/bpf: Rename bpf_iter_task_vma.c to bpf_iter_task_vmas.c
bpf: Introduce task_vma open-coded iterator kfuncs
selftests/bpf: Add tests for open-coded task_vma iter
bpf: Add __bpf_kfunc_{start,end}_defs macros
bpf: Add __bpf_hook_{start,end} macros
David Vernet (2):
bpf: Add ability to pin bpf timer to calling CPU
selftests/bpf: Test pinning bpf timer to a core
Eduard Zingerman (2):
libbpf: Make btf_parse_elf process .BTF.base transparently
selftests/bpf: Check if distilled base inherits source endianness
Geliang Tang (1):
bpf, btf: Check btf for register_bpf_struct_ops
Hou Tao (6):
bpf: Free dynamically allocated bits in bpf_iter_bits_destroy()
bpf: Add bpf_mem_alloc_check_size() helper
bpf: Check the validity of nr_words in bpf_iter_bits_new()
bpf: Use __u64 to save the bits in bits iterator
selftests/bpf: Add three test cases for bits_iter
selftests/bpf: Use -4095 as the bad address for bits iterator
Kui-Feng Lee (29):
bpf: refactory struct_ops type initialization to a function.
bpf: get type information with BTF_ID_LIST
bpf, net: introduce bpf_struct_ops_desc.
bpf: add struct_ops_tab to btf.
bpf: make struct_ops_map support btfs other than btf_vmlinux.
bpf: pass btf object id in bpf_map_info.
bpf: lookup struct_ops types from a given module BTF.
bpf: pass attached BTF to the bpf_struct_ops subsystem
bpf: hold module refcnt in bpf_struct_ops map creation and prog
verification.
bpf: validate value_type
bpf, net: switch to dynamic registration
libbpf: Find correct module BTFs for struct_ops maps and progs.
bpf: export btf_ctx_access to modules.
selftests/bpf: test case for register_bpf_struct_ops().
bpf: Fix error checks against bpf_get_btf_vmlinux().
bpf: Remove an unnecessary check.
selftests/bpf: Suppress warning message of an unused variable.
bpf: add btf pointer to struct bpf_ctx_arg_aux.
bpf: Move __kfunc_param_match_suffix() to btf.c.
bpf: Create argument information for nullable arguments.
selftests/bpf: Test PTR_MAYBE_NULL arguments of struct_ops operators.
libbpf: Set btf_value_type_id of struct bpf_map for struct_ops.
libbpf: Convert st_ops->data to shadow type.
bpftool: Generated shadow variables for struct_ops maps.
bpftool: Add an example for struct_ops map and shadow type.
selftests/bpf: Test if shadow types work correctly.
bpf, net: validate struct_ops when updating value.
bpf: struct_ops supports more than one page for trampolines.
selftests/bpf: Test struct_ops maps with a large number of struct_ops
program.
Kumar Kartikeya Dwivedi (4):
bpf: Allow calling static subprogs while holding a bpf_spin_lock
selftests/bpf: Add test for static subprog call in lock cs
bpf: Transfer RCU lock state between subprog calls
selftests/bpf: Add tests for RCU lock transfer between subprogs
Luo Gengkun (7):
bpf: Fix kabi-breakage for bpf_func_info_aux
bpf: Fix kabi-breakage for bpf_tramp_image
bpf: Fix kabi for bpf_attr
bpf_verifier: Fix kabi for bpf_verifier_env
bpf: Fix kabi for bpf_ctx_arg_aux
bpf: Fix kabi for bpf_prog_aux and bpf_prog
selftests/bpf: modify test_loader that didn't support running
bpf_prog_type_syscall programs
Martin KaFai Lau (4):
libbpf: Ensure undefined bpf_attr field stays 0
bpf: Remove unnecessary err < 0 check in
bpf_struct_ops_map_update_elem
bpf: Fix a crash when btf_parse_base() returns an error pointer
bpf: Reject struct_ops registration that uses module ptr and the
module btf_id is missing
Masahiro Yamada (1):
kbuild: avoid too many execution of scripts/pahole-flags.sh
Matthieu Baerts (1):
bpf: fix compilation error without CGROUPS
Peter Zijlstra (6):
cfi: Flip headers
x86/cfi,bpf: Fix BPF JIT call
x86/cfi,bpf: Fix bpf_callback_t CFI
x86/cfi,bpf: Fix bpf_struct_ops CFI
cfi: Add CFI_NOSEAL()
bpf: Fix dtor CFI
Pu Lehui (8):
riscv, bpf: Fix unpredictable kernel crash about RV64 struct_ops
bpf: Fix kabi breakage in struct module
riscv, bpf: Fix out-of-bounds issue when preparing trampoline image
selftests/bpf: Fix btf leak on new btf alloc failure in btf_distill
test
libbpf: Fix return zero when elf_begin failed
libbpf: Fix incorrect traversal end type ID when marking
BTF_IS_EMBEDDED
selftests/bpf: Add distilled BTF test about marking BTF_IS_EMBEDDED
selftests/bpf: Add file_read_pattern to gitignore
Song Liu (8):
bpf: Charge modmem for struct_ops trampoline
bpf: Let bpf_prog_pack_free handle any pointer
bpf: Adjust argument names of arch_prepare_bpf_trampoline()
bpf: Add helpers for trampoline image management
bpf, x86: Adjust arch_prepare_bpf_trampoline return value
bpf: Add arch_bpf_trampoline_size()
bpf: Use arch_bpf_trampoline_size
x86, bpf: Use bpf_prog_pack for bpf trampoline
T.J. Mercier (1):
bpf, docs: Fix broken link to renamed bpf_iter_task_vmas.c
Tony Ambardar (1):
libbpf: Ensure new BTF objects inherit input endianness
Yafang Shao (2):
bpf: Add bits iterator
selftests/bpf: Add selftest for bits iter
Documentation/bpf/bpf_iterators.rst | 2 +-
Documentation/bpf/kfuncs.rst | 14 +-
MAINTAINERS | 2 +-
Makefile | 4 +-
arch/arm64/kernel/bpf-rvi.c | 4 +-
arch/arm64/net/bpf_jit_comp.c | 55 +-
arch/riscv/include/asm/cfi.h | 3 +-
arch/riscv/kernel/cfi.c | 2 +-
arch/riscv/net/bpf_jit_comp64.c | 48 +-
arch/s390/net/bpf_jit_comp.c | 59 +-
arch/x86/include/asm/cfi.h | 126 ++-
arch/x86/kernel/alternative.c | 87 +-
arch/x86/kernel/cfi.c | 4 +-
arch/x86/net/bpf_jit_comp.c | 261 ++++--
block/blk-cgroup.c | 4 +-
drivers/hid/bpf/hid_bpf_dispatch.c | 12 +-
fs/proc/stat.c | 4 +-
include/asm-generic/Kbuild | 1 +
include/asm-generic/cfi.h | 5 +
include/linux/bitops.h | 2 +
include/linux/bpf.h | 130 ++-
include/linux/bpf_mem_alloc.h | 3 +
include/linux/bpf_verifier.h | 21 +-
include/linux/btf.h | 105 +++
include/linux/btf_ids.h | 21 +-
include/linux/cfi.h | 12 +
include/linux/cgroup.h | 12 +-
include/linux/filter.h | 2 +-
include/linux/module.h | 5 +
include/uapi/linux/bpf.h | 16 +-
kernel/bpf-rvi/common_kfuncs.c | 4 +-
kernel/bpf/Makefile | 8 +-
kernel/bpf/bpf_iter.c | 12 +-
kernel/bpf/bpf_struct_ops.c | 748 ++++++++++++------
kernel/bpf/bpf_struct_ops_types.h | 12 -
kernel/bpf/btf.c | 431 ++++++++--
kernel/bpf/cgroup_iter.c | 65 +-
kernel/bpf/core.c | 76 +-
kernel/bpf/cpumask.c | 18 +-
kernel/bpf/dispatcher.c | 7 +-
kernel/bpf/helpers.c | 202 ++++-
kernel/bpf/map_iter.c | 10 +-
kernel/bpf/memalloc.c | 14 +-
kernel/bpf/syscall.c | 12 +-
kernel/bpf/task_iter.c | 242 +++++-
kernel/bpf/trampoline.c | 99 ++-
kernel/bpf/verifier.c | 317 ++++++--
kernel/cgroup/cgroup.c | 18 +-
kernel/cgroup/cpuset.c | 4 +-
kernel/cgroup/rstat.c | 13 +-
kernel/events/core.c | 2 +-
kernel/module/main.c | 5 +-
kernel/sched/bpf_sched.c | 8 +-
kernel/sched/cpuacct.c | 4 +-
kernel/trace/bpf_trace.c | 12 +-
kernel/trace/trace_probe.c | 2 -
net/bpf/bpf_dummy_struct_ops.c | 72 +-
net/bpf/test_run.c | 30 +-
net/core/filter.c | 33 +-
net/core/xdp.c | 10 +-
net/ipv4/bpf_tcp_ca.c | 93 ++-
net/ipv4/fou_bpf.c | 10 +-
net/ipv4/tcp_bbr.c | 4 +-
net/ipv4/tcp_cong.c | 6 +-
net/ipv4/tcp_cubic.c | 4 +-
net/ipv4/tcp_dctcp.c | 4 +-
net/netfilter/nf_conntrack_bpf.c | 10 +-
net/netfilter/nf_nat_bpf.c | 10 +-
net/socket.c | 8 +-
net/xfrm/xfrm_interface_bpf.c | 10 +-
scripts/Makefile.btf | 33 +
scripts/Makefile.modfinal | 2 +-
scripts/pahole-flags.sh | 30 -
.../bpf/bpftool/Documentation/bpftool-gen.rst | 58 +-
tools/bpf/bpftool/gen.c | 253 +++++-
tools/bpf/resolve_btfids/main.c | 8 +
tools/include/linux/bitops.h | 2 +
tools/include/uapi/linux/bpf.h | 14 +-
tools/lib/bpf/Build | 2 +-
tools/lib/bpf/bpf.c | 4 +-
tools/lib/bpf/bpf.h | 4 +-
tools/lib/bpf/btf.c | 704 ++++++++++++-----
tools/lib/bpf/btf.h | 36 +
tools/lib/bpf/btf_iter.c | 177 +++++
tools/lib/bpf/btf_relocate.c | 519 ++++++++++++
tools/lib/bpf/libbpf.c | 86 +-
tools/lib/bpf/libbpf.map | 4 +-
tools/lib/bpf/libbpf_internal.h | 29 +-
tools/lib/bpf/libbpf_probes.c | 1 +
tools/lib/bpf/linker.c | 58 +-
tools/perf/util/probe-finder.c | 4 +-
tools/testing/selftests/bpf/.gitignore | 1 +
.../testing/selftests/bpf/bpf_experimental.h | 96 +++
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 160 +++-
.../selftests/bpf/bpf_testmod/bpf_testmod.h | 61 ++
.../bpf/bpf_testmod/bpf_testmod_kfunc.h | 9 +
.../selftests/bpf/prog_tests/bpf_iter.c | 44 +-
.../selftests/bpf/prog_tests/btf_distill.c | 692 ++++++++++++++++
.../selftests/bpf/prog_tests/cgroup_iter.c | 33 +
.../testing/selftests/bpf/prog_tests/iters.c | 209 +++++
.../selftests/bpf/prog_tests/kfunc_call.c | 1 +
.../selftests/bpf/prog_tests/rcu_read_lock.c | 6 +
.../selftests/bpf/prog_tests/spin_lock.c | 2 +
.../prog_tests/test_struct_ops_maybe_null.c | 46 ++
.../bpf/prog_tests/test_struct_ops_module.c | 86 ++
.../prog_tests/test_struct_ops_multi_pages.c | 30 +
.../testing/selftests/bpf/prog_tests/timer.c | 4 +
.../selftests/bpf/prog_tests/verifier.c | 4 +
...f_iter_task_vma.c => bpf_iter_task_vmas.c} | 0
.../{bpf_iter_task.c => bpf_iter_tasks.c} | 0
tools/testing/selftests/bpf/progs/iters_css.c | 72 ++
.../selftests/bpf/progs/iters_css_task.c | 102 +++
.../testing/selftests/bpf/progs/iters_task.c | 41 +
.../selftests/bpf/progs/iters_task_failure.c | 105 +++
.../selftests/bpf/progs/iters_task_vma.c | 43 +
.../selftests/bpf/progs/iters_testmod_seq.c | 50 ++
.../selftests/bpf/progs/kfunc_call_test.c | 37 +
.../selftests/bpf/progs/rcu_read_lock.c | 120 +++
.../bpf/progs/struct_ops_maybe_null.c | 29 +
.../bpf/progs/struct_ops_maybe_null_fail.c | 24 +
.../selftests/bpf/progs/struct_ops_module.c | 37 +
.../bpf/progs/struct_ops_multi_pages.c | 102 +++
.../selftests/bpf/progs/test_global_func12.c | 4 +-
.../selftests/bpf/progs/test_spin_lock.c | 65 ++
.../selftests/bpf/progs/test_spin_lock_fail.c | 44 ++
tools/testing/selftests/bpf/progs/timer.c | 63 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 232 ++++++
.../bpf/progs/verifier_global_subprogs.c | 92 +++
.../selftests/bpf/progs/verifier_spin_lock.c | 2 +-
.../bpf/progs/verifier_subprog_precision.c | 4 +-
tools/testing/selftests/bpf/test_loader.c | 10 +-
131 files changed, 7326 insertions(+), 1139 deletions(-)
create mode 100644 include/asm-generic/cfi.h
delete mode 100644 kernel/bpf/bpf_struct_ops_types.h
create mode 100644 scripts/Makefile.btf
delete mode 100755 scripts/pahole-flags.sh
create mode 100644 tools/lib/bpf/btf_iter.c
create mode 100644 tools/lib/bpf/btf_relocate.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/btf_distill.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_multi_pages.c
rename tools/testing/selftests/bpf/progs/{bpf_iter_task_vma.c => bpf_iter_task_vmas.c} (100%)
rename tools/testing/selftests/bpf/progs/{bpf_iter_task.c => bpf_iter_tasks.c} (100%)
create mode 100644 tools/testing/selftests/bpf/progs/iters_css.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_css_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_failure.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_vma.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_multi_pages.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_bits_iter.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
--
2.34.1
[PATCH OLK-5.10] media: dvb-core: fix wrong reinitialization of ringbuffer on reopen
by Chen Jinghuang 13 Apr '26
From: Jens Axboe <axboe(a)kernel.dk>
mainline inclusion
from mainline-v7.0-rc2
commit bfbc0b5b32a8f28ce284add619bf226716a59bc0
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13873
CVE: CVE-2026-23253
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
dvb_dvr_open() calls dvb_ringbuffer_init() when a new reader opens the
DVR device. dvb_ringbuffer_init() calls init_waitqueue_head(), which
reinitializes the waitqueue list head to empty.
Since dmxdev->dvr_buffer.queue is a shared waitqueue (all opens of the
same DVR device share it), this orphans any existing waitqueue entries
from io_uring poll or epoll, leaving them with stale prev/next pointers
while the list head is reset to {self, self}.
The waitqueue and spinlock in dvr_buffer are already properly
initialized once in dvb_dmxdev_init(). The open path only needs to
reset the buffer data pointer, size, and read/write positions.
Replace the dvb_ringbuffer_init() call in dvb_dvr_open() with direct
assignment of data/size and a call to dvb_ringbuffer_reset(), which
properly resets pread, pwrite, and error with correct memory ordering
without touching the waitqueue or spinlock.
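The hazard described above can be reproduced in user space with a miniature circular list standing in for the shared waitqueue head. This is a sketch with illustrative names, not the kernel's <linux/list.h> or waitqueue code: once a waiter is queued, re-running the init on the shared head resets it to {self, self}, so the waiter is unreachable (never woken) while still holding stale links back to the head.

```c
/* User-space model of a circular wait list; an illustration only,
 * not the kernel's list or waitqueue implementation. */
struct node { struct node *prev, *next; };

static void list_init(struct node *head)
{
	head->prev = head->next = head;
}

static void list_add(struct node *head, struct node *n)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

/* Walk forward from the head; returns 1 if the waiter is reachable. */
static int head_reaches(struct node *head, struct node *n)
{
	struct node *p;

	for (p = head->next; p != head; p = p->next)
		if (p == n)
			return 1;
	return 0;
}
```

This is why the fix resets only the buffer fields and leaves the queue/lock initialization to the one-time dvb_dmxdev_init() path.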
Cc: stable(a)vger.kernel.org
Fixes: 34731df288a5f ("V4L/DVB (3501): Dmxdev: use dvb_ringbuffer")
Reported-by: syzbot+ab12f0c08dd7ab8d057c(a)syzkaller.appspotmail.com
Tested-by: syzbot+ab12f0c08dd7ab8d057c(a)syzkaller.appspotmail.com
Link: https://lore.kernel.org/all/698a26d3.050a0220.3b3015.007d.GAE@google.com/
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Conflicts:
drivers/media/dvb-core/dmxdev.c
[context conflict]
Signed-off-by: Chen Jinghuang <chenjinghuang2(a)huawei.com>
---
drivers/media/dvb-core/dmxdev.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
index 12b7f698f562..c32adebd0e16 100644
--- a/drivers/media/dvb-core/dmxdev.c
+++ b/drivers/media/dvb-core/dmxdev.c
@@ -178,7 +178,9 @@ static int dvb_dvr_open(struct inode *inode, struct file *file)
mutex_unlock(&dmxdev->mutex);
return -ENOMEM;
}
- dvb_ringbuffer_init(&dmxdev->dvr_buffer, mem, DVR_BUFFER_SIZE);
+ dmxdev->dvr_buffer.data = mem;
+ dmxdev->dvr_buffer.size = DVR_BUFFER_SIZE;
+ dvb_ringbuffer_reset(&dmxdev->dvr_buffer);
if (dmxdev->may_do_mmap)
dvb_vb2_init(&dmxdev->dvr_vb2_ctx, "dvr",
file->f_flags & O_NONBLOCK);
--
2.34.1
Aaron Lu (7):
[Backport] sched/fair: Task based throttle time accounting
[Backport] sched/fair: Get rid of throttled_lb_pair()
[Backport] sched/fair: Propagate load for throttled cfs_rq
[Backport] sched/fair: update_cfs_group() for throttled cfs_rqs
[Backport] sched/fair: Do not balance task to a throttled cfs_rq
[Backport] sched/fair: Prevent cfs_rq from being unthrottled with zero
runtime_remaining
[Backport] sched/fair: Do not special case tasks in throttled
hierarchy
K Prateek Nayak (1):
[Backport] sched/fair: Start a cfs_rq on throttled hierarchy with PELT
clock throttled
Valentin Schneider (3):
[Backport] sched/fair: Add related data structure for task based
throttle
[Backport] sched/fair: Implement throttle task work and related
helpers
[Backport] sched/fair: Switch to task based throttle model
Wang Tao (2):
[Huawei] sched: Fix kabi broken of struct task_struct and struct
cfs_rq
[Huawei] sched: Fix kabi broken of struct cfs_rq
Zhang Qiao (2):
[Huawei] sched/fair: Use separate throttle functions for QoS
[Huawei] sched/fair: Use separate qos_throttled and qos_throttle_count
include/linux/sched.h | 8 +
kernel/sched/core.c | 5 +-
kernel/sched/fair.c | 622 ++++++++++++++++++++++++++----------------
kernel/sched/pelt.h | 4 +-
kernel/sched/sched.h | 17 +-
5 files changed, 409 insertions(+), 247 deletions(-)
--
2.18.0
[PATCH OLK-5.10] apparmor: validate DFA start states are in bounds in unpack_pdb
by Zhao Yipeng 11 Apr '26
From: Massimiliano Pellizzer <massimiliano.pellizzer(a)canonical.com>
mainline inclusion
from mainline-v7.0-rc4
commit 9063d7e2615f4a7ab321de6b520e23d370e58816
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13880
CVE: CVE-2026-23269
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
Start states are read from untrusted data and used as indexes into the
DFA state tables. The aa_dfa_next() function call in unpack_pdb() will
access dfa->tables[YYTD_ID_BASE][start], and if the start state exceeds
the number of states in the DFA, this results in an out-of-bound read.
==================================================================
BUG: KASAN: slab-out-of-bounds in aa_dfa_next+0x2a1/0x360
Read of size 4 at addr ffff88811956fb90 by task su/1097
...
Reject policies with out-of-bounds start states during unpacking
to prevent the issue.
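The shape of the added check can be sketched in isolation. The struct and field names below mimic the AppArmor ones (td_lolen is the base table's entry count) but the code is a stand-alone illustration, not the kernel's:

```c
#include <stdint.h>

/* Stand-in for the DFA base table; td_lolen is its number of entries. */
struct xtable { uint32_t td_lolen; };

/*
 * An unpacked start state is untrusted data; it is only safe to use
 * as an index if it is strictly below the table length.  Out-of-range
 * values must cause the policy to be rejected, as the patch does with
 * "goto fail".
 */
static int start_state_valid(uint32_t start, const struct xtable *base)
{
	return start < base->td_lolen;
}
```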
Fixes: ad5ff3db53c6 ("AppArmor: Add ability to load extended policy")
Reported-by: Qualys Security Advisory <qsa(a)qualys.com>
Tested-by: Salvatore Bonaccorso <carnil(a)debian.org>
Reviewed-by: Georgia Garcia <georgia.garcia(a)canonical.com>
Reviewed-by: Cengiz Can <cengiz.can(a)canonical.com>
Signed-off-by: Massimiliano Pellizzer <massimiliano.pellizzer(a)canonical.com>
Signed-off-by: John Johansen <john.johansen(a)canonical.com>
Conflicts:
security/apparmor/policy_unpack.c
[The conflict is due to commits ad596ea74e746
("apparmor: group dfa policydb unpacking") and 98b824ff8984f
("apparmor: refcount the pdb") not being merged. The first commit
changes profile->policy to *policy and moves the code to a new
function, unpack_pdb(), and the second changes *policy to *pdb.
So keep using profile->policy in this commit.]
Signed-off-by: Zhao Yipeng <zhaoyipeng5(a)huawei.com>
---
security/apparmor/policy_unpack.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
index 33fac6489077..bac5cccbf2e9 100644
--- a/security/apparmor/policy_unpack.c
+++ b/security/apparmor/policy_unpack.c
@@ -860,6 +860,12 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
if (!unpack_u32(e, &profile->policy.start[0], "start"))
/* default start state */
profile->policy.start[0] = DFA_START;
+
+ if (profile->policy.start[0] >= profile->policy.dfa->tables[YYTD_ID_BASE]->td_lolen) {
+ info = "invalid dfa start state";
+ goto fail;
+ }
+
/* setup class index */
for (i = AA_CLASS_FILE; i <= AA_CLASS_LAST; i++) {
profile->policy.start[i] =
@@ -890,6 +896,12 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
} else
profile->file.dfa = aa_get_dfa(nulldfa);
+ if (profile->file.dfa && profile->file.start >=
+ profile->file.dfa->tables[YYTD_ID_BASE]->td_lolen) {
+ info = "invalid file dfa start state";
+ goto fail;
+ }
+
if (!unpack_trans_table(e, profile)) {
info = "failed to unpack profile transition table";
goto fail;
--
2.34.1
Chen Yu (1):
sched/eevdf: Fix wakeup-preempt by checking cfs_rq->nr_running
Ingo Molnar (2):
sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight
sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and
helper functions
Peter Zijlstra (10):
sched/fair: Fix zero_vruntime tracking
sched/fair: Fix EEVDF entity placement bug causing scheduling lag
sched/fair: Adhere to place_entity() constraints
sched: Unify runtime accounting across classes
sched: Remove vruntime from trace_sched_stat_runtime()
sched: Unify more update_curr*()
sched/eevdf: Allow shorter slices to wakeup-preempt
sched/fair: Only set slice protection at pick time
sched/fair: Fix zero_vruntime tracking fix
sched/debug: Fix avg_vruntime() usage
Vincent Guittot (2):
sched/fair: Use protect_slice() instead of direct comparison
sched/fair: Fix NO_RUN_TO_PARITY case
Wang Tao (1):
sched/eevdf: Update se->vprot in reweight_entity()
Xia Fukun (1):
sched: fix kabi broken in struct sched_statistics
Zhang Qiao (1):
sched: Fix struct sched_entity kabi broken
Zicheng Qu (5):
sched: Re-evaluate scheduling when migrating queued tasks out of
throttled cgroups
sched: Fix kabi breakage of struct cfs_rq for sum_weight
sched: Fix kabi breakage of struct cfs_rq for sum_w_vruntime
sched/eevdf: Disable shorter slices to wakeup-preempt
sched/fair: Fix vruntime drift by preventing double lag scaling during
reweight
zihan zhou (1):
sched: Cancel the slice protection of the idle entity
include/linux/sched.h | 15 +-
include/trace/events/sched.h | 15 +-
kernel/sched/core.c | 4 +-
kernel/sched/deadline.c | 15 +-
kernel/sched/debug.c | 4 +-
kernel/sched/fair.c | 477 +++++++++++++++++++----------------
kernel/sched/features.h | 5 +
kernel/sched/rt.c | 15 +-
kernel/sched/sched.h | 18 +-
kernel/sched/stop_task.c | 13 +-
10 files changed, 304 insertions(+), 277 deletions(-)
--
2.34.1
From: NeilBrown <neil(a)brown.name>
stable inclusion
from stable-v5.10.248
commit ca97360860eb02e3ae4ba42c19b439a0fcecbf06
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13489
CVE: CVE-2026-22980
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 2857bd59feb63fcf40fe4baf55401baea6b4feb4 ]
Writing to v4_end_grace can race with server shutdown and result in
memory being accessed after it was freed - reclaim_str_hashtbl in
particularly.
We cannot hold nfsd_mutex across the nfsd4_end_grace() call as that is
held while client_tracking_op->init() is called and that can wait for
an upcall to nfsdcltrack which can write to v4_end_grace, resulting in a
deadlock.
nfsd4_end_grace() is also called by the laundromat work queue, and this
doesn't require locking: server shutdown will stop the work and wait
for it before freeing anything that nfsd4_end_grace() might access.
However, we must be sure that writing to v4_end_grace doesn't restart
the work item after shutdown has already waited for it. For this we
add a new flag protected by nn->client_lock. It is set only while it
is safe to make client tracking calls, and v4_end_grace only schedules
work while the flag is set, with the spinlock held.
So this patch adds an nfsd_net field, "client_tracking_active", which is
set as described. Another field, "grace_end_forced", is set when
v4_end_grace is written. Once it is set, and provided
client_tracking_active is also set, the laundromat is scheduled.
This "grace_end_forced" field bypasses other checks for whether the
grace period has finished.
This resolves a race which can result in use-after-free.
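The locking scheme can be modelled in user space with a mutex in place of nn->client_lock. This is a sketch with illustrative names (the counter stands in for mod_delayed_work()), not the kernel code: work is only scheduled while the "active" flag is still set, and both the check and the schedule happen under one lock, so a writer racing with shutdown either schedules before shutdown clears the flag (and is then waited for) or sees the flag clear and bails out.

```c
#include <stdbool.h>
#include <pthread.h>

/* Toy model: scheduling is permitted only while the subsystem is
 * marked active, with flag check and schedule under one lock. */
struct tracker {
	pthread_mutex_t lock;     /* stands in for nn->client_lock */
	bool active;              /* stands in for client_tracking_active */
	bool grace_ended;
	int scheduled;            /* stands in for mod_delayed_work() */
};

static bool force_end_grace(struct tracker *t)
{
	bool ok = false;

	pthread_mutex_lock(&t->lock);
	if (!t->grace_ended && t->active) {
		t->scheduled++;   /* safe: shutdown cannot have begun */
		ok = true;
	}
	pthread_mutex_unlock(&t->lock);
	return ok;
}

static void shutdown_tracker(struct tracker *t)
{
	pthread_mutex_lock(&t->lock);
	t->active = false;        /* later writers see this and bail out */
	pthread_mutex_unlock(&t->lock);
}
```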
Reported-by: Li Lingfeng <lilingfeng3(a)huawei.com>
Closes: https://lore.kernel.org/linux-nfs/20250623030015.2353515-1-neil@brown.name/…
Fixes: 7f5ef2e900d9 ("nfsd: add a v4_end_grace file to /proc/fs/nfsd")
Cc: stable(a)vger.kernel.org
Signed-off-by: NeilBrown <neil(a)brown.name>
Tested-by: Li Lingfeng <lilingfeng3(a)huawei.com>
Reviewed-by: Jeff Layton <jlayton(a)kernel.org>
Signed-off-by: Chuck Lever <chuck.lever(a)oracle.com>
[ Adjust context ]
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
fs/nfsd/nfs4state.c
[Commit 5aa2c4a1fe28 ("NFSD: Add a nfsd4_file_hash_remove() helper") adds
the declaration of nfsd4_file_hash_remove();
commit ea92c0768f98 ("nfsd: simplify nfsd_renew") changes the return value;
commit 438ef64bbfe4 ("NFSD: register/unregister of nfsd-client shrinker at
nfsd startup/shutdown time") adds unregister_shrinker() in
nfs4_state_shutdown_net();
commit 67ef9e5fd737 ("NFSD: add courteous server support for thread with
only delegation") moves the declaration of laundry_wq.]
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/nfsd/netns.h | 2 ++
fs/nfsd/nfs4state.c | 44 +++++++++++++++++++++++++++++++++++++++++---
fs/nfsd/nfsctl.c | 3 +--
fs/nfsd/state.h | 2 +-
4 files changed, 45 insertions(+), 6 deletions(-)
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index 8500acac09e0..ca5c386aafce 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -40,6 +40,8 @@ struct nfsd_net {
struct lock_manager nfsd4_manager;
bool grace_ended;
+ bool grace_end_forced;
+ bool client_tracking_active;
time64_t boot_time;
struct dentry *nfsd_client_dir;
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 302bbad5cdc6..ee28d5456dde 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -80,7 +80,7 @@ static u64 current_sessionid = 1;
/* forward declarations */
static bool check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner);
static void nfs4_free_ol_stateid(struct nfs4_stid *stid);
-void nfsd4_end_grace(struct nfsd_net *nn);
+static void nfsd4_end_grace(struct nfsd_net *nn);
static void _free_cpntf_state_locked(struct nfsd_net *nn, struct nfs4_cpntf_state *cps);
/* Locking: */
@@ -5346,7 +5346,7 @@ nfsd4_renew(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
return status;
}
-void
+static void
nfsd4_end_grace(struct nfsd_net *nn)
{
/* do nothing if grace period already ended */
@@ -5379,6 +5379,34 @@ nfsd4_end_grace(struct nfsd_net *nn)
*/
}
+static struct workqueue_struct *laundry_wq;
+/**
+ * nfsd4_force_end_grace - forcibly end the NFSv4 grace period
+ * @nn: network namespace for the server instance to be updated
+ *
+ * Forces bypass of normal grace period completion, then schedules
+ * the laundromat to end the grace period immediately. Does not wait
+ * for the grace period to fully terminate before returning.
+ *
+ * Return values:
+ * %true: Grace termination schedule
+ * %false: No action was taken
+ */
+bool nfsd4_force_end_grace(struct nfsd_net *nn)
+{
+ if (!nn->client_tracking_ops)
+ return false;
+ spin_lock(&nn->client_lock);
+ if (nn->grace_ended || !nn->client_tracking_active) {
+ spin_unlock(&nn->client_lock);
+ return false;
+ }
+ WRITE_ONCE(nn->grace_end_forced, true);
+ mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
+ spin_unlock(&nn->client_lock);
+ return true;
+}
+
/*
* If we've waited a lease period but there are still clients trying to
* reclaim, wait a little longer to give them a chance to finish.
@@ -5388,6 +5416,8 @@ static bool clients_still_reclaiming(struct nfsd_net *nn)
time64_t double_grace_period_end = nn->boot_time +
2 * nn->nfsd4_lease;
+ if (READ_ONCE(nn->grace_end_forced))
+ return false;
if (nn->track_reclaim_completes &&
atomic_read(&nn->nr_reclaim_complete) ==
nn->reclaim_str_hashtbl_size)
@@ -5530,7 +5560,6 @@ nfs4_laundromat(struct nfsd_net *nn)
return new_timeo;
}
-static struct workqueue_struct *laundry_wq;
static void laundromat_main(struct work_struct *);
static void
@@ -7343,6 +7372,8 @@ static int nfs4_state_create_net(struct net *net)
nn->unconf_name_tree = RB_ROOT;
nn->boot_time = ktime_get_real_seconds();
nn->grace_ended = false;
+ nn->grace_end_forced = false;
+ nn->client_tracking_active = false;
nn->nfsd4_manager.block_opens = true;
INIT_LIST_HEAD(&nn->nfsd4_manager.list);
INIT_LIST_HEAD(&nn->client_lru);
@@ -7409,6 +7440,10 @@ nfs4_state_start_net(struct net *net)
return ret;
locks_start_grace(net, &nn->nfsd4_manager);
nfsd4_client_tracking_init(net);
+ /* safe for laundromat to run now */
+ spin_lock(&nn->client_lock);
+ nn->client_tracking_active = true;
+ spin_unlock(&nn->client_lock);
if (nn->track_reclaim_completes && nn->reclaim_str_hashtbl_size == 0)
goto skip_grace;
printk(KERN_INFO "NFSD: starting %lld-second grace period (net %x)\n",
@@ -7458,6 +7493,9 @@ nfs4_state_shutdown_net(struct net *net)
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
cancel_delayed_work_sync(&nn->laundromat_work);
+ spin_lock(&nn->client_lock);
+ nn->client_tracking_active = false;
+ spin_unlock(&nn->client_lock);
locks_end_grace(&nn->nfsd4_manager);
INIT_LIST_HEAD(&reaplist);
diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 040340080b3d..e84c8cf8e2c0 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -1133,9 +1133,8 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size)
case 'Y':
case 'y':
case '1':
- if (!nn->nfsd_serv)
+ if (!nfsd4_force_end_grace(nn))
return -EBUSY;
- nfsd4_end_grace(nn);
break;
default:
return -EINVAL;
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 9eae11a9d21c..0e4f98b41364 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -683,7 +683,7 @@ static inline void get_nfs4_file(struct nfs4_file *fi)
struct nfsd_file *find_any_file(struct nfs4_file *f);
/* grace period management */
-void nfsd4_end_grace(struct nfsd_net *nn);
+bool nfsd4_force_end_grace(struct nfsd_net *nn);
/* nfs4recover operations */
extern int nfsd4_client_tracking_init(struct net *net);
--
2.52.0
From: Hongye Lin <linhongye(a)h-partners.com>
mainline inclusion
from mainline-v6.8-rc1
commit e3a649ecf8b9253cb1d05ceb085544472b06446f
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8904
CVE: NA
Reference: https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/comm…
----------------------------------------------------------------------
DDI0601 2023-09 defines a new system register FPMR (Floating Point Mode
Register) which configures the new FP8 features. Add a definition of this
register.
Qinxin Xia (18):
arm64/sysreg: Add definition for ID_AA64PFR2_EL1
arm64/sysreg: Add definition for ID_AA64ISAR3_EL1
arm64/sysreg: Add definition for ID_AA64FPFR0_EL1
arm64/sysreg: Add definition for FPMR
arm64/sysreg: Add EnFPM field for SCTLR_EL1 and HCRX_EL2
arm64/sysreg: Add LUT field for ID_AA64ISAR2_EL1
arm64/cpufeature: Hook new identification registers up to cpufeature
arm64/fpsimd: Support FEAT_FPMR
arm64/hwcap: Define hwcaps for 2023 DPISA features
kselftest/arm64: Add basic FPMR test
kselftest/arm64: Handle FPMR context in generic signal frame parser
kselftest/arm64: Add 2023 DPISA hwcap test coverage
arm64: Kconfig: Detect toolchain support for LSUI
arm64: cpufeature: add FEAT_LSUI
arm64/signal: Add FPMR signal handling
arm64/ptrace: Expose FPMR via ptrace
Fix kabi for thread struct with FPMR
Add support "arm64.nocnp" for start option
Yifan Wu (4):
arm64/hwcap: Add support for FEAT_CMPBR
kselftest/arm64: Add FEAT_CMPBR to the hwcap selftest
arm64/sysreg: Update ID_AA64ISAR2_EL1 for FEAT_CMPBR
selftest/arm64: Fix sve2p1_sigill() to hwcap test
Documentation/arch/arm64/elf_hwcaps.rst | 30 +++++
arch/arm64/Kconfig | 5 +
arch/arm64/include/asm/cpu.h | 3 +
arch/arm64/include/asm/cpufeature.h | 6 +
arch/arm64/include/asm/fpsimd.h | 2 +
arch/arm64/include/asm/hwcap.h | 10 ++
arch/arm64/include/asm/processor.h | 5 +
arch/arm64/include/asm/signal_common.h | 16 +++
arch/arm64/include/uapi/asm/hwcap.h | 10 ++
arch/arm64/include/uapi/asm/sigcontext.h | 8 ++
arch/arm64/kernel/cpufeature.c | 70 +++++++++-
arch/arm64/kernel/cpuinfo.c | 3 +
arch/arm64/kernel/fpsimd.c | 13 ++
arch/arm64/kernel/hwcap_str.h | 10 ++
arch/arm64/kernel/idreg-override.c | 11 ++
arch/arm64/kernel/ptrace.c | 42 ++++++
arch/arm64/kernel/signal.c | 46 +++++++
arch/arm64/tools/cpucaps | 2 +
arch/arm64/tools/sysreg | 122 +++++++++++++++++-
include/uapi/linux/elf.h | 1 +
tools/testing/selftests/arm64/abi/hwcap.c | 120 ++++++++++++++++-
.../testing/selftests/arm64/signal/.gitignore | 1 +
.../arm64/signal/testcases/fpmr_siginfo.c | 82 ++++++++++++
.../arm64/signal/testcases/testcases.c | 8 ++
.../arm64/signal/testcases/testcases.h | 1 +
25 files changed, 621 insertions(+), 6 deletions(-)
create mode 100644 tools/testing/selftests/arm64/signal/testcases/fpmr_siginfo.c
--
2.33.0
Chen Yu (1):
sched/eevdf: Fix wakeup-preempt by checking cfs_rq->nr_running
Ingo Molnar (2):
sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight
sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and
helper functions
Peter Zijlstra (10):
sched/fair: Fix zero_vruntime tracking
sched/fair: Fix EEVDF entity placement bug causing scheduling lag
sched/fair: Adhere to place_entity() constraints
sched: Unify runtime accounting across classes
sched: Remove vruntime from trace_sched_stat_runtime()
sched: Unify more update_curr*()
sched/eevdf: Allow shorter slices to wakeup-preempt
sched/fair: Only set slice protection at pick time
sched/fair: Fix zero_vruntime tracking fix
sched/debug: Fix avg_vruntime() usage
Vincent Guittot (2):
sched/fair: Use protect_slice() instead of direct comparison
sched/fair: Fix NO_RUN_TO_PARITY case
Wang Tao (1):
sched/eevdf: Update se->vprot in reweight_entity()
Zhang Qiao (1):
sched: Fix struct sched_entity kabi broken
Zicheng Qu (5):
sched: Re-evaluate scheduling when migrating queued tasks out of
throttled cgroups
sched: Fix kabi breakage of struct cfs_rq for sum_weight
sched: Fix kabi breakage of struct cfs_rq for sum_w_vruntime
sched/eevdf: Disable shorter slices to wakeup-preempt
sched/fair: Fix vruntime drift by preventing double lag scaling during
reweight
zihan zhou (1):
sched: Cancel the slice protection of the idle entity
include/linux/sched.h | 15 +-
include/trace/events/sched.h | 15 +-
kernel/sched/core.c | 4 +-
kernel/sched/deadline.c | 15 +-
kernel/sched/debug.c | 4 +-
kernel/sched/fair.c | 477 +++++++++++++++++++----------------
kernel/sched/features.h | 5 +
kernel/sched/rt.c | 15 +-
kernel/sched/sched.h | 18 +-
kernel/sched/stop_task.c | 13 +-
10 files changed, 304 insertions(+), 277 deletions(-)
--
2.34.1
Jinjiang Tu (3):
arm64: mm: hardcode domain info and add get_domain_cpumask()
arm64: mm: Track CPUs that a task has run on for TLBID optimization
arm64: tlbflush: Optimize flush_tlb_mm() by using TLBID
Marc Zyngier (2):
arm64: cpufeature: Add ID_AA64MMFR4_EL1 handling
arm64: sysreg: Add layout for ID_AA64MMFR4_EL1
Zeng Heng (1):
arm64: cpufeature: Add TLBID (Domain-based TLB Invalidation) detection
arch/arm64/Kconfig | 12 ++
arch/arm64/include/asm/cpu.h | 1 +
arch/arm64/include/asm/cpufeature.h | 6 +
arch/arm64/include/asm/mmu_context.h | 3 +
arch/arm64/include/asm/tlbflush.h | 74 ++++++++-
arch/arm64/kernel/cpufeature.c | 17 ++
arch/arm64/kernel/cpuinfo.c | 1 +
arch/arm64/mm/context.c | 224 ++++++++++++++++++++++++++-
arch/arm64/tools/cpucaps | 1 +
arch/arm64/tools/sysreg | 41 +++++
10 files changed, 375 insertions(+), 5 deletions(-)
--
2.25.1
Introduce dmem cgroup
Chen Ridong (3):
cgroup/dmem: fix NULL pointer dereference when setting max
cgroup/dmem: avoid rcu warning when unregister region
cgroup/dmem: avoid pool UAF
Friedrich Vock (1):
cgroup/dmem: Don't open-code css_for_each_descendant_pre
Geert Uytterhoeven (1):
cgroup/rdma: Drop bogus PAGE_COUNTER select
Jiapeng Chong (1):
kernel/cgroup: Remove the unused variable climit
Liu Kai (2):
cgroup/dmem: reuse SUBSYS for dmem and devices to preserve KABI
dmem: enable CONFIG_CGROUP_DMEM in arm64/x86 defconfig
Maarten Lankhorst (2):
mm/page_counter: move calculating protection values to page_counter
kernel/cgroup: Add "dmem" memory accounting cgroup
Maxime Ripard (3):
cgroup/dmem: Select PAGE_COUNTER
cgroup/dmem: Fix parameters documentation
doc/cgroup: Fix title underline length
Roman Gushchin (1):
mm: page_counters: put page_counter_calculate_protection() under
CONFIG_MEMCG
Documentation/admin-guide/cgroup-v2.rst | 58 +-
Documentation/core-api/cgroup.rst | 9 +
Documentation/core-api/index.rst | 1 +
Documentation/gpu/drm-compute.rst | 54 ++
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
include/linux/cgroup_dmem.h | 71 ++
include/linux/cgroup_subsys.h | 4 +-
include/linux/device_cgroup.h | 20 +
include/linux/page_counter.h | 10 +
init/Kconfig | 10 +
kernel/cgroup/Makefile | 1 +
kernel/cgroup/dmem.c | 1025 +++++++++++++++++++++++
mm/memcontrol.c | 154 +---
mm/page_counter.c | 175 ++++
security/device_cgroup.c | 63 +-
16 files changed, 1492 insertions(+), 165 deletions(-)
create mode 100644 Documentation/core-api/cgroup.rst
create mode 100644 Documentation/gpu/drm-compute.rst
create mode 100644 include/linux/cgroup_dmem.h
create mode 100644 kernel/cgroup/dmem.c
--
2.34.1
[PATCH OLK-6.6] serial: core: fix infinite loop in handle_tx() for PORT_UNKNOWN
by Gu Bowen 09 Apr '26
From: Jiayuan Chen <jiayuan.chen(a)shopee.com>
mainline inclusion
from mainline-v7.0-rc5
commit 455ce986fa356ff43a43c0d363ba95fa152f21d5
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14111
CVE: CVE-2026-23472
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
uart_write_room() and uart_write() behave inconsistently when
xmit_buf is NULL (which happens for PORT_UNKNOWN ports that were
never properly initialized):
- uart_write_room() returns kfifo_avail() which can be > 0
- uart_write() checks xmit_buf and returns 0 if NULL
This inconsistency causes an infinite loop in drivers that rely on
tty_write_room() to determine if they can write:
	while (tty_write_room(tty) > 0) {
		written = tty->ops->write(...);
		/* written is always 0, loop never exits */
	}
For example, caif_serial's handle_tx() enters an infinite loop when
used with PORT_UNKNOWN serial ports, causing system hangs.
Fix by making uart_write_room() also check xmit_buf and return 0 if
it's NULL, consistent with uart_write().
Reproducer: https://gist.github.com/mrpre/d9a694cc0e19828ee3bc3b37983fde13
Signed-off-by: Jiayuan Chen <jiayuan.chen(a)shopee.com>
Cc: stable <stable(a)kernel.org>
Link: https://patch.msgid.link/20260204074327.226165-1-jiayuan.chen@linux.dev
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
drivers/tty/serial/serial_core.c
[Context conflicts.]
Signed-off-by: Gu Bowen <gubowen5(a)huawei.com>
---
drivers/tty/serial/serial_core.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
index 7ce9c87750da..bc3241d47665 100644
--- a/drivers/tty/serial/serial_core.c
+++ b/drivers/tty/serial/serial_core.c
@@ -643,7 +643,10 @@ static unsigned int uart_write_room(struct tty_struct *tty)
unsigned int ret;
port = uart_port_lock(state, flags);
- ret = uart_circ_chars_free(&state->xmit);
+ if (!state->xmit.buf)
+ ret = 0;
+ else
+ ret = uart_circ_chars_free(&state->xmit);
uart_port_unlock(port, flags);
return ret;
}
--
2.43.0
Introduce dmem cgroup
Chen Ridong (3):
cgroup/dmem: fix NULL pointer dereference when setting max
cgroup/dmem: avoid rcu warning when unregister region
cgroup/dmem: avoid pool UAF
Friedrich Vock (1):
cgroup/dmem: Don't open-code css_for_each_descendant_pre
Geert Uytterhoeven (1):
cgroup/rdma: Drop bogus PAGE_COUNTER select
Jiapeng Chong (1):
kernel/cgroup: Remove the unused variable climit
Liu Kai (1):
cgroup/dmem: reuse SUBSYS for dmem and devices to preserve KABI
Maarten Lankhorst (2):
mm/page_counter: move calculating protection values to page_counter
kernel/cgroup: Add "dmem" memory accounting cgroup
Maxime Ripard (3):
cgroup/dmem: Select PAGE_COUNTER
cgroup/dmem: Fix parameters documentation
doc/cgroup: Fix title underline length
Roman Gushchin (1):
mm: page_counters: put page_counter_calculate_protection() under
CONFIG_MEMCG
Documentation/admin-guide/cgroup-v2.rst | 58 +-
Documentation/core-api/cgroup.rst | 9 +
Documentation/core-api/index.rst | 1 +
Documentation/gpu/drm-compute.rst | 54 ++
include/linux/cgroup_dmem.h | 71 ++
include/linux/cgroup_subsys.h | 4 +-
include/linux/device_cgroup.h | 20 +
include/linux/page_counter.h | 10 +
init/Kconfig | 10 +
kernel/cgroup/Makefile | 1 +
kernel/cgroup/dmem.c | 1025 +++++++++++++++++++++++
mm/memcontrol.c | 154 +---
mm/page_counter.c | 175 ++++
security/device_cgroup.c | 63 +-
14 files changed, 1490 insertions(+), 165 deletions(-)
create mode 100644 Documentation/core-api/cgroup.rst
create mode 100644 Documentation/gpu/drm-compute.rst
create mode 100644 include/linux/cgroup_dmem.h
create mode 100644 kernel/cgroup/dmem.c
--
2.34.1
From: Johan Hovold <johan(a)kernel.org>
stable inclusion
from stable-v6.6.130
commit f13100b1f5f111989f0750540a795fdef47492af
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14114
CVE: CVE-2026-23475
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit dee0774bbb2abb172e9069ce5ffef579b12b3ae9 upstream.
The controller per-cpu statistics are not allocated until after the
controller has been registered with driver core, which leaves a window
where accessing the sysfs attributes can trigger a NULL-pointer
dereference.
Fix this by moving the statistics allocation to controller allocation
while tying its lifetime to that of the controller (rather than using
implicit devres).
Fixes: 6598b91b5ac3 ("spi: spi.c: Convert statistics to per-cpu u64_stats_t")
Cc: stable(a)vger.kernel.org # 6.0
Cc: David Jander <david(a)protonic.nl>
Signed-off-by: Johan Hovold <johan(a)kernel.org>
Link: https://patch.msgid.link/20260312151817.32100-3-johan@kernel.org
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Lin Ruifeng <linruifeng4(a)huawei.com>
---
drivers/spi/spi.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 66f694457a8b..2ca4ea45e3b2 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -2773,6 +2773,8 @@ static void spi_controller_release(struct device *dev)
struct spi_controller *ctlr;
ctlr = container_of(dev, struct spi_controller, dev);
+
+ free_percpu(ctlr->pcpu_statistics);
kfree(ctlr);
}
@@ -2924,6 +2926,12 @@ struct spi_controller *__spi_alloc_controller(struct device *dev,
if (!ctlr)
return NULL;
+ ctlr->pcpu_statistics = spi_alloc_pcpu_stats(NULL);
+ if (!ctlr->pcpu_statistics) {
+ kfree(ctlr);
+ return NULL;
+ }
+
device_initialize(&ctlr->dev);
INIT_LIST_HEAD(&ctlr->queue);
spin_lock_init(&ctlr->queue_lock);
@@ -3212,13 +3220,6 @@ int spi_register_controller(struct spi_controller *ctlr)
if (status)
goto del_ctrl;
}
- /* Add statistics */
- ctlr->pcpu_statistics = spi_alloc_pcpu_stats(dev);
- if (!ctlr->pcpu_statistics) {
- dev_err(dev, "Error allocating per-cpu statistics\n");
- status = -ENOMEM;
- goto destroy_queue;
- }
mutex_lock(&board_lock);
list_add_tail(&ctlr->list, &spi_controller_list);
@@ -3231,8 +3232,6 @@ int spi_register_controller(struct spi_controller *ctlr)
acpi_register_spi_devices(ctlr);
return status;
-destroy_queue:
- spi_destroy_queue(ctlr);
del_ctrl:
device_del(&ctlr->dev);
free_bus_id:
--
2.43.0
[PATCH OLK-6.6] netfilter: nf_tables: Fix null ptr dereference of nft_setelem_remove
by Dong Chenchen 09 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14141
CVE: CVE-2026-23272
--------------------------------
The initialization of elem is missing in nft_add_set_elem(), which
leads to a null-ptr-deref in nft_setelem_remove() as shown below.
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000014
Call trace:
nft_setelem_remove+0x28/0xe0 [nf_tables]
__nf_tables_abort+0x5f8/0xbe8 [nf_tables]
nf_tables_abort+0x64/0x1c8 [nf_tables]
nfnetlink_rcv_batch+0x2d8/0x850 [nfnetlink]
nfnetlink_rcv+0x168/0x1a8 [nfnetlink]
netlink_unicast_kernel+0x7c/0x160
netlink_unicast+0x1ac/0x250
netlink_sendmsg+0x21c/0x458
__sock_sendmsg+0x4c/0xa8
____sys_sendmsg+0x280/0x300
___sys_sendmsg+0x8c/0xf8
__sys_sendmsg+0x74/0xe0
__arm64_sys_sendmsg+0x2c/0x40
invoke_syscall+0x50/0x128
el0_svc_common.constprop.0+0xc8/0xf0
do_el0_svc+0x48/0x78
el0_slow_syscall+0x44/0x1b8
el0t_64_sync_handler+0x100/0x130
el0t_64_sync+0x188/0x190
Initialize elem to fix it.
Fixes: e7a6bffde0fe ("netfilter: nf_tables: unconditionally bump set->nelems before insertion")
Signed-off-by: Dong Chenchen <dongchenchen2(a)huawei.com>
---
net/netfilter/nf_tables_api.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index d8057efc777d..ef4f6f8c7a3f 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -7144,6 +7144,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
goto err_element_clash;
}
+ nft_trans_elem(trans) = elem;
nft_trans_commit_list_add_tail(ctx->net, trans);
return set_full ? -ENFILE : 0;
--
2.25.1
[PATCH OLK-6.6] wifi: mac80211: always free skb on ieee80211_tx_prepare_skb() failure
by Yi Yang 09 Apr '26
From: Felix Fietkau <nbd(a)nbd.name>
mainline inclusion
from mainline-v7.0-rc5
commit d5ad6ab61cbd89afdb60881f6274f74328af3ee9
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14084
CVE: CVE-2026-23444
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
ieee80211_tx_prepare_skb() has three error paths, but only two of them
free the skb. The first error path (ieee80211_tx_prepare() returning
TX_DROP) does not free it, while invoke_tx_handlers() failure and the
fragmentation check both do.
Add kfree_skb() to the first error path so all three are consistent,
and remove the now-redundant frees in callers (ath9k, mt76,
mac80211_hwsim) to avoid double-free.
Document the skb ownership guarantee in the function's kdoc.
Signed-off-by: Felix Fietkau <nbd(a)nbd.name>
Link: https://patch.msgid.link/20260314065455.2462900-1-nbd@nbd.name
Fixes: 06be6b149f7e ("mac80211: add ieee80211_tx_prepare_skb() helper function")
Signed-off-by: Johannes Berg <johannes.berg(a)intel.com>
Conflicts:
drivers/net/wireless/mediatek/mt76/scan.c
[Commit 31083e38548f ("wifi: mt76: add code for emulating hardware
scanning") was not merged. The problematic function is not introduced.]
include/net/mac80211.h
[Commit 0e9824e0d59b2 ("wifi: mac80211: Add missing return value
documentation") was not merged. Context conflicts.]
Signed-off-by: Yi Yang <yiyang13(a)huawei.com>
---
drivers/net/wireless/ath/ath9k/channel.c | 6 ++----
drivers/net/wireless/virtual/mac80211_hwsim.c | 1 -
include/net/mac80211.h | 4 ++++
net/mac80211/tx.c | 4 +++-
4 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/drivers/net/wireless/ath/ath9k/channel.c b/drivers/net/wireless/ath/ath9k/channel.c
index 571062f2e82a..ba8ec5112afe 100644
--- a/drivers/net/wireless/ath/ath9k/channel.c
+++ b/drivers/net/wireless/ath/ath9k/channel.c
@@ -1011,7 +1011,7 @@ static void ath_scan_send_probe(struct ath_softc *sc,
skb_set_queue_mapping(skb, IEEE80211_AC_VO);
if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, NULL))
- goto error;
+ return;
txctl.txq = sc->tx.txq_map[IEEE80211_AC_VO];
if (ath_tx_start(sc->hw, skb, &txctl))
@@ -1124,10 +1124,8 @@ ath_chanctx_send_vif_ps_frame(struct ath_softc *sc, struct ath_vif *avp,
skb->priority = 7;
skb_set_queue_mapping(skb, IEEE80211_AC_VO);
- if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, &sta)) {
- dev_kfree_skb_any(skb);
+ if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, &sta))
return false;
- }
break;
default:
return false;
diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
index 1214e7dcc812..bf12ff0ab06a 100644
--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
+++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
@@ -2892,7 +2892,6 @@ static void hw_scan_work(struct work_struct *work)
hwsim->tmp_chan->band,
NULL)) {
rcu_read_unlock();
- kfree_skb(probe);
continue;
}
diff --git a/include/net/mac80211.h b/include/net/mac80211.h
index adaa1b2323d2..85d785060e76 100644
--- a/include/net/mac80211.h
+++ b/include/net/mac80211.h
@@ -7032,6 +7032,10 @@ void ieee80211_report_wowlan_wakeup(struct ieee80211_vif *vif,
* @band: the band to transmit on
* @sta: optional pointer to get the station to send the frame to
*
+ * Return: %true if the skb was prepared, %false otherwise.
+ * On failure, the skb is freed by this function; callers must not
+ * free it again.
+ *
* Note: must be called under RCU lock
*/
bool ieee80211_tx_prepare_skb(struct ieee80211_hw *hw,
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 7eddcb6f9645..2a708132320c 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -1911,8 +1911,10 @@ bool ieee80211_tx_prepare_skb(struct ieee80211_hw *hw,
struct ieee80211_tx_data tx;
struct sk_buff *skb2;
- if (ieee80211_tx_prepare(sdata, &tx, NULL, skb) == TX_DROP)
+ if (ieee80211_tx_prepare(sdata, &tx, NULL, skb) == TX_DROP) {
+ kfree_skb(skb);
return false;
+ }
info->band = band;
info->control.vif = vif;
--
2.25.1
---
tools/testing/sharepool/Makefile | 36 +
tools/testing/sharepool/libs/Makefile | 8 +
.../sharepool/libs/default_args_main.c | 87 ++
tools/testing/sharepool/libs/default_main.c | 85 ++
tools/testing/sharepool/libs/sem_use.c | 116 ++
tools/testing/sharepool/libs/sem_use.h | 13 +
tools/testing/sharepool/libs/sharepool_lib.c | 218 ++++
tools/testing/sharepool/libs/sharepool_lib.h | 534 ++++++++
tools/testing/sharepool/libs/test_lib.h | 324 +++++
tools/testing/sharepool/module/Makefile | 14 +
.../sharepool/module/check_sharepool_alloc.c | 132 ++
.../sharepool/module/check_sharepool_fault.c | 63 +
.../testing/sharepool/module/sharepool_dev.c | 1130 +++++++++++++++++
.../testing/sharepool/module/sharepool_dev.h | 149 +++
tools/testing/sharepool/test.sh | 55 +
tools/testing/sharepool/test_end.sh | 8 +
tools/testing/sharepool/test_loop.sh | 35 +
tools/testing/sharepool/test_prepare.sh | 8 +
tools/testing/sharepool/testcase/Makefile | 12 +
.../sharepool/testcase/api_test/Makefile | 14 +
.../sharepool/testcase/api_test/api_test.sh | 15 +
.../api_test/is_sharepool_addr/Makefile | 13 +
.../test_is_sharepool_addr.c | 90 ++
.../testcase/api_test/sp_alloc/Makefile | 13 +
.../api_test/sp_alloc/test_sp_alloc.c | 543 ++++++++
.../api_test/sp_alloc/test_sp_alloc2.c | 131 ++
.../api_test/sp_alloc/test_sp_alloc3.c | 147 +++
.../api_test/sp_alloc/test_spa_error.c | 109 ++
.../api_test/sp_alloc_nodemask/Makefile | 13 +
.../sp_alloc_nodemask/start_vm_test_16.sh | 49 +
.../sp_alloc_nodemask/start_vm_test_4.sh | 25 +
.../sp_alloc_nodemask/test_nodemask.c | 782 ++++++++++++
.../api_test/sp_config_dvpp_range/Makefile | 13 +
.../test_sp_config_dvpp_range.c | 367 ++++++
.../test_sp_multi_numa_node.c | 289 +++++
.../testcase/api_test/sp_free/Makefile | 13 +
.../testcase/api_test/sp_free/test_sp_free.c | 127 ++
.../api_test/sp_group_add_task/Makefile | 13 +
.../test_sp_group_add_task.c | 568 +++++++++
.../test_sp_group_add_task2.c | 254 ++++
.../test_sp_group_add_task3.c | 250 ++++
.../test_sp_group_add_task4.c | 148 +++
.../test_sp_group_add_task5.c | 113 ++
.../api_test/sp_group_del_task/Makefile | 13 +
.../test_sp_group_del_task.c | 1083 ++++++++++++++++
.../api_test/sp_group_id_by_pid/Makefile | 13 +
.../test_sp_group_id_by_pid.c | 179 +++
.../test_sp_group_id_by_pid2.c | 318 +++++
.../api_test/sp_id_of_current/Makefile | 13 +
.../sp_id_of_current/test_sp_id_of_current.c | 112 ++
.../api_test/sp_make_share_k2u/Makefile | 13 +
.../test_sp_make_share_k2u.c | 624 +++++++++
.../test_sp_make_share_k2u2.c | 361 ++++++
.../api_test/sp_make_share_u2k/Makefile | 13 +
.../test_sp_make_share_u2k.c | 307 +++++
.../testcase/api_test/sp_numa_maps/Makefile | 13 +
.../api_test/sp_numa_maps/test_sp_numa_maps.c | 164 +++
.../testcase/api_test/sp_reg_hpage/Makefile | 13 +
.../api_test/sp_reg_hpage/test_sp_hpage_reg.c | 44 +
.../test_sp_hpage_reg_after_alloc.c | 84 ++
.../sp_reg_hpage/test_sp_hpage_reg_exec.c | 82 ++
.../testcase/api_test/sp_unshare/Makefile | 13 +
.../api_test/sp_unshare/test_sp_unshare.c | 394 ++++++
.../sp_walk_page_range_and_free/Makefile | 13 +
.../test_sp_walk_page_range_and_free.c | 339 +++++
.../testcase/dts_bugfix_test/Makefile | 15 +
.../dts_bugfix_test/dts_bugfix_test.sh | 43 +
.../test_01_coredump_k2u_alloc.c | 603 +++++++++
.../dts_bugfix_test/test_02_spg_not_alive.c | 166 +++
.../dts_bugfix_test/test_03_hugepage_rsvd.c | 84 ++
.../dts_bugfix_test/test_04_spg_add_del.c | 100 ++
.../dts_bugfix_test/test_05_cgroup_limit.c | 76 ++
.../testcase/dts_bugfix_test/test_06_clone.c | 176 +++
.../dts_bugfix_test/test_08_addr_offset.c | 156 +++
.../dts_bugfix_test/test_09_spg_del_exit.c | 150 +++
.../test_10_walk_page_range_AA_lock.c | 124 ++
.../dts_bugfix_test/test_dvpp_readonly.c | 71 ++
.../sharepool/testcase/function_test/Makefile | 36 +
.../testcase/function_test/function_test.sh | 32 +
.../test_alloc_free_two_process.c | 303 +++++
.../function_test/test_alloc_readonly.c | 588 +++++++++
.../function_test/test_dvpp_multi_16G_alloc.c | 690 ++++++++++
.../test_dvpp_multi_16G_k2task.c | 604 +++++++++
.../function_test/test_dvpp_pass_through.c | 191 +++
.../function_test/test_dvpp_readonly.c | 147 +++
.../test_hugetlb_alloc_hugepage.c | 113 ++
.../testcase/function_test/test_k2u.c | 804 ++++++++++++
.../test_mm_mapped_to_multi_groups.c | 435 +++++++
.../function_test/test_non_dvpp_group.c | 167 +++
.../testcase/function_test/test_sp_ro.c | 719 +++++++++++
.../function_test/test_two_user_process.c | 626 +++++++++
.../testcase/function_test/test_u2k.c | 490 +++++++
.../sharepool/testcase/generate_list.sh | 46 +
.../testcase/performance_test/Makefile | 17 +
.../performance_test/performance_test.sh | 5 +
.../performance_test/test_perf_process_kill.c | 174 +++
.../performance_test/test_perf_sp_add_group.c | 375 ++++++
.../performance_test/test_perf_sp_alloc.c | 618 +++++++++
.../performance_test/test_perf_sp_k2u.c | 860 +++++++++++++
.../testcase/reliability_test/Makefile | 11 +
.../reliability_test/coredump/Makefile | 13 +
.../reliability_test/coredump/test_coredump.c | 581 +++++++++
.../coredump/test_coredump2.c | 202 +++
.../coredump/test_coredump_k2u_alloc.c | 562 ++++++++
.../reliability_test/fragment/Makefile | 13 +
.../fragment/test_external_fragmentation.c | 37 +
.../test_external_fragmentation_trigger.c | 58 +
.../reliability_test/k2u_u2k/Makefile | 13 +
.../k2u_u2k/test_k2u_and_kill.c | 276 ++++
.../k2u_u2k/test_k2u_unshare.c | 188 +++
.../k2u_u2k/test_malloc_u2k.c | 187 +++
.../k2u_u2k/test_u2k_and_kill.c | 155 +++
.../reliability_test/kthread/Makefile | 13 +
.../kthread/test_add_strange_task.c | 46 +
.../kthread/test_del_kthread.c | 61 +
.../testcase/reliability_test/others/Makefile | 13 +
.../reliability_test/others/test_judge_addr.c | 104 ++
.../others/test_kill_sp_process.c | 430 +++++++
.../reliability_test/others/test_kthread.c | 195 +++
.../others/test_mmap_sp_address.c | 223 ++++
.../others/test_notifier_block.c | 101 ++
.../reliability_test/reliability_test.sh | 51 +
.../reliability_test/sp_add_group/Makefile | 13 +
.../sp_add_group/test_add_exiting_task.c | 61 +
.../sp_add_group/test_add_group1.c | 118 ++
.../sp_add_group/test_add_strange_task.c | 46 +
.../reliability_test/sp_unshare/Makefile | 13 +
.../sp_unshare/test_unshare1.c | 325 +++++
.../sp_unshare/test_unshare2.c | 202 +++
.../sp_unshare/test_unshare3.c | 243 ++++
.../sp_unshare/test_unshare4.c | 516 ++++++++
.../sp_unshare/test_unshare5.c | 185 +++
.../sp_unshare/test_unshare6.c | 93 ++
.../sp_unshare/test_unshare7.c | 159 +++
.../sp_unshare/test_unshare_kill.c | 150 +++
.../testing/sharepool/testcase/remove_list.sh | 22 +
.../sharepool/testcase/scenario_test/Makefile | 15 +
.../testcase/scenario_test/scenario_test.sh | 45 +
.../test_auto_check_statistics.c | 338 +++++
.../scenario_test/test_dfx_heavy_load.c | 143 +++
.../scenario_test/test_dvpp_16g_limit.c | 68 +
.../testcase/scenario_test/test_failure.c | 630 +++++++++
.../testcase/scenario_test/test_hugepage.c | 231 ++++
.../scenario_test/test_hugepage_setting.sh | 51 +
.../scenario_test/test_max_50000_groups.c | 138 ++
.../testcase/scenario_test/test_oom.c | 135 ++
.../scenario_test/test_proc_sp_group_state.c | 170 +++
.../scenario_test/test_vmalloc_cgroup.c | 65 +
.../sharepool/testcase/stress_test/Makefile | 15 +
.../stress_test/sp_ro_fault_injection.sh | 21 +
.../testcase/stress_test/stress_test.sh | 47 +
.../stress_test/test_alloc_add_and_kill.c | 347 +++++
.../stress_test/test_alloc_free_two_process.c | 303 +++++
.../stress_test/test_concurrent_debug.c | 359 ++++++
.../testcase/stress_test/test_mult_u2k.c | 514 ++++++++
.../test_sharepool_enhancement_stress_cases.c | 692 ++++++++++
.../stress_test/test_u2k_add_and_kill.c | 358 ++++++
.../sharepool/testcase/test_all/Makefile | 8 +
.../sharepool/testcase/test_all/test_all.c | 285 +++++
.../testcase/test_mult_process/Makefile | 16 +
.../mult_add_group_test/Makefile | 13 +
.../test_add_multi_cases.c | 255 ++++
.../test_alloc_add_and_kill.c | 347 +++++
.../test_max_group_per_process.c | 94 ++
.../test_mult_alloc_and_add_group.c | 138 ++
.../test_mult_process_thread_exit.c | 498 ++++++++
.../test_mult_thread_add_group.c | 220 ++++
.../test_u2k_add_and_kill.c | 358 ++++++
.../mult_debug_test/Makefile | 13 +
.../test_add_group_and_print.c | 182 +++
.../mult_debug_test/test_concurrent_debug.c | 359 ++++++
.../mult_debug_test/test_debug_loop.c | 43 +
.../test_proc_interface_process.c | 636 ++++++++++
.../mult_debug_test/test_statistics_stress.c | 302 +++++
.../test_mult_process/mult_k2u_test/Makefile | 13 +
.../mult_k2u_test/test_mult_k2u.c | 855 +++++++++++++
.../mult_k2u_test/test_mult_pass_through.c | 405 ++++++
.../mult_k2u_test/test_mult_thread_k2u.c | 197 +++
.../test_mult_process/mult_u2k_test/Makefile | 13 +
.../mult_u2k_test/test_mult_u2k.c | 514 ++++++++
.../mult_u2k_test/test_mult_u2k3.c | 314 +++++
.../mult_u2k_test/test_mult_u2k4.c | 310 +++++
.../test_mult_process/stress_test/Makefile | 13 +
.../stress_test/test_alloc_free_two_process.c | 303 +++++
.../test_mult_process/stress_test/test_fuzz.c | 543 ++++++++
.../stress_test/test_mult_proc_interface.c | 701 ++++++++++
.../test_mult_process/test_mult_process.sh | 53 +
.../test_mult_process/test_proc_interface.sh | 19 +
188 files changed, 39522 insertions(+)
create mode 100644 tools/testing/sharepool/Makefile
create mode 100644 tools/testing/sharepool/libs/Makefile
create mode 100644 tools/testing/sharepool/libs/default_args_main.c
create mode 100644 tools/testing/sharepool/libs/default_main.c
create mode 100644 tools/testing/sharepool/libs/sem_use.c
create mode 100644 tools/testing/sharepool/libs/sem_use.h
create mode 100644 tools/testing/sharepool/libs/sharepool_lib.c
create mode 100644 tools/testing/sharepool/libs/sharepool_lib.h
create mode 100644 tools/testing/sharepool/libs/test_lib.h
create mode 100644 tools/testing/sharepool/module/Makefile
create mode 100644 tools/testing/sharepool/module/check_sharepool_alloc.c
create mode 100644 tools/testing/sharepool/module/check_sharepool_fault.c
create mode 100644 tools/testing/sharepool/module/sharepool_dev.c
create mode 100644 tools/testing/sharepool/module/sharepool_dev.h
create mode 100755 tools/testing/sharepool/test.sh
create mode 100755 tools/testing/sharepool/test_end.sh
create mode 100755 tools/testing/sharepool/test_loop.sh
create mode 100755 tools/testing/sharepool/test_prepare.sh
create mode 100644 tools/testing/sharepool/testcase/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/api_test/api_test.sh
create mode 100644 tools/testing/sharepool/testcase/api_test/is_sharepool_addr/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/is_sharepool_addr/test_is_sharepool_addr.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc2.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc3.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/test_spa_error.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/Makefile
create mode 100755 tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_16.sh
create mode 100755 tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_4.sh
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/test_nodemask.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_config_dvpp_range.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_multi_numa_node.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_free/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_free/test_sp_free.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task2.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task3.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task4.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task5.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_del_task/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_del_task/test_sp_group_del_task.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid2.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_id_of_current/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_id_of_current/test_sp_id_of_current.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u2.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/test_sp_make_share_u2k.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_numa_maps/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_numa_maps/test_sp_numa_maps.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_reg_hpage/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_after_alloc.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_exec.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_unshare/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_unshare/test_sp_unshare.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/test_sp_walk_page_range_and_free.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/dts_bugfix_test/dts_bugfix_test.sh
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_01_coredump_k2u_alloc.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_02_spg_not_alive.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_03_hugepage_rsvd.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_04_spg_add_del.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_05_cgroup_limit.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_06_clone.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_08_addr_offset.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_09_spg_del_exit.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_10_walk_page_range_AA_lock.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_dvpp_readonly.c
create mode 100644 tools/testing/sharepool/testcase/function_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/function_test/function_test.sh
create mode 100644 tools/testing/sharepool/testcase/function_test/test_alloc_free_two_process.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_alloc_readonly.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_alloc.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_k2task.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_dvpp_pass_through.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_dvpp_readonly.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_hugetlb_alloc_hugepage.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_k2u.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_mm_mapped_to_multi_groups.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_non_dvpp_group.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_sp_ro.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_two_user_process.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_u2k.c
create mode 100755 tools/testing/sharepool/testcase/generate_list.sh
create mode 100644 tools/testing/sharepool/testcase/performance_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/performance_test/performance_test.sh
create mode 100644 tools/testing/sharepool/testcase/performance_test/test_perf_process_kill.c
create mode 100644 tools/testing/sharepool/testcase/performance_test/test_perf_sp_add_group.c
create mode 100644 tools/testing/sharepool/testcase/performance_test/test_perf_sp_alloc.c
create mode 100644 tools/testing/sharepool/testcase/performance_test/test_perf_sp_k2u.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/coredump/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump2.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump_k2u_alloc.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/fragment/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation_trigger.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_unshare.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_malloc_u2k.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_u2k_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/kthread/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/kthread/test_add_strange_task.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/kthread/test_del_kthread.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_judge_addr.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_kill_sp_process.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_kthread.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_mmap_sp_address.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_notifier_block.c
create mode 100755 tools/testing/sharepool/testcase/reliability_test/reliability_test.sh
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_add_group/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_exiting_task.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_group1.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_strange_task.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare1.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare2.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare3.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare4.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare5.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare6.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare7.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare_kill.c
create mode 100755 tools/testing/sharepool/testcase/remove_list.sh
create mode 100644 tools/testing/sharepool/testcase/scenario_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/scenario_test/scenario_test.sh
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_auto_check_statistics.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_dfx_heavy_load.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_dvpp_16g_limit.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_failure.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_hugepage.c
create mode 100755 tools/testing/sharepool/testcase/scenario_test/test_hugepage_setting.sh
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_max_50000_groups.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_oom.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_proc_sp_group_state.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_vmalloc_cgroup.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/stress_test/sp_ro_fault_injection.sh
create mode 100755 tools/testing/sharepool/testcase/stress_test/stress_test.sh
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_alloc_add_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_alloc_free_two_process.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_concurrent_debug.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_mult_u2k.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_sharepool_enhancement_stress_cases.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_u2k_add_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/test_all/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_all/test_all.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_add_multi_cases.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_alloc_add_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_max_group_per_process.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_alloc_and_add_group.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_process_thread_exit.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_thread_add_group.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_u2k_add_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_add_group_and_print.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_concurrent_debug.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_debug_loop.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_proc_interface_process.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_statistics_stress.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_k2u.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_pass_through.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_thread_k2u.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k3.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k4.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/stress_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/stress_test/test_alloc_free_two_process.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/stress_test/test_fuzz.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/stress_test/test_mult_proc_interface.c
create mode 100755 tools/testing/sharepool/testcase/test_mult_process/test_mult_process.sh
create mode 100755 tools/testing/sharepool/testcase/test_mult_process/test_proc_interface.sh
diff --git a/tools/testing/sharepool/Makefile b/tools/testing/sharepool/Makefile
new file mode 100644
index 000000000000..aeb3181d1e69
--- /dev/null
+++ b/tools/testing/sharepool/Makefile
@@ -0,0 +1,36 @@
+ARCH?=arm64
+
+KERNEL_DIR?=
+
+TOOL_BIN_DIR?=$(shell realpath ./install)
+export ARCH KERNEL_DIR TOOL_BIN_DIR
+export KBUILD_MODPOST_WARN=1
+
+SHARELIB_DIR:=$(shell realpath libs)
+DEV_INC:=$(shell realpath module)
+MODULEDIR=module libs testcase
+
+sharepool_extra_ccflags:=-I$(SHARELIB_DIR) \
+ -I$(DEV_INC) \
+ -Wno-pointer-to-int-cast \
+ -Wno-int-to-pointer-cast \
+ -Wno-int-conversion
+
+sharepool_lib_ccflags:=-L$(SHARELIB_DIR) \
+ -lsharepool_lib
+
+export sharepool_extra_ccflags
+export sharepool_lib_ccflags
+
+.PHONY: all tooldir install clean
+
+all:tooldir
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n; done
+install:
+ mkdir -p $(TOOL_BIN_DIR)
+ cp test.sh test_loop.sh test_prepare.sh test_end.sh $(TOOL_BIN_DIR)
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
+ rm -rf install
diff --git a/tools/testing/sharepool/libs/Makefile b/tools/testing/sharepool/libs/Makefile
new file mode 100644
index 000000000000..510fd0b4d919
--- /dev/null
+++ b/tools/testing/sharepool/libs/Makefile
@@ -0,0 +1,8 @@
+libsharepool_lib.so: sharepool_lib.c sharepool_lib.h sem_use.c sem_use.h test_lib.h
+ $(CROSS_COMPILE)gcc sharepool_lib.c sem_use.c -shared -fPIC -o libsharepool_lib.so -I../module
+
+install: libsharepool_lib.so
+ cp $^ $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf libsharepool_lib.so
diff --git a/tools/testing/sharepool/libs/default_args_main.c b/tools/testing/sharepool/libs/default_args_main.c
new file mode 100644
index 000000000000..1395ed571295
--- /dev/null
+++ b/tools/testing/sharepool/libs/default_args_main.c
@@ -0,0 +1,87 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri May 14 02:15:29 2021
+ */
+
+/*
+ * Provide a default main() implementation. Users must define two variables:
+ * - dev_fd: fd of the device file
+ * - testcases: array of test cases
+ */
+
+#define STRLENGTH 500
+
+static int run_testcase(struct testcase_s *tc)
+{
+ int ret = 0;
+
+ printf(">>>> start testcase: %s", tc->name);
+ if (!tc->expect_ret)
+ printf(", expecting error info\n");
+ else
+ printf("\n");
+
+ if (tc->run_as_child) {
+ pid_t pid;
+
+ FORK_CHILD_ARGS(pid, tc->func());
+
+ if (!tc->exit_signal_check)
+ WAIT_CHILD_STATUS(pid, out);
+ else
+ WAIT_CHILD_SIGNAL(pid, tc->signal, out);
+ } else
+ ret = tc->func();
+
+out:
+ printf("<<<< end testcase: %s, result: %s\n", tc->name, ret != 0 ? "failed" : "passed");
+
+ return ret;
+}
+
+int main(int argc, char *argv[])
+{
+ int num = -1;
+ int passed = 0, failed = 0;
+ int ret;
+
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ ret = parse_opt(argc, argv);
+ if (ret) {
+ pr_info("parse opt failed!");
+ return -1;
+ } else
+ pr_info("parse opt finished.");
+
+ if (num >= 0 && num < ARRAY_SIZE(testcases)) {
+ if (run_testcase(testcases + num))
+ failed++;
+ else
+ passed++;
+ get_filename();
+ printf("-------------------------");
+		printf("%s testcase%d finished, %s", test_group.name, num + 1, passed ? "passed" : "failed");
+ printf("-------------------------\n");
+ } else {
+ for (num = 0; num < ARRAY_SIZE(testcases); num++) {
+ if (testcases[num].manual)
+ continue;
+ if (run_testcase(testcases + num))
+ failed++;
+ else
+ passed++;
+ }
+ get_filename();
+ printf("-------------------------");
+ printf("%s All %d testcases finished, passing: %d, failing: %d", test_group.name, passed + failed, passed, failed);
+ printf("-------------------------\n");
+ }
+
+
+ close_device(dev_fd);
+ return failed ? -1 : 0;
+}
diff --git a/tools/testing/sharepool/libs/default_main.c b/tools/testing/sharepool/libs/default_main.c
new file mode 100644
index 000000000000..26f42ac39db9
--- /dev/null
+++ b/tools/testing/sharepool/libs/default_main.c
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri May 14 02:15:29 2021
+ */
+
+/*
+ * Provide a default main() implementation. Users must define two variables:
+ * - dev_fd: fd of the device file
+ * - testcases: array of test cases
+ */
+#include <stdlib.h>
+#define STRLENGTH 500
+
+static int run_testcase(struct testcase_s *tc)
+{
+ int ret = 0;
+
+ printf("\n======================================================\n");
+ printf(">>>> START TESTCASE: %s\n", tc->name);
+	printf("Test point: %s\n", tc->comment);
+
+ if (tc->run_as_child) {
+ pid_t pid;
+
+ FORK_CHILD_ARGS(pid, tc->func());
+
+ if (!tc->exit_signal_check)
+ WAIT_CHILD_STATUS(pid, out);
+ else
+ WAIT_CHILD_SIGNAL(pid, tc->signal, out);
+ } else
+ ret = tc->func();
+
+out:
+ printf("<<<< END TESTCASE: %s, RESULT: %s\n", tc->name, ret != 0 ? "FAILED" : "PASSED");
+ printf("======================================================\n");
+ return ret;
+}
+
+int main(int argc, char *argv[])
+{
+ int num = -1;
+ int passed = 0, failed = 0;
+
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ if (argc > 1)
+ num = atoi(argv[1]) - 1;
+
+#ifdef pre_hook
+ pre_hook();
+#endif
+ if (num >= 0 && num < ARRAY_SIZE(testcases)) {
+ if (run_testcase(testcases + num))
+ failed++;
+ else
+ passed++;
+ get_filename();
+ printf("-------------------------");
+		printf("%s TESTCASE%d FINISHED, %s", test_group.name, num + 1, passed ? "passed" : "failed");
+ printf("-------------------------\n\n");
+ } else {
+ for (num = 0; num < ARRAY_SIZE(testcases); num++) {
+ if (testcases[num].manual)
+ continue;
+ if (run_testcase(testcases + num))
+ failed++;
+ else
+ passed++;
+ }
+ get_filename();
+ printf("-------------------------");
+ printf("%s ALL %d TESTCASES FINISHED, passing: %d, failing: %d", test_group.name, passed + failed, passed, failed);
+ printf("-------------------------\n\n");
+ }
+#ifdef post_hook
+ post_hook();
+#endif
+
+ close_device(dev_fd);
+ return failed ? -1 : 0;
+}
diff --git a/tools/testing/sharepool/libs/sem_use.c b/tools/testing/sharepool/libs/sem_use.c
new file mode 100644
index 000000000000..da4dc98d8849
--- /dev/null
+++ b/tools/testing/sharepool/libs/sem_use.c
@@ -0,0 +1,116 @@
+#include "sem_use.h"
+#include <sys/sem.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <string.h>
+
+int sem_set_value(int semid, short val)
+{
+ int ret;
+ if (val) {
+ if (val < 0) {
+			printf("sem cannot be set to a negative value.\n");
+ return -1;
+ }
+ union semun {
+ int val; /* Value for SETVAL */
+ struct semid_ds *buf; /* Buffer for IPC_STAT, IPC_SET */
+ unsigned short *array; /* Array for GETALL, SETALL */
+ struct seminfo *__buf; /* Buffer for IPC_INFO
+ (Linux-specific) */
+ };
+ union semun su;
+ su.val = val;
+ ret = semctl(semid, 0, SETVAL, su);
+ } else {
+ ret = semctl(semid, 0, SETVAL, 0);
+ }
+
+ return ret;
+}
+
+int sem_get_value(int semid)
+{
+ int ret;
+ ret = semctl(semid, 0, GETVAL);
+ return ret;
+}
+
+int sem_dec_by_one(int semid)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_inc_by_one(int semid)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = 1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_dec_by_val(int semid, short val)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -val,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_inc_by_val(int semid, short val)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = val,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_check_zero(int semid)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = 0,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_create(key_t semkey, char *name)
+{
+	/* a valid semid is never negative; 0666 lets cooperating test processes use it */
+	int semid = semget(semkey, 1, IPC_CREAT | 0666);
+ if (semid < 0) {
+ printf("open semaphore %s failed, errno: %s\n", name, strerror(errno));
+ return -1;
+ }
+ sem_set_value(semid, 0);
+
+ return semid;
+}
+
+int sem_close(int semid)
+{
+ if (semctl(semid, 0, IPC_RMID) != 0) {
+ printf("sem remove fail, errno: %s\n", strerror(errno));
+ return -1;
+ }
+ return 0;
+}
diff --git a/tools/testing/sharepool/libs/sem_use.h b/tools/testing/sharepool/libs/sem_use.h
new file mode 100644
index 000000000000..7109f4597586
--- /dev/null
+++ b/tools/testing/sharepool/libs/sem_use.h
@@ -0,0 +1,13 @@
+
+#include <sys/sem.h>
+
+int sem_set_value(int semid, short val);
+int sem_get_value(int semid);
+int sem_dec_by_one(int semid);
+int sem_inc_by_one(int semid);
+int sem_dec_by_val(int semid, short val);
+int sem_inc_by_val(int semid, short val);
+int sem_check_zero(int semid);
+int sem_create(key_t semkey, char *name);
+
+
diff --git a/tools/testing/sharepool/libs/sharepool_lib.c b/tools/testing/sharepool/libs/sharepool_lib.c
new file mode 100644
index 000000000000..4a0b1e0c01e5
--- /dev/null
+++ b/tools/testing/sharepool/libs/sharepool_lib.c
@@ -0,0 +1,218 @@
+/*
+ * compile: gcc sharepool_lib.c -shared -fPIC -o sharepool_lib.so
+ */
+
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/types.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <errno.h>
+
+#include "sharepool_lib.h"
+
+int open_device()
+{
+ int fd;
+
+ fd = open(DEVICE_FILE_NAME, O_RDWR);
+ if (fd < 0) {
+ printf("open device %s failed\n", DEVICE_FILE_NAME);
+ }
+
+ return fd;
+}
+
+void close_device(int fd)
+{
+ if (fd > 0) {
+ close(fd);
+ }
+}
+
+int ioctl_add_group(int fd, struct sp_add_group_info *info)
+{
+ int ret;
+
+	/* Reject invalid flags: the original testcases define this info on the
+	 * stack and may not initialize the newer flag members */
+ if (info->flag & ~SPG_FLAG_NON_DVPP)
+ info->flag = 0;
+
+ ret = ioctl(fd, SP_IOCTL_ADD_GROUP, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_ADD_GROUP failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_alloc(int fd, struct sp_alloc_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_ALLOC, info);
+ if (ret < 0) {
+ printf("ioctl SP_IOCTL_ALLOC failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_free(int fd, struct sp_alloc_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_FREE, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_FREE failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_u2k(int fd, struct sp_make_share_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_U2K, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_U2K failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_k2u(int fd, struct sp_make_share_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_K2U, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_K2U failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_unshare(int fd, struct sp_make_share_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_UNSHARE, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_UNSHARE failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_find_group_by_pid(int fd, struct sp_group_id_by_pid_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_FIND_GROUP_BY_PID, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_FIND_GROUP_BY_PID failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_judge_addr(int fd, unsigned long addr)
+{
+ /* return true or false */
+ return ioctl(fd, SP_IOCTL_JUDGE_ADDR, &addr);
+}
+
+int ioctl_vmalloc(int fd, struct vmalloc_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_VMALLOC, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_VMALLOC failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_vmalloc_hugepage(int fd, struct vmalloc_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_VMALLOC_HUGEPAGE, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_VMALLOC failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_vfree(int fd, struct vmalloc_info *info)
+{
+	/* the underlying vfree has no meaningful return value */
+ return ioctl(fd, SP_IOCTL_VFREE, info);
+}
+
+int ioctl_karea_access(int fd, struct karea_access_info *info)
+{
+ return ioctl(fd, SP_IOCTL_KACCESS, info);
+}
+
+int ioctl_walk_page_range(int fd, struct sp_walk_page_range_info *info)
+{
+ return ioctl(fd, SP_IOCTL_WALK_PAGE_RANGE, info);
+}
+
+int ioctl_walk_page_free(int fd, struct sp_walk_page_range_info *info)
+{
+ return ioctl(fd, SP_IOCTL_WALK_PAGE_FREE, info);
+}
+
+int ioctl_config_dvpp_range(int fd, struct sp_config_dvpp_range_info *info)
+{
+ // return ioctl(fd, SP_IOCTL_CONFIG_DVPP_RANGE, info);
+ return 0;
+}
+
+int ioctl_register_notifier_block(int fd, struct sp_notifier_block_info *info)
+{
+ return ioctl(fd, SP_IOCTL_REGISTER_NOTIFIER_BLOCK, info);
+}
+
+int ioctl_unregister_notifier_block(int fd, struct sp_notifier_block_info *info)
+{
+ return ioctl(fd, SP_IOCTL_UNREGISTER_NOTIFIER_BLOCK, info);
+}
+
+int ioctl_del_from_group(int fd, struct sp_del_from_group_info *info)
+{
+ return ioctl(fd, SP_IOCTL_DEL_FROM_GROUP, info);
+}
+
+int ioctl_id_of_current(int fd, struct sp_id_of_curr_info *info)
+{
+ return ioctl(fd, SP_IOCTL_ID_OF_CURRENT, info);
+}
+
+/*test for sp_walk_data == NULL*/
+int ioctl_walk_page_range_null(int fd, struct sp_walk_page_range_info *info)
+{
+ return ioctl(fd, SP_IOCTL_WALK_PAGE_RANGE_NULL, info);
+}
+
+int ioctl_hpage_reg_test_suite(int fd, void *info)
+{
+ return ioctl(fd, SP_IOCTL_HPAGE_REG_TESTSUITE, info);
+}
+
+int ioctl_hpage_reg_after_alloc(int fd, void *info)
+{
+ return ioctl(fd, SP_IOCTL_HPAGE_REG_AFTER_ALLOC, info);
+}
+
+int ioctl_hpage_reg_test_exec(int fd, void *info)
+{
+ return ioctl(fd, SP_IOCTL_HPAGE_REG_EXEC, info);
+}
\ No newline at end of file
diff --git a/tools/testing/sharepool/libs/sharepool_lib.h b/tools/testing/sharepool/libs/sharepool_lib.h
new file mode 100644
index 000000000000..0754353d4962
--- /dev/null
+++ b/tools/testing/sharepool/libs/sharepool_lib.h
@@ -0,0 +1,534 @@
+#include <sys/mman.h> // for PROT_READ and PROT_WRITE
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <sys/resource.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "sharepool_dev.h"
+#include "test_lib.h"
+
+#define SP_HUGEPAGE (1 << 0)
+#define SP_HUGEPAGE_ONLY (1 << 1)
+#define SP_DVPP (1 << 2)
+#define SP_PROT_RO (1 << 16)
+#define SP_PROT_FOCUS (1 << 17)
+#define SP_SPEC_NODE_ID (1 << 3)
+
+#define NODES_SHIFT 10UL
+#define DEVICE_ID_BITS 4UL
+#define DEVICE_ID_MASK ((1UL << DEVICE_ID_BITS) - 1UL)
+#define DEVICE_ID_SHIFT 32UL
+#define NODE_ID_BITS NODES_SHIFT
+#define NODE_ID_MASK ((1UL << NODE_ID_BITS) - 1UL)
+#define NODE_ID_SHIFT (DEVICE_ID_SHIFT + DEVICE_ID_BITS)
+
+#define SPG_ID_DEFAULT 0
+#define SPG_ID_MIN 1
+#define SPG_ID_MAX 99999
+#define SPG_ID_AUTO_MIN 100000
+#define SPG_ID_AUTO_MAX 199999
+#define SPG_ID_AUTO 200000
+
+#define SPG_ID_NONE (-1)
+#define SPG_FLAG_NON_DVPP 1UL
+
+#define DVPP_16G 0x400000000UL
+#define DVPP_BASE 0x100000000000ULL
+#define DVPP_END (DVPP_BASE + DVPP_16G * 64)
+
+#define MMAP_SHARE_POOL_NORMAL_START 0xe80000000000UL
+#define MMAP_SHARE_POOL_RO_SIZE 0x1000000000UL
+#define MMAP_SHARE_POOL_DVPP_START 0xf00000000000UL
+#define MMAP_SHARE_POOL_RO_START (MMAP_SHARE_POOL_DVPP_START - MMAP_SHARE_POOL_RO_SIZE)
+
+int open_device();
+void close_device(int fd);
+
+int ioctl_add_group(int fd, struct sp_add_group_info *info);
+int ioctl_alloc(int fd, struct sp_alloc_info *info);
+int ioctl_free(int fd, struct sp_alloc_info *info);
+int ioctl_u2k(int fd, struct sp_make_share_info *info);
+int ioctl_k2u(int fd, struct sp_make_share_info *info);
+int ioctl_unshare(int fd, struct sp_make_share_info *info);
+int ioctl_find_group_by_pid(int fd, struct sp_group_id_by_pid_info *info);
+int ioctl_judge_addr(int fd, unsigned long addr);
+int ioctl_vmalloc(int fd, struct vmalloc_info *info);
+int ioctl_vmalloc_hugepage(int fd, struct vmalloc_info *info);
+int ioctl_vfree(int fd, struct vmalloc_info *info);
+int ioctl_karea_access(int fd, struct karea_access_info *info);
+int ioctl_walk_page_range(int fd, struct sp_walk_page_range_info *info);
+int ioctl_walk_page_free(int fd, struct sp_walk_page_range_info *info);
+int ioctl_config_dvpp_range(int fd, struct sp_config_dvpp_range_info *info);
+int ioctl_register_notifier_block(int fd, struct sp_notifier_block_info *info);
+int ioctl_unregister_notifier_block(int fd, struct sp_notifier_block_info *info);
+int ioctl_del_from_group(int fd, struct sp_del_from_group_info *info);
+/*for error handling path test*/
+int ioctl_walk_page_range_null(int fd, struct sp_walk_page_range_info *info);
+
+static inline int ioctl_find_first_group(int fd, int pid)
+{
+ int spg_id, num = 1, ret;
+ struct sp_group_id_by_pid_info info = {
+ .pid = pid,
+ .spg_ids = &spg_id,
+ .num = &num,
+ };
+ ret = ioctl_find_group_by_pid(fd, &info);
+
+ return ret < 0 ? ret : spg_id;
+}
+
+#define KAREA_ACCESS_CHECK(val, address, sz, out)\
+ do { \
+ struct karea_access_info __karea_info = { \
+ .mod = KAREA_CHECK, \
+ .value = val, \
+ .addr = address, \
+ .size = sz, \
+ }; \
+ ret = ioctl_karea_access(dev_fd, &__karea_info);\
+ if (ret < 0) { \
+			pr_info("karea check failed, errno: %d", errno);\
+ goto out; \
+ } \
+ } while (0)
+
+#define KAREA_ACCESS_SET(val, address, sz, out) \
+ do { \
+ struct karea_access_info __karea_info = { \
+ .mod = KAREA_SET, \
+ .value = val, \
+ .addr = address, \
+ .size = sz, \
+ }; \
+ ret = ioctl_karea_access(dev_fd, &__karea_info);\
+			pr_info("karea set failed, errno: %d", errno);\
+ pr_info("karea set failed, errno%d", errno);\
+ goto out; \
+ } \
+ } while (0)
+
+static int dev_fd;
+
+static inline unsigned long wrap_vmalloc(unsigned long size, bool ishuge)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = size,
+ };
+ if (ishuge) {
+ ret = ioctl_vmalloc_hugepage(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc_hugepage failed, errno: %d", errno);
+ return 0;
+ }
+ } else {
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return 0;
+ }
+ }
+ return ka_info.addr;
+}
+
+static inline void wrap_vfree(unsigned long addr)
+{
+ struct vmalloc_info ka_info = {
+ .addr = addr,
+ };
+ ioctl_vfree(dev_fd, &ka_info);
+}
+
+static inline int wrap_add_group_flag(int pid, int prot, int spg_id, unsigned long flag)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = prot,
+ .spg_id = spg_id,
+ .flag = flag,
+ };
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ ret = ag_info.spg_id;
+out:
+ return ret;
+}
+
+static inline int wrap_add_group(int pid, int prot, int spg_id)
+{
+ return wrap_add_group_flag(pid, prot, spg_id, 0);
+}
+
+static inline int wrap_add_group_non_dvpp(int pid, int prot, int spg_id)
+{
+ return wrap_add_group_flag(pid, prot, spg_id, SPG_FLAG_NON_DVPP);
+}
+
+static inline unsigned long wrap_k2u(unsigned long kva, unsigned long size, int spg_id, unsigned long sp_flags)
+{
+ int ret;
+ unsigned long uva = 0;
+ struct sp_make_share_info k2u_infos = {
+ .kva = kva,
+ .size = size,
+ .spg_id = spg_id,
+ .sp_flags = sp_flags,
+ };
+ TEST_CHECK(ioctl_k2u(dev_fd, &k2u_infos), out);
+ uva = k2u_infos.addr;
+
+out:
+ return uva;
+}
+
+static inline unsigned long wrap_u2k(unsigned long uva, unsigned long size)
+{
+ int ret;
+
+ struct sp_make_share_info u2k_info = {
+ .uva = uva,
+ .size = size,
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return 0;
+ }
+
+ return u2k_info.addr;
+}
+
+static inline int wrap_walk_page_range(unsigned long uva, unsigned long size)
+{
+ int ret = 0;
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = uva,
+ .size = size,
+ };
+ TEST_CHECK(ioctl_walk_page_range(dev_fd, &wpr_info), out);
+out:
+ return ret;
+}
+
+static inline int wrap_unshare(unsigned long addr, unsigned long size)
+{
+ int ret;
+ struct sp_make_share_info info = {
+ .addr = addr,
+ .size = size,
+ };
+
+ TEST_CHECK(ioctl_unshare(dev_fd, &info), out);
+out:
+ return ret;
+}
+
+/*
+ * return:
+ * (void *)-1: alloc fails, see errno for reason.
+ * any other value: success, the allocated address.
+ */
+static inline void *wrap_sp_alloc(int spg_id, unsigned long size, unsigned long flag)
+{
+ int ret;
+ unsigned long addr = 0;
+ struct sp_alloc_info info = {
+ .spg_id = spg_id,
+ .flag = flag,
+ .size = size,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &info), out);
+ addr = info.addr;
+
+out:
+	/* On failure return (void *)-1: we cannot tell an errno from a
+	 * normal addr, and a valid va will never be -1.
+	 */
+	return ret < 0 ? (void *)-1 : (void *)addr;
+}
+
+static inline int wrap_sp_free(void *addr)
+{
+ int ret;
+ struct sp_alloc_info info = {
+ .addr = (unsigned long)addr,
+ .spg_id = SPG_ID_DEFAULT,
+ };
+ TEST_CHECK(ioctl_free(dev_fd, &info), out);
+
+out:
+ return ret;
+}
+
+static inline int wrap_sp_free_by_id(void *addr, int spg_id)
+{
+ int ret = 0;
+ struct sp_alloc_info info = {
+ .addr = (unsigned long)addr,
+ .spg_id = spg_id,
+ };
+ TEST_CHECK(ioctl_free(dev_fd, &info), out);
+
+out:
+ return ret;
+}
+
+static inline int wrap_sp_group_id_by_pid(pid_t pid, int spg_id[], int *num)
+{
+ int ret;
+ struct sp_group_id_by_pid_info info = {
+ .pid = pid,
+ .spg_ids = spg_id,
+ .num = num,
+ };
+ TEST_CHECK(ioctl_find_group_by_pid(dev_fd, &info), out);
+
+out:
+ return ret;
+}
+
+static inline int wrap_del_from_group(pid_t pid, int spg_id)
+{
+ int ret;
+ struct sp_del_from_group_info info = {
+ .pid = pid,
+ .spg_id = spg_id,
+ };
+ TEST_CHECK(ioctl_del_from_group(dev_fd, &info), out);
+
+out:
+ return ret;
+}
+
+static inline int wrap_sp_id_of_current()
+{
+ int ret;
+ struct sp_id_of_curr_info info;
+ ret = ioctl(dev_fd, SP_IOCTL_ID_OF_CURRENT, &info);
+ if (ret < 0)
+ return -errno;
+
+ return info.spg_id;
+}
+
+static inline int sharepool_log(char *log_title)
+{
+ printf("%s", log_title);
+
+ char *logname[2] = {"sp_proc_log", "sp_spa_log"};
+ char *procname[2] = {"/proc/sharepool/proc_stat", "/proc/sharepool/spa_stat"};
+
+ read_proc(procname[0], logname[0], SIZE, 0);
+ read_proc(procname[1], logname[1], SIZE, 0);
+
+ return 0;
+}
+
+static inline int sharepool_print()
+{
+ printf("\n%20s", " ****** ");
+ printf("sharepool_print");
+ printf("%-20s\n", " ****** ");
+
+	char *logname[3] = {
+ "1.log",
+ "2.log",
+ "3.log",
+ };
+
+	char *procname[3] = {
+ "/proc/sharepool/proc_stat",
+ "/proc/sharepool/proc_overview",
+ "/proc/sharepool/spa_stat"
+ };
+
+ read_proc(procname[0], logname[0], SIZE, 1);
+ read_proc(procname[1], logname[1], SIZE, 1);
+ read_proc(procname[2], logname[2], SIZE, 1);
+
+ return 0;
+}
+
+static inline int spa_stat()
+{
+ printf("\n%20s", " ****** ");
+ printf("cat /proc/sharepool/spa_stat");
+ printf("%-20s\n", " ****** ");
+
+ char *logname = "spa_stat_log";
+ char *procname = "/proc/sharepool/spa_stat";
+
+ read_proc(procname, logname, SIZE, 1);
+
+ return 0;
+}
+
+static inline int proc_stat()
+{
+ printf("\n%20s", " ****** ");
+ printf("cat /proc/sharepool/proc_stat");
+ printf("%-20s\n", " ****** ");
+
+ char *logname = "proc_stat_log";
+ char *procname = "/proc/sharepool/proc_stat";
+
+ read_proc(procname, logname, SIZE, 1);
+
+ return 0;
+}
+
+static inline int proc_overview()
+{
+ printf("\n%20s", " ****** ");
+ printf("cat /proc/sharepool/proc_overview");
+ printf("%-20s\n", " ****** ");
+
+ char *logname = "proc_overview_log";
+ char *procname = "/proc/sharepool/proc_overview";
+
+ read_proc(procname, logname, SIZE, 1);
+
+ return 0;
+}
+
+static inline int cat_attr(char *attr)
+{
+ printf("\n%20s", " ****** ");
+ printf("cat %s", attr);
+ printf("%-20s\n", " ****** ");
+
+ char *logname = "attr_log";
+
+ read_proc(attr, logname, SIZE, 1);
+
+ return 0;
+}
+
+static int create_multi_groups(int pid, int group_num, int *group_ids)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < group_num; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", pid, group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int add_multi_groups(int pid, int group_num, int *group_ids)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < group_num; i++) {
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", pid, group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static inline void *ioctl_alloc_huge_memory(int nid, int flags, unsigned long addr, unsigned long size)
+{
+ int ret;
+ struct alloc_huge_memory alloc_info = {
+ .nid = nid,
+ .flags = flags,
+ .addr = addr,
+ .size = size,
+ };
+
+ ret = ioctl(dev_fd, SP_IOCTL_ALLOC_HUGE_MEMORY, &alloc_info);
+ if (ret < 0)
+ return NULL;
+
+ return (void *)alloc_info.addr;
+}
+
+static inline int ioctl_check_memory_node(unsigned long uva, unsigned long len, int node)
+{
+ int ret;
+ struct check_memory_node info = {
+ .uva = uva,
+ .len = len,
+ .node = node,
+ };
+
+ ret = ioctl(dev_fd, SP_IOCTL_CHECK_MEMORY_NODE, &info);
+ if (ret < 0)
+ return -errno;
+
+ return 0;
+}
+
+static inline int ioctl_kthread_start(int fd, struct sp_kthread_info *info)
+{
+ int ret;
+
+ pr_info("we are kthread");
+ ret = ioctl(fd, SP_IOCTL_KTHREAD_START, info);
+ if (ret < 0) {
+		pr_info("ioctl kthread start failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static inline int ioctl_kthread_end(int fd, struct sp_kthread_info *info)
+{
+ int ret;
+
+ pr_info("we are kthread end");
+	ret = ioctl(fd, SP_IOCTL_KTHREAD_END, info);
+ if (ret < 0) {
+		pr_info("ioctl kthread end failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static inline int ioctl_kmalloc(int fd, struct vmalloc_info *info)
+{
+ int ret;
+
+ pr_info("we are kmalloc");
+ ret = ioctl(fd, SP_IOCTL_KMALLOC, info);
+ if (ret < 0) {
+		pr_info("ioctl kmalloc failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static inline int ioctl_kfree(int fd, struct vmalloc_info *info)
+{
+ int ret;
+
+ pr_info("we are kfree");
+ ret = ioctl(fd, SP_IOCTL_KFREE, info);
+ if (ret < 0) {
+		pr_info("ioctl kfree failed, errno: %d", errno);
+ }
+ return ret;
+}
diff --git a/tools/testing/sharepool/libs/test_lib.h b/tools/testing/sharepool/libs/test_lib.h
new file mode 100644
index 000000000000..161390c73380
--- /dev/null
+++ b/tools/testing/sharepool/libs/test_lib.h
@@ -0,0 +1,324 @@
+#include <string.h>
+#include <errno.h>
+#include <sys/mman.h> // for PROT_READ and PROT_WRITE
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+#include <stdbool.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <sys/resource.h>
+#include <stdlib.h>
+
+#define PMD_SIZE 0x200000
+#define PAGE_SIZE 4096UL
+
+#define MAX_ERRNO 4095
+#define IS_ERR_VALUE(x) ((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)
+#define COMMENT_LEN 1000 // testcase cmt
+#define SIZE 5000 // size for cat output
+
+struct testcase_s {
+ int (*func)(void);
+ const char *name;
+ bool run_as_child;
+ bool exit_signal_check;
+ bool manual;
+ const char *expect_ret;
+ int signal;
+ const char *comment;
+};
+
+#define TEST_ADD(_num, cmt, expc) \
+ struct testcase_s test##_num = { \
+ .name = "tc"#_num, \
+ .func = testcase##_num, \
+ .comment = cmt, \
+ .expect_ret = expc, \
+ }
+
+#define STRLENGTH 500
+struct test_group {
+ char name[STRLENGTH];
+ struct testcase_s *testcases;
+};
+
+char *extract_filename(char *filename, char filepath[])
+{
+	int i = 0, last_slash = -1, real_len = 0;
+
+ while (i < STRLENGTH && filepath[i] != '\0') {
+ if (filepath[i] == '/')
+ last_slash = i;
+ i++;
+ real_len++;
+	}
+
+ if (real_len >= STRLENGTH) {
+		printf("file path too long\n");
+ return NULL;
+ }
+
+ for (int j = last_slash + 1; j <= real_len; j++)
+ filename[j - last_slash - 1] = filepath[j];
+
+ return filename;
+}
+
+static inline int testcase_stub_pass(void) { return 0; }
+
+#define TESTCASE(tc, cmt) { .func = tc, .name = #tc, .manual = false, .run_as_child = false, .exit_signal_check = false, .comment = cmt},
+#define TESTCASE_CHILD(tc, cmt) { .func = tc, .name = #tc, .manual = false, .run_as_child = true, .exit_signal_check = false, .comment = cmt},
+#define TESTCASE_CHILD_MANUAL(tc, cmt) { .func = tc, .name = #tc, .manual = true, .run_as_child = true, .exit_signal_check = false, .comment = cmt},
+#define TESTCASE_CHILD_SIGNAL(tc, sig, cmt) { .func = tc, .name = #tc, .manual = false, .run_as_child = true, .exit_signal_check = true, .signal = sig, .comment = cmt},
+#define TESTCASE_STUB(tc, cmt) { .func = testcase_stub_pass, .name = #tc, .manual = false, .run_as_child = false, .exit_signal_check = false, .comment = cmt},
+
+
+#define ARRAY_SIZE(array) (sizeof(array) / sizeof(array[0]))
+
+#define pr_info(fmt, args...) \
+ printf("[file:%s, func:%s, line:%d] " fmt "\n", __FILE__, __func__, __LINE__, ##args)
+
+#define SEM_INIT(sync, idx) \
+ do { \
+ char sem_name[256]; \
+ snprintf(sem_name, 256, "/%s%d_%d", __FILE__, __LINE__, idx); \
+		sync = sem_open(sem_name, O_CREAT, 0600, 0); \
+ if (sync == SEM_FAILED) { \
+ pr_info("sem_open failed"); \
+ return -1; \
+ } \
+ sem_unlink(sem_name); \
+ } while (0)
+
+#define SEM_WAIT(sync) \
+ do { \
+ do { \
+ ret = sem_wait(sync); \
+ } while (ret && errno == EINTR); \
+ } while (0)
+
+#define FORK_CHILD_ARGS(pid, child) \
+ do { \
+ pid = fork(); \
+ if (pid < 0) { \
+ pr_info("fork failed"); \
+ return -1; \
+ } else if (pid == 0) \
+ exit(child); \
+ } while (0)
+
+static inline int deadloop_child(void)
+{
+ while (1);
+ return -1;
+}
+
+static inline int sleep_child(void)
+{
+ pause();
+ return -1;
+}
+
+#define FORK_CHILD_DEADLOOP(pid) FORK_CHILD_ARGS(pid, deadloop_child())
+#define FORK_CHILD_SLEEP(pid) FORK_CHILD_ARGS(pid, sleep_child())
+
+#define TEST_CHECK(fn, out) \
+ do { \
+ ret = fn; \
+ if (ret < 0) { \
+ pr_info(#fn " failed, errno: %d", errno); \
+ goto out; \
+ } \
+ } while (0)
+
+#define TEST_CHECK_FAIL(fn, err, out) \
+ do { \
+ ret = fn; \
+ if (!(ret < 0 && errno == err)) { \
+ pr_info(#fn " failed, ret: %d, errno: %d", ret, errno); \
+ ret = -1; \
+ goto out; \
+ } else \
+ ret = 0; \
+ } while (0)
+
+#define KILL_CHILD(pid) \
+ do { \
+ kill(pid, SIGKILL); \
+ waitpid(pid, NULL, 0); \
+ } while (0)
+
+#define CHECK_CHILD_STATUS(pid) \
+ do { \
+ int __status; \
+ waitpid(pid, &__status, 0);\
+ if (!(WIFEXITED(__status) && !WEXITSTATUS(__status))) { \
+ if (WIFSIGNALED(__status)) { \
+ pr_info("child pid %d, killed by signal %d", pid, WTERMSIG(__status)); \
+ } else { \
+ pr_info("child, pid: %d, exit unexpected, status: %d", pid, __status);\
+ } \
+ ret = -1; \
+ } else { \
+ pr_info("child pid %d exit normal.", pid); \
+ } \
+ } while (0)
+
+#define WAIT_CHILD_STATUS(pid, out) \
+ do { \
+ int __status; \
+ waitpid(pid, &__status, 0);\
+ if (!(WIFEXITED(__status) && !WEXITSTATUS(__status))) { \
+ if (WIFSIGNALED(__status)) { \
+ pr_info("child pid %d, killed by signal %d", pid, WTERMSIG(__status)); \
+ } else { \
+ pr_info("child pid: %d, exit unexpected, status: %d", pid, WEXITSTATUS(__status));\
+ } \
+ ret = -1; \
+ goto out; \
+ } \
+ } while (0)
+
+#define WAIT_CHILD_SIGNAL(pid, sig, out)\
+ do { \
+ int __status; \
+ waitpid(pid, &__status, 0); \
+ if (!WIFSIGNALED(__status)) { \
+ pr_info("child, pid: %d, exit unexpected, status: %d", pid, __status);\
+ ret = -1; \
+ goto out; \
+ } else if (WTERMSIG(__status) != sig) { \
+ pr_info("child, pid: %d, killed by unexpected sig:%d, expected:%d ", pid, WTERMSIG(__status), sig);\
+ ret = -1; \
+ goto out; \
+ } \
+ } while (0)
+
+
+static inline int setCore(void)
+{
+ struct rlimit core_lim;
+ if (getrlimit(RLIMIT_CORE, &core_lim)) {
+ printf("getrlimit failed, err: %s\n", strerror(errno));
+ return -1;
+ } else
+ printf("current rlimit for RLIMIT_CORE is: %lx, %lx\n", core_lim.rlim_cur, core_lim.rlim_max);
+ core_lim.rlim_cur = RLIM_INFINITY;
+ if (setrlimit(RLIMIT_CORE, &core_lim)) {
+ printf("setrlimit failed, err: %s\n", strerror(errno));
+ return -1;
+ } else
+ printf("setrlimit for RLIMIT_CORE to unlimited\n");
+ return 0;
+}
+
+static inline int generateCoredump(void)
+{
+ char *a = NULL;
+ *a = 5; /* SIGSEGV */
+ return 0;
+}
+
+static inline void read_proc(char procname[], char logname[], int size, int print_or_log)
+{
+ FILE *proc, *log;
+ char str[SIZE];
+
+ log = fopen(logname, "a");
+ if (!log) {
+ printf("open %s failed.\n", logname);
+ return;
+ }
+
+ proc = fopen(procname, "r");
+	if (!proc) {
+		printf("open %s failed.\n", procname);
+		fclose(log);
+		return;
+	}
+
+ // read information into a string
+ if (print_or_log)
+ printf("\n ----- %s -----\n", procname);
+
+ while (fgets(str, size, proc) != NULL) {
+ if (print_or_log)
+ printf("%s", str);
+ fputs(str, log);
+ }
+
+ fclose(proc);
+ fclose(log);
+
+ return;
+}
+
+static inline void read_attr(char procname[], char result[], int size)
+{
+ FILE *proc;
+ char str[SIZE];
+
+ proc = fopen(procname, "r");
+ if (!proc) {
+ printf("open %s failed.\n", procname);
+ return;
+ }
+
+ while (fgets(str, size, proc) != NULL)
+ strcpy(result, str);
+
+ fclose(proc);
+ return;
+}
+
+static int get_attr(char *attr, char **result, int row_size, int col_size,
+ int *row_real)
+{
+ // get attr, put result into result array
+ FILE *proc;
+ char str[SIZE];
+ int row = 0;
+
+ *row_real = 0;
+
+ proc = fopen(attr, "r");
+ if (!proc) {
+ printf("open %s failed.\n", attr);
+ return -1;
+ }
+
+ while (fgets(str, SIZE, proc) != NULL) {
+		if (strlen(str) > (size_t)col_size) {
+			printf("get_attr %s failed, column size %d < strlen %zu\n",
+			       attr, col_size, strlen(str));
+ fclose(proc);
+ return -1;
+ }
+
+ if (row >= row_size) {
+ printf("get_attr %s failed, row limit %d too small\n",
+ attr, row_size);
+ fclose(proc);
+ return -1;
+ }
+
+ strcat(result[row++], str);
+ (*row_real)++;
+ }
+
+ fclose(proc);
+ return 0;
+}
+
+static inline int mem_show(void)
+{
+ char *meminfo = "/proc/meminfo";
+ char *logname = "meminfo_log";
+
+ read_proc(meminfo, logname, SIZE, 1);
+
+ return 0;
+}
+
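extract_filename() above walks the path by hand to locate the component after the last '/'. For reference, the standard library gets the same result with strrchr (basename_of is an illustrative name, not used by the harness):

```c
#include <string.h>

/* Return a pointer to everything after the last '/', or the whole
 * string when the path contains no '/'. */
static const char *basename_of(const char *path)
{
	const char *slash = strrchr(path, '/');
	return slash ? slash + 1 : path;
}
```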
diff --git a/tools/testing/sharepool/module/Makefile b/tools/testing/sharepool/module/Makefile
new file mode 100644
index 000000000000..aaf49e10112d
--- /dev/null
+++ b/tools/testing/sharepool/module/Makefile
@@ -0,0 +1,14 @@
+ifneq ($(KERNELRELEASE),)
+obj-m += sharepool_dev.o
+obj-m += check_sharepool_fault.o
+obj-m += check_sharepool_alloc.o
+else
+all:
+ make -C $(KERNEL_DIR) M=$$PWD modules
+
+clean:
+ make -C $(KERNEL_DIR) M=$$PWD clean
+
+install:
+ cp *.ko $(TOOL_BIN_DIR)
+endif
diff --git a/tools/testing/sharepool/module/check_sharepool_alloc.c b/tools/testing/sharepool/module/check_sharepool_alloc.c
new file mode 100644
index 000000000000..e6e9f7e806a2
--- /dev/null
+++ b/tools/testing/sharepool/module/check_sharepool_alloc.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2019.All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 25 07:38:03 2021
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/kprobes.h>
+#include <linux/version.h>
+#include <linux/namei.h>
+#include <linux/stacktrace.h>
+#include <linux/delay.h>
+
+static void check(unsigned long addr)
+{
+	unsigned long long pfn;
+	struct mm_struct *mm;
+	struct vm_area_struct *vma;
+	unsigned long long val = 0;	/* stays 0 if the page walk bails out early */
+ unsigned long long vf;
+ pgd_t *pgdp;
+ pgd_t pgd;
+
+ p4d_t *p4dp, p4d;
+ pud_t *pudp, pud;
+ pmd_t *pmdp, pmd;
+ pte_t *ptep, pte;
+
+ mm = current->active_mm;
+	vma = find_vma(mm, addr);
+	if (!vma || vma->vm_start > addr) {
+		WARN(1, "no vma found for addr 0x%lx\n", addr);
+		return;
+	}
+
+	vf = VM_NORESERVE | VM_SHARE_POOL | VM_DONTCOPY | VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+
+	WARN((vma->vm_flags & vf) != vf, "vma->flags of sharepool is not expected");
+ pgdp = pgd_offset(mm, addr);
+ pgd = READ_ONCE(*pgdp);
+
+ do {
+ if (pgd_none(pgd) || pgd_bad(pgd))
+ break;
+
+ p4dp = p4d_offset(pgdp, addr);
+ p4d = READ_ONCE(*p4dp);
+ if (p4d_none(p4d) || p4d_bad(p4d))
+ break;
+
+ pudp = pud_offset(p4dp, addr);
+ pud = READ_ONCE(*pudp);
+ if (pud_none(pud) || pud_bad(pud))
+ break;
+
+ pmdp = pmd_offset(pudp, addr);
+ pmd = READ_ONCE(*pmdp);
+ val = pmd_val(pmd);
+ pfn = pmd_pfn(pmd);
+ if (pmd_none(pmd) || pmd_bad(pmd))
+ break;
+
+ ptep = pte_offset_map(pmdp, addr);
+ pte = READ_ONCE(*ptep);
+ val = pte_val(pte);
+ pfn = pte_pfn(pte);
+ pte_unmap(ptep);
+	} while (0);
+
+ if (vma->vm_flags & VM_MAYWRITE) {
+ if (val & PTE_RDONLY)
+ WARN(1, "Pte(0x%llx) has PTE_RDONLY(0x%llx)\n",
+ val, PTE_RDONLY);
+ if (!(val & PTE_DIRTY))
+ WARN(1, "Pte(0x%llx) has no PTE_DIRTY(0x%llx)\n",
+ val, PTE_DIRTY);
+ }
+
+ if (!(val & PTE_AF))
+		WARN(1, "Pte(0x%llx) has no PTE_AF(0x%llx)\n",
+ val, PTE_AF);
+/* For static hugepages, folio->lru links into hstate->activelist:
+ struct folio *folio;
+ folio = pfn_folio(pfn);
+ if (list_empty(&folio->lru)) {
+ WARN(1, "folio->lru is not empty\n");
+ }
+
+ if (folio_test_ksm(folio) || folio_test_anon(folio) || folio->mapping) {
+ WARN(1, "folio has rmap\n");
+ }
+*/
+}
+
+static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
+{
+ unsigned long addr = regs->regs[0];
+ if (!IS_ERR_OR_NULL((void *)addr))
+ check(addr);
+
+ return 0;
+}
+
+static struct kretprobe krp = {
+	.handler = ret_handler,
+ // .entry_handler = entry_handler,
+};
+
+static int kretprobe_init(void)
+{
+ int ret;
+	krp.kp.symbol_name = "__mg_sp_alloc_nodemask";
+	ret = register_kretprobe(&krp);
+	if (ret < 0)
+		pr_err("register_kretprobe failed, returned %d\n", ret);
+	else
+		pr_info("registered kretprobe on %s\n", krp.kp.symbol_name);
+
+ return ret;
+}
+
+int mg_sp_alloc_nodemask_init(void)
+{
+ kretprobe_init();
+ return 0;
+}
+
+void mg_sp_alloc_nodemask_exit(void)
+{
+ unregister_kretprobe(&krp);
+}
+
+module_init(mg_sp_alloc_nodemask_init);
+module_exit(mg_sp_alloc_nodemask_exit);
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/sharepool/module/check_sharepool_fault.c b/tools/testing/sharepool/module/check_sharepool_fault.c
new file mode 100644
index 000000000000..b4729c326ccd
--- /dev/null
+++ b/tools/testing/sharepool/module/check_sharepool_fault.c
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2019.All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 25 07:38:03 2021
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/kprobes.h>
+#include <linux/version.h>
+#include <linux/namei.h>
+#include <linux/stacktrace.h>
+
+static int handler_pre(struct kprobe *p, struct pt_regs *regs)
+{
+ struct vm_area_struct *vma = (void *)regs->regs[0];
+ unsigned long address = regs->regs[1];
+
+ if (vma->vm_flags & VM_SHARE_POOL) {
+ WARN(1, "fault of sharepool at 0x%lx\n", address);
+ }
+
+ return 0;
+}
+
+static struct kprobe kp = {
+ .pre_handler = handler_pre,
+};
+
+static int kprobe_init(void)
+{
+ int ret;
+ kp.symbol_name = "handle_mm_fault";
+ ret = register_kprobe(&kp);
+
+	if (ret < 0)
+		pr_err("register_kprobe failed, returned %d\n", ret);
+
+ return ret;
+}
+
+static void kprobe_uninit(void)
+{
+ unregister_kprobe(&kp);
+}
+
+int sharepool_fault_debug_init(void)
+{
+ kprobe_init();
+
+ return 0;
+}
+
+void sharepool_fault_debug_exit(void)
+{
+ kprobe_uninit();
+}
+
+module_init(sharepool_fault_debug_init);
+module_exit(sharepool_fault_debug_exit);
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/sharepool/module/sharepool_dev.c b/tools/testing/sharepool/module/sharepool_dev.c
new file mode 100644
index 000000000000..d7eac14cd52c
--- /dev/null
+++ b/tools/testing/sharepool/module/sharepool_dev.c
@@ -0,0 +1,1130 @@
+/*
+ * sharepool_dev.c - Create an input/output character device for share pool
+ */
+#define pr_fmt(fmt) "sp_test: " fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/vmalloc.h>
+#include <linux/printk.h>
+#include <linux/ratelimit.h>
+#include <linux/notifier.h>
+#include <linux/mm.h>
+#include <linux/hugetlb.h>
+#include <linux/kthread.h>
+#include <linux/share_pool.h>
+
+#include "sharepool_dev.h"
+
+static int dev_open(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+struct task_struct *ktask;
+struct sp_kthread_info kt_info;
+int sp_kthread(void *index)
+{
+ int ret;
+ unsigned size = 4096;
+ unsigned long flag = 0;
+ int spg_id = 1;
+ void *addr;
+
+ addr = mg_sp_alloc(size, flag, spg_id);
+ if (IS_ERR(addr)) {
+ pr_info("alloc failed as expected\n");
+ }
+
+ ret = mg_sp_free(0, 1);
+ if (ret < 0) {
+ pr_info("free failed as expected\n");
+ }
+
+ ret = mg_sp_unshare(0, 0, 1);
+ if (ret < 0) {
+ pr_info("unshare failed as expected\n");
+ }
+
+ addr = mg_sp_make_share_k2u(0, 0, 0, 1, 1);
+ if (IS_ERR(addr)) {
+ pr_info("k2u failed as expected\n");
+ }
+
+ addr = mg_sp_make_share_u2k(0, 0, 1);
+ if (IS_ERR(addr)) {
+		pr_info("u2k failed as expected\n");
+ }
+
+ return 0;
+}
+
+int sp_free_kthread(void *arg)
+{
+ int ret;
+ struct sp_kthread_info *info = (struct sp_kthread_info *)arg;
+
+ pr_info("in sp_free_kthread\n");
+ current->flags &= (~PF_KTHREAD);
+ ret = mg_sp_free(info->addr, info->spg_id);
+ if (ret < 0) {
+ pr_info("kthread free failed\n");
+ return ret;
+ }
+ return 0;
+}
+
+int sp_unshare_kthread(void *arg)
+{
+ int ret;
+ struct sp_kthread_info *info = (struct sp_kthread_info *)arg;
+
+ pr_info("in sp_unshare_kthread\n");
+ current->flags &= (~PF_KTHREAD);
+ ret = mg_sp_unshare(info->addr, info->size, info->spg_id);
+ if (ret < 0) {
+ pr_info("kthread unshare failed\n");
+ return ret;
+ }
+ return 0;
+}
+
+static int dev_sp_kthread_start(unsigned long __user *arg)
+{
+ int ret = 0;
+ pr_info("dev_sp_kthread\n");
+
+ ret = copy_from_user(&kt_info, (void __user *)arg, sizeof(struct sp_kthread_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ if (kt_info.type == 0) {
+		ktask = kthread_run(sp_kthread, NULL, "kthread test");
+		if (IS_ERR(ktask)) {
+			pr_info("kthread run fail\n");
+			ktask = NULL;
+			return -ECHILD;
+		}
+ }
+
+ if (kt_info.type == 1) {
+		ktask = kthread_run(sp_free_kthread, &kt_info, "kthread free test");
+		if (IS_ERR(ktask)) {
+			pr_info("kthread run fail\n");
+			ktask = NULL;
+			return -ECHILD;
+		}
+
+ }
+
+ if (kt_info.type == 2) {
+		ktask = kthread_run(sp_unshare_kthread, &kt_info, "kthread unshare test");
+		if (IS_ERR(ktask)) {
+			pr_info("kthread run fail\n");
+			ktask = NULL;
+			return -ECHILD;
+		}
+ }
+
+ return 0;
+}
+
+static int dev_sp_kthread_end(unsigned long __user *arg)
+{
+ if (ktask) {
+ pr_info("we are going to end the kthread\n");
+ kthread_stop(ktask);
+ ktask = NULL;
+ }
+
+ return 0;
+}
+
+static long dev_sp_add_group(unsigned long __user *arg)
+{
+ struct sp_add_group_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_add_group_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_group_add_task(info.pid, info.prot, info.spg_id);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_group_add_task failed: %d, pid is %d, spg_id is %d\n",
+ __func__, ret, info.pid, info.spg_id);
+ return ret;
+ }
+
+ info.spg_id = ret;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_add_group_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+}
+
+static long dev_sp_alloc(unsigned long __user *arg)
+{
+ struct sp_alloc_info info;
+ void *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_alloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ addr = mg_sp_alloc(info.size, info.flag, info.spg_id);
+ if (IS_ERR(addr)) {
+ pr_err_ratelimited("%s sp_alloc failed: %ld\n", __func__, PTR_ERR(addr));
+ return PTR_ERR(addr);
+ }
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_alloc_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ ret = mg_sp_free(info.addr, SPG_ID_DEFAULT);
+ if (ret < 0)
+ pr_err("%s sp_free failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+}
+
+static long dev_sp_free(unsigned long __user *arg)
+{
+ struct sp_alloc_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_alloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_free(info.addr, info.spg_id);
+ if (ret < 0)
+ pr_err_ratelimited("%s sp_free failed: %d\n", __func__, ret);
+
+ return ret;
+}
+
+static long dev_sp_u2k(unsigned long __user *arg)
+{
+ struct sp_make_share_info info;
+ void *addr;
+ char *kva;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ addr = mg_sp_make_share_u2k(info.uva, info.size, info.pid);
+ if (IS_ERR(addr)) {
+ pr_err_ratelimited("%s u2k failed: %ld\n", __func__, PTR_ERR(addr));
+ return PTR_ERR(addr);
+ }
+
+ /* a small, easy checker */
+ if (info.u2k_checker) {
+ kva = (char *)addr;
+ if (kva[0] != 'd' || kva[PAGE_SIZE - 1] != 'c' ||
+ kva[PAGE_SIZE] != 'b' || kva[PAGE_SIZE * 2 - 1] != 'a') {
+ pr_err("%s u2k check normal page failed\n", __func__);
+ return -EFAULT;
+ }
+ }
+ if (info.u2k_hugepage_checker) {
+ kva = (char *)addr;
+ if (kva[0] != 'd' || kva[PMD_SIZE - 1] != 'c' ||
+ kva[PMD_SIZE] != 'b' || kva[PMD_SIZE * 2 - 1] != 'a') {
+ pr_err("%s u2k check hugepage failed\n", __func__);
+ return -EFAULT;
+ }
+ }
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ ret = mg_sp_unshare(info.addr, info.size, SPG_ID_DEFAULT);
+ if (ret < 0)
+ pr_err("%s sp_unshare failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+}
+
+static long dev_sp_k2u(unsigned long __user *arg)
+{
+ struct sp_make_share_info info;
+ void *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ addr = mg_sp_make_share_k2u(info.kva, info.size, info.sp_flags, info.pid, info.spg_id);
+ if (IS_ERR(addr)) {
+ pr_err_ratelimited("%s k2u failed: %ld\n", __func__, PTR_ERR(addr));
+ return PTR_ERR(addr);
+ }
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ ret = mg_sp_unshare(info.addr, info.size, SPG_ID_DEFAULT);
+ if (ret < 0)
+ pr_err("%s sp_unshare failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+}
+
+static long dev_sp_unshare(unsigned long __user *arg)
+{
+ struct sp_make_share_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_unshare(info.addr, info.size, info.spg_id);
+
+ if (ret < 0)
+ pr_err_ratelimited("%s sp_unshare failed: %d\n", __func__, ret);
+
+ return ret;
+}
+
+static long dev_sp_find_group_by_pid(unsigned long __user *arg)
+{
+ struct sp_group_id_by_pid_info info;
+ int ret = 0, num;
+	int spg_ids[1000]; /* assume a process joins at most 1000 groups */
+ int *pspg_ids = spg_ids;
+ int *pnum = #
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ if (info.num) {
+ if (get_user(num, info.num)) {
+ pr_err("get num from user failed\n");
+ return -EFAULT;
+ }
+ } else
+ pnum = NULL;
+
+ if (!info.spg_ids)
+ pspg_ids = NULL;
+
+ ret = mg_sp_group_id_by_pid(info.pid, pspg_ids, pnum);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_group_id_by_pid failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+	if (info.spg_ids && pnum) {
+		ret = copy_to_user(info.spg_ids, spg_ids, sizeof(int) * num);
+		if (ret) {
+			pr_err("copy spg_ids to user failed\n");
+			return -EFAULT;
+		}
+	}
+
+	if (pnum && put_user(num, info.num)) {
+		pr_err("put num to user failed\n");
+		return -EFAULT;
+	}
+
+ return ret;
+}
+
+static long dev_sp_walk_page_range(unsigned long __user *arg)
+{
+ int ret = 0;
+ struct sp_walk_data wdata;
+ struct sp_walk_page_range_info wpinfo;
+
+ ret = copy_from_user(&wpinfo, arg, sizeof(wpinfo));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_walk_page_range(wpinfo.uva, wpinfo.size, current, &wdata);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_walk_page_range failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ wpinfo.pages = wdata.pages;
+ wpinfo.page_count = wdata.page_count;
+ wpinfo.uva_aligned = wdata.uva_aligned;
+ wpinfo.page_size = wdata.page_size;
+ wpinfo.is_hugepage = wdata.is_hugepage;
+ ret = copy_to_user(arg, &wpinfo, sizeof(wpinfo));
+	if (ret) {
+		pr_err("%s copy to user failed, %d\n", __func__, ret);
+ mg_sp_walk_page_free(&wdata);
+ }
+
+ return ret;
+}
+
+static long dev_sp_walk_page_free(unsigned long __user *arg)
+{
+ struct sp_walk_page_range_info wpinfo;
+ struct sp_walk_data wdata;
+ int ret = copy_from_user(&wpinfo, arg, sizeof(wpinfo));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+	wdata.pages = wpinfo.pages;
+	wdata.page_count = wpinfo.page_count;
+	wdata.uva_aligned = wpinfo.uva_aligned;
+	wdata.page_size = wpinfo.page_size;
+	wdata.is_hugepage = wpinfo.is_hugepage;
+	mg_sp_walk_page_free(&wdata);
+
+ return 0;
+}
+
+static long dev_check_memory_node(unsigned long arg)
+{
+ int ret, i;
+ struct sp_walk_data wdata;
+ struct check_memory_node info;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_walk_page_range(info.uva, info.len, current, &wdata);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_walk_page_range failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ for (i = 0; i < wdata.page_count; i++) {
+ struct page *page = wdata.pages[i];
+
+ if (page_to_nid(page) != info.node) {
+ pr_err("check nid failed, i:%d, expect:%d, actual:%d\n",
+ i, info.node, page_to_nid(page));
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+
+out:
+ mg_sp_walk_page_free(&wdata);
+
+ return ret;
+}
+
+static long dev_sp_judge_addr(unsigned long __user *arg)
+{
+ unsigned long addr;
+ int ret = 0;
+
+ ret = copy_from_user(&addr, (void __user *)arg, sizeof(unsigned long));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_is_sharepool_addr(addr);
+
+ return ret;
+}
+
+static long dev_vmalloc(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ char *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+	addr = vmalloc_user(info.size);
+	if (!addr) {
+		pr_err("%s vmalloc_user failed\n", __func__);
+		return -ENOMEM;
+	}
+ /* be convenient for k2u, we set some values in the first two page */
+ if (info.size >= PAGE_SIZE) {
+ addr[0] = 'a';
+ addr[PAGE_SIZE - 1] = 'b';
+ }
+ if (info.size >= 2 * PAGE_SIZE) {
+ addr[PAGE_SIZE] = 'c';
+ addr[2 * PAGE_SIZE - 1] = 'd';
+ }
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ vfree(addr);
+ }
+
+ return ret;
+}
+
+static long dev_kmalloc(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ char *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+	addr = kmalloc(info.size, GFP_KERNEL);
+	if (!addr) {
+		pr_err("%s kmalloc failed\n", __func__);
+		return -ENOMEM;
+	}
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+		kfree(addr);
+ }
+
+ return ret;
+}
+
+static long dev_vmalloc_hugepage(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ char *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+	addr = vmalloc_hugepage_user(info.size);
+	if (!addr) {
+		pr_err("%s vmalloc_hugepage_user failed\n", __func__);
+		return -ENOMEM;
+	}
+ /* be convenient for k2u, we set some values in the first two hugepage */
+ if (info.size >= PMD_SIZE) {
+ addr[0] = 'a';
+ addr[PMD_SIZE - 1] = 'b';
+ }
+ if (info.size >= 2 * PMD_SIZE) {
+ addr[PMD_SIZE] = 'c';
+ addr[2 * PMD_SIZE - 1] = 'd';
+ }
+
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ vfree(addr);
+ }
+
+ return ret;
+}
+
+static long dev_vfree(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ vfree((void *)info.addr);
+
+ return 0;
+}
+
+static long dev_kfree(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ kfree((void *)info.addr);
+
+ return 0;
+}
+
+static int kern_addr_check(unsigned long addr)
+{
+ /* TODO: */
+ if (!addr)
+ return 0;
+
+ return 1;
+}
+
+static long dev_karea_access(const void __user *arg)
+{
+ int ret, i;
+ struct karea_access_info info;
+
+ ret = copy_from_user(&info, arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ if (!kern_addr_check(info.addr)) {
+ pr_err("%s kaddr check failed\n", __func__);
+ return -EFAULT;
+ }
+
+	if (info.mod == KAREA_CHECK) {
+		for (i = 0; i < info.size; i++)
+			if (((char *)info.addr)[i] != info.value)
+				return -EINVAL;
+	} else
+		memset((void *)info.addr, info.value, info.size);
+
+ return 0;
+}
+
+static long dev_sp_config_dvpp_range(const void __user *arg)
+{
+ struct sp_config_dvpp_range_info info;
+
+ int ret = copy_from_user(&info, arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ return mg_sp_config_dvpp_range(info.start, info.size, info.device_id, info.pid) ? 0 : 1;
+}
+
+int func1(struct notifier_block *nb, unsigned long action, void *data)
+{
+	pr_info("%s is triggered", __func__);
+ return 0;
+}
+
+int func2(struct notifier_block *nb, unsigned long action, void *data)
+{
+	pr_info("%s is triggered", __func__);
+
+ /*
+ if (spg->id == 2)
+ pr_info("sp group 2 exits.");
+ else
+ pr_info("sp group %d exits, skipped by %s", spg->id, __FUNCTION__);
+*/
+ return 0;
+}
+
+static struct notifier_block nb1 = {
+ .notifier_call = func1,
+};
+static struct notifier_block nb2 = {
+ .notifier_call = func2,
+};
+
+static long dev_register_notifier_block(unsigned long __user *arg)
+{
+ return 0;
+#if 0
+	// register a notifier block on the chain
+ struct sp_notifier_block_info info;
+
+ int ret = copy_from_user(&info, arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ if (info.i == 1) {
+ ret = sp_register_notifier(&nb1);
+ if (ret != 0)
+ pr_info("register func1 failed, errno is %d", ret);
+ else
+ pr_info("register notifier for func 1 success.");
+ }
+ else if (info.i == 2) {
+ ret = sp_register_notifier(&nb2);
+ if (ret != 0)
+ pr_info("register func2 failed, errno is %d", ret);
+ else
+ pr_info("register notifier for func 2 success.");
+ } else {
+ pr_info("not valid user arg");
+ ret = -1;
+ }
+
+ return ret;
+#endif
+}
+
+static long dev_unregister_notifier_block(unsigned long __user *arg)
+{
+ return 0;
+#if 0
+	// unregister a notifier block from the chain
+ struct sp_notifier_block_info info;
+
+ int ret = copy_from_user(&info, arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+ if (info.i == 1) {
+ ret = sp_unregister_notifier(&nb1);
+ if (ret != 0)
+ pr_info("unregister func1 failed, errno is %d", ret);
+ else
+ pr_info("unregister func1 success.");
+ }
+ else if (info.i == 2) {
+ ret = sp_unregister_notifier(&nb2);
+ if (ret != 0)
+ pr_info("unregister func2 failed, errno is %d", ret);
+ else
+ pr_info("unregister func2 success.");
+ }
+ else {
+ pr_info("not valid user arg");
+ ret = -1;
+ }
+
+ return ret;
+#endif
+}
+
+static long dev_sp_del_from_group(unsigned long __user *arg)
+{
+ return 0;
+#if 0
+ struct sp_del_from_group_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_del_from_group_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_group_del_task(info.pid, info.spg_id);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_group_del_task failed: %d, pid is %d, spg_id is %d\n",
+ __func__, ret, info.pid, info.spg_id);
+ return ret;
+ }
+
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_del_from_group_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+#endif
+}
+
+static long dev_sp_id_of_curr(unsigned long __user *arg)
+{
+ int ret, spg_id;
+ struct sp_id_of_curr_info info;
+
+ spg_id = mg_sp_id_of_current();
+ if (spg_id <= 0) {
+ pr_err("get id of current failed %d\n", spg_id);
+ return spg_id;
+ }
+
+ info.spg_id = spg_id;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_id_of_curr_info));
+ if (ret)
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+
+ return ret;
+}
+
+/*
+ * Usage scenario in the low-level software stack:
+ * 1. Process A calls mmap on the driver's device file to reserve address space.
+ * 2. Process B (the host process) dispatches a task to process A.
+ * 3. Triggered by B, a worker thread on the device side allocates memory for A
+ *    and builds the page tables (the mm was saved in the mmap kernel callback).
+ *    The memory is not allocated in mmap because it is only actually used here.
+ * 4. A separate ioctl interface is provided for freeing the memory.
+ *
+ * Allocate hugepages and map them into the user process:
+ * 1. If no addr is given (NULL), call vm_mmap to reserve memory for the process;
+ *    otherwise use the given addr, which the caller must keep 2M-aligned.
+ * 2. remote: whether the allocation and page-table setup run in the current process.
+ */
+static long dev_alloc_huge_memory(struct file *file, void __user *arg)
+{
+ return -EFAULT;
+#if 0
+ int ret;
+ unsigned long addr, off, size;
+ struct vm_area_struct *vma;
+ struct alloc_huge_memory alloc_info;
+ struct page **pages;
+
+ ret = copy_from_user(&alloc_info, arg, sizeof(alloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return -EFAULT;
+ }
+
+ size = ALIGN(alloc_info.size, PMD_SIZE);
+	// TODO: validate input arguments
+ if (!alloc_info.addr) {
+ /*
+		 * Huge-page mappings into userspace must be 2M-aligned, so reserve an extra 2M of virtual address space to make the alignment below easy.
+ */
+ addr = vm_mmap(file, 0, size + PMD_SIZE, PROT_READ | PROT_WRITE,
+ MAP_SHARED, 0);
+ if ((long)addr < 0) {
+ pr_err("vm_mmap failed, %ld\n", (long)addr);
+ return addr;
+ }
+ addr = ALIGN(addr, PMD_SIZE);
+ } else
+ addr = alloc_info.addr;
+
+ vma = find_vma(current->mm, addr);
+ if (!range_in_vma(vma, addr, addr + size)) {
+ pr_err("invalid address\n");
+ return -EINVAL;
+ }
+
+ pages = kzalloc((size / PMD_SIZE + 1) * sizeof(*pages), GFP_KERNEL);
+ if (!pages) {
+ pr_err("alloc vma private pages array failed\n");
+ return -ENOMEM;
+ }
+ vma->vm_private_data = pages;
+
+ for (off = 0; off < size; off += PMD_SIZE) {
+ struct page *page;
+
+ page = hugetlb_alloc_hugepage(alloc_info.nid, alloc_info.flags);
+ if (!page) {
+ pr_err("alloc hugepage failed, nid:%d, flags:%d\n", alloc_info.nid, alloc_info.flags);
+			return -ENOMEM; // TODO: clean up resources
+ }
+ *(pages++) = page;
+
+ ret = hugetlb_insert_hugepage_pte_by_pa(current->mm, addr + off,
+ __pgprot(pgprot_val(vma->vm_page_prot) & ~PTE_RDONLY), page_to_phys(page));
+ if (ret < 0) {
+ pr_err("insert hugepage failed, %d\n", ret);
+			return ret; // TODO: clean up resources
+ }
+		// Without the add_mm_counter() line below, the user process hits a
+		// BadRss error on exit, because the default page-table teardown path
+		// subtracts the huge-page mappings established here. With the line, the
+		// ko fails to insert on 5.10 because the symbol is not exported.
+ //add_mm_counter(current->mm, mm_counter_file(page), HPAGE_PMD_NR);
+ }
+
+ if (!alloc_info.addr) {
+ alloc_info.addr = addr;
+ if (copy_to_user(arg, &alloc_info, sizeof(alloc_info))) {
+ pr_err("copy_to_user failed\n");
+			return -EFAULT; // TODO: clean up resources
+ }
+ }
+
+ return 0;
+#endif
+}
+
+static unsigned long test_alloc_hugepage(unsigned long size, int nid,
+ nodemask_t *nodemask)
+{
+	pr_err_ratelimited("test_alloc_hugepage: executed successfully.\n");
+ return -ENOMEM;
+}
+
+static long dev_hpage_reg_test_suite(void *arg)
+{
+ int ret;
+
+ ret = sp_register_hugepage_allocator(NULL);
+ if (ret != -EINVAL) {
+ pr_err_ratelimited("%s expect return -EINVAL, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ /* First call succeeds */
+ ret = sp_register_hugepage_allocator(test_alloc_hugepage);
+ if (ret != 0) {
+ pr_err_ratelimited("%s expect return 0, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ /* Second call fails with -EBUSY */
+ ret = sp_register_hugepage_allocator(test_alloc_hugepage);
+ if (ret != -EBUSY) {
+ pr_err_ratelimited("%s expect return -EBUSY, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static long dev_hpage_reg_after_alloc(void *arg)
+{
+ int ret;
+
+ ret = sp_register_hugepage_allocator(test_alloc_hugepage);
+ if (ret != -EBUSY) {
+ pr_err_ratelimited("%s expect return -EBUSY, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static long dev_hpage_reg_exec(void *arg)
+{
+ int ret;
+
+ ret = sp_register_hugepage_allocator(test_alloc_hugepage);
+ if (ret != 0) {
+ pr_err_ratelimited("%s expect return 0, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/* in this function, wdata == NULL */
+static long dev_sp_walk_page_range_null(unsigned long __user *arg)
+{
+ int ret = 0;
+ struct sp_walk_page_range_info wpinfo;
+
+ ret = copy_from_user(&wpinfo, arg, sizeof(wpinfo));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+ ret = mg_sp_walk_page_range(wpinfo.uva, wpinfo.size, current, NULL);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_walk_page_range failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ return ret;
+}
+
+long dev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ int ret = 0;
+
+ if (!arg)
+ return -EINVAL;
+
+ switch (cmd) {
+ case SP_IOCTL_ADD_GROUP:
+ ret = dev_sp_add_group((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_ALLOC:
+ ret = dev_sp_alloc((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_FREE:
+ ret = dev_sp_free((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_U2K:
+ ret = dev_sp_u2k((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_K2U:
+ ret = dev_sp_k2u((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_UNSHARE:
+ ret = dev_sp_unshare((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_FIND_GROUP_BY_PID:
+ ret = dev_sp_find_group_by_pid((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_WALK_PAGE_RANGE:
+ ret = dev_sp_walk_page_range((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_WALK_PAGE_FREE:
+ ret = dev_sp_walk_page_free((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_CHECK_MEMORY_NODE:
+ ret = dev_check_memory_node(arg);
+ break;
+ case SP_IOCTL_JUDGE_ADDR:
+ ret = dev_sp_judge_addr((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_VMALLOC:
+ ret = dev_vmalloc((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_VMALLOC_HUGEPAGE:
+ ret = dev_vmalloc_hugepage((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_VFREE:
+ ret = dev_vfree((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KACCESS:
+ ret = dev_karea_access((void __user *)arg);
+ break;
+ case SP_IOCTL_CONFIG_DVPP_RANGE:
+ ret = dev_sp_config_dvpp_range((void __user *)arg);
+ break;
+ case SP_IOCTL_REGISTER_NOTIFIER_BLOCK:
+ ret = dev_register_notifier_block((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_UNREGISTER_NOTIFIER_BLOCK:
+ ret = dev_unregister_notifier_block((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_DEL_FROM_GROUP:
+ ret = dev_sp_del_from_group((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_ID_OF_CURRENT:
+ ret = dev_sp_id_of_curr((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_ALLOC_HUGE_MEMORY:
+		ret = dev_alloc_huge_memory(file, (void __user *)arg);
+ break;
+ case SP_IOCTL_WALK_PAGE_RANGE_NULL:
+ ret = dev_sp_walk_page_range_null((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KTHREAD_START:
+ ret = dev_sp_kthread_start((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KTHREAD_END:
+ ret = dev_sp_kthread_end((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KMALLOC:
+ ret = dev_kmalloc((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KFREE:
+ ret = dev_kfree((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_HPAGE_REG_TESTSUITE:
+ ret = dev_hpage_reg_test_suite((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_HPAGE_REG_AFTER_ALLOC:
+ ret = dev_hpage_reg_after_alloc((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_HPAGE_REG_EXEC:
+ ret = dev_hpage_reg_exec((unsigned long __user *)arg);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+#if 0
+static void sp_vm_close(struct vm_area_struct *vma)
+{
+ struct page **pages = vma->vm_private_data;
+ if (!pages)
+ return;
+
+ for (; *pages; pages++)
+ put_page(*pages);
+
+ kfree(vma->vm_private_data);
+ vma->vm_private_data = NULL;
+}
+
+static struct vm_operations_struct sp_vm_ops = {
+ .close = sp_vm_close,
+};
+
+#define VM_HUGE_SPECIAL 0x800000000
+static int dev_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ pr_info("wws: vma range: [%#lx, %#lx], vm_mm:%pK\n", vma->vm_start, vma->vm_end, vma->vm_mm);
+	// Mark that this VMA carries huge-page mappings; used on 4.19, consumed by
+	// walk_page_range in the u2k flow.
+	vma->vm_flags |= VM_HUGE_SPECIAL;
+	vma->vm_flags |= VM_DONTCOPY; // mmap'ed device memory must not be forked, otherwise special handling is needed
+ vma->vm_ops = &sp_vm_ops;
+ return 0;
+}
+#endif
+
+struct file_operations fops = {
+ .owner = THIS_MODULE,
+ .open = dev_open,
+ .unlocked_ioctl = dev_ioctl,
+// .mmap = dev_mmap,
+};
+
+/*
+ * Before userspace can use the device, create the node: mknod DEVICE_FILE_NAME c MAJOR_NUM 0
+ * e.g. insmod sharepool_dev.ko && mknod sharepool_dev c 100 0
+ */
+static int __init dev_init_module(void)
+{
+ int ret;
+
+ ret = register_chrdev(MAJOR_NUM, DEVICE_FILE_NAME, &fops);
+
+ if (ret < 0) {
+ pr_err("error in sharepool_init_module: %d\n", ret);
+ return ret;
+ }
+
+ pr_info("register share pool device success. the major device number is %d\n", MAJOR_NUM);
+ return 0;
+}
+
+static void __exit dev_cleanup_module(void)
+{
+ unregister_chrdev(MAJOR_NUM, DEVICE_FILE_NAME);
+}
+
+module_init(dev_init_module);
+module_exit(dev_cleanup_module);
+
+MODULE_DESCRIPTION("share pool device driver");
+MODULE_LICENSE("GPL v2");
+
diff --git a/tools/testing/sharepool/module/sharepool_dev.h b/tools/testing/sharepool/module/sharepool_dev.h
new file mode 100644
index 000000000000..769d1cc12f57
--- /dev/null
+++ b/tools/testing/sharepool/module/sharepool_dev.h
@@ -0,0 +1,149 @@
+/*
+ * sharepool_dev.h - the header file with the ioctl definitions.
+ *
+ * The declarations here have to be in a header file, because
+ * they need to be known both to the kernel module
+ * (in sharepool_dev.c) and the process / shared lib calling ioctl.
+ */
+
+#ifndef SHAREPOOL_DEV_H
+#define SHAREPOOL_DEV_H
+
+#include <linux/ioctl.h>
+
+/* major num can be changed when necessary */
+#define MAJOR_NUM 100
+
+#define DEVICE_FILE_NAME "sharepool_dev"
+struct sp_kthread_info {
+ unsigned long addr;
+ unsigned long size;
+ int spg_id;
+ int type;
+};
+
+struct sp_add_group_info {
+ int pid;
+ int prot;
+ int spg_id;
+ unsigned long flag;
+};
+
+struct sp_alloc_info {
+ unsigned long addr; /* return value */
+ unsigned long size;
+ unsigned long flag;
+ int spg_id;
+};
+
+struct sp_make_share_info {
+ unsigned long addr; /* return value */
+ unsigned long uva; /* for u2k */
+ unsigned long kva; /* for k2u */
+ unsigned long size;
+ unsigned long sp_flags; /* for k2u */
+ int pid;
+ int spg_id;
+ bool u2k_checker;
+ bool u2k_hugepage_checker;
+};
+
+struct sp_walk_page_range_info {
+ unsigned long uva;
+ unsigned long size;
+ struct page **pages;
+ unsigned int page_count;
+ unsigned long uva_aligned;
+ unsigned long page_size;
+ bool is_hugepage;
+};
+
+/* for vmalloc_user and vmalloc_hugepage_user */
+struct vmalloc_info {
+ unsigned long addr;
+ unsigned long size;
+};
+
+#define KAREA_CHECK 0
+#define KAREA_SET 1
+struct karea_access_info {
+ int mod;
+ char value;
+ unsigned long addr;
+ unsigned long size;
+};
+
+struct sp_config_dvpp_range_info {
+ unsigned long start;
+ unsigned long size;
+ int device_id; // must be zero
+ int pid;
+};
+
+struct sp_group_id_by_pid_info {
+ int pid;
+ int *spg_ids;
+ int *num;
+};
+
+struct sp_notifier_block_info {
+ int i;
+};
+
+struct sp_del_from_group_info {
+ int pid;
+ int spg_id;
+};
+
+struct sp_id_of_curr_info {
+ int spg_id;
+};
+
+#define HUGETLB_ALLOC_NONE 0x00
+#define HUGETLB_ALLOC_NORMAL 0x01 /* normal hugepage */
+#define HUGETLB_ALLOC_BUDDY 0x02 /* buddy hugepage */
+
+struct alloc_huge_memory {
+	int nid;            // NUMA node id
+	int flags;          // type of huge page to allocate: 0, 1 or 2
+	unsigned long addr; // returned address
+	unsigned long size; // requested size
+};
+
+struct check_memory_node {
+ unsigned long uva;
+ unsigned long len;
+ int node;
+};
+
+#define SP_IOCTL_ADD_GROUP _IOWR(MAJOR_NUM, 0, struct sp_add_group_info *)
+#define SP_IOCTL_ALLOC _IOWR(MAJOR_NUM, 1, struct sp_alloc_info *)
+#define SP_IOCTL_FREE _IOW(MAJOR_NUM, 2, struct sp_alloc_info *)
+#define SP_IOCTL_U2K _IOWR(MAJOR_NUM, 3, struct sp_u2k_info *)
+#define SP_IOCTL_K2U _IOWR(MAJOR_NUM, 4, struct sp_k2u_info *)
+#define SP_IOCTL_UNSHARE _IOW(MAJOR_NUM, 5, struct sp_unshare_info *)
+#define SP_IOCTL_FIND_GROUP_BY_PID _IOWR(MAJOR_NUM, 6, struct sp_group_id_by_pid_info *)
+#define SP_IOCTL_WALK_PAGE_RANGE _IOWR(MAJOR_NUM, 7, struct sp_walk_page_range_info *)
+#define SP_IOCTL_WALK_PAGE_FREE _IOW(MAJOR_NUM, 8, struct sp_walk_page_range_info *)
+#define SP_IOCTL_JUDGE_ADDR _IOW(MAJOR_NUM, 9, unsigned long)
+#define SP_IOCTL_VMALLOC _IOWR(MAJOR_NUM, 10, struct vmalloc_info *)
+#define SP_IOCTL_VMALLOC_HUGEPAGE _IOWR(MAJOR_NUM, 11, struct vmalloc_info *)
+#define SP_IOCTL_VFREE _IOW(MAJOR_NUM, 12, struct vmalloc_info *)
+#define SP_IOCTL_KACCESS _IOW(MAJOR_NUM, 13, struct karea_access_info *)
+#define SP_IOCTL_CONFIG_DVPP_RANGE _IOW(MAJOR_NUM, 14, struct sp_config_dvpp_range_info *)
+#define SP_IOCTL_REGISTER_NOTIFIER_BLOCK _IOWR(MAJOR_NUM, 15, struct sp_notifier_block_info *)
+#define SP_IOCTL_DEL_FROM_GROUP _IOWR(MAJOR_NUM, 16, struct sp_del_from_group_info *)
+#define SP_IOCTL_ID_OF_CURRENT _IOW(MAJOR_NUM, 17, struct sp_id_of_curr_info *)
+#define SP_IOCTL_ALLOC_HUGE_MEMORY _IOWR(MAJOR_NUM, 18, struct alloc_huge_memory)
+#define SP_IOCTL_CHECK_MEMORY_NODE _IOW(MAJOR_NUM, 19, struct check_memory_node)
+#define SP_IOCTL_UNREGISTER_NOTIFIER_BLOCK _IOWR(MAJOR_NUM, 20, struct sp_notifier_block_info *)
+#define SP_IOCTL_WALK_PAGE_RANGE_NULL _IOWR(MAJOR_NUM, 21, struct sp_walk_page_range_info *)
+#define SP_IOCTL_KTHREAD_START _IOWR(MAJOR_NUM, 22, struct sp_kthread_info *)
+#define SP_IOCTL_KTHREAD_END _IOWR(MAJOR_NUM, 23, struct sp_kthread_info *)
+#define SP_IOCTL_KMALLOC _IOWR(MAJOR_NUM, 24, struct vmalloc_info *)
+#define SP_IOCTL_KFREE _IOW(MAJOR_NUM, 25, struct vmalloc_info *)
+#define SP_IOCTL_HPAGE_REG_TESTSUITE _IOW(MAJOR_NUM, 26, void *)
+#define SP_IOCTL_HPAGE_REG_AFTER_ALLOC _IOW(MAJOR_NUM, 27, void *)
+#define SP_IOCTL_HPAGE_REG_EXEC _IOW(MAJOR_NUM, 28, void *)
+#endif
+
diff --git a/tools/testing/sharepool/test.sh b/tools/testing/sharepool/test.sh
new file mode 100755
index 000000000000..0f00c8b21248
--- /dev/null
+++ b/tools/testing/sharepool/test.sh
@@ -0,0 +1,55 @@
+#!/bin/sh
+
+set -x
+
+trap "rmmod sharepool_dev && rm -f sharepool_dev && echo 0 > /proc/sys/vm/nr_overcommit_hugepages" EXIT
+
+insmod sharepool_dev.ko
+if [ $? -ne 0 ] ;then
+ echo "insmod failed"
+ exit 1
+fi
+mknod sharepool_dev c 100 0
+
+if uname -r | grep "^6.6" > /dev/null ; then
+ echo 10000000 > /proc/sys/vm/nr_overcommit_hugepages
+fi
+export LD_LIBRARY_PATH=.
+
+# start the debug monitoring process
+./test_mult_process/test_debug_loop > debug.log &
+
+testlist="test_all api_test.sh function_test.sh scenario_test.sh dts_bugfix_test.sh test_mult_process.sh"
+
+while getopts "as" opt; do
+ case $opt in
+ a)
+		# -a: full test run (including stress tests)
+ testlist="$testlist stress_test.sh"
+ ;;
+ s)
+		# -s: stress tests only
+ testlist="stress_test.sh"
+ ;;
+ \?)
+		echo "Usage: $0 [-a] [-s]"
+ exit 1
+ ;;
+ esac
+done
+
+for line in $testlist
+do
+ ./$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase $line failed"
+ killall test_debug_loop
+ exit 1
+ fi
+done
+
+killall test_debug_loop
+echo ">>>> SHAREPOOL ALL TESTCASES FINISH <<<<"
+
+# abnormal-call scenario testcases
+#./reliability_test.sh
diff --git a/tools/testing/sharepool/test_end.sh b/tools/testing/sharepool/test_end.sh
new file mode 100755
index 000000000000..6231046db778
--- /dev/null
+++ b/tools/testing/sharepool/test_end.sh
@@ -0,0 +1,8 @@
+#!/bin/sh
+set -x
+rmmod sharepool_dev.ko
+rm -rf sharepool_dev
+
+if uname -r | grep "^6.6" > /dev/null ; then
+ echo 0 > /proc/sys/vm/nr_overcommit_hugepages
+fi
diff --git a/tools/testing/sharepool/test_loop.sh b/tools/testing/sharepool/test_loop.sh
new file mode 100755
index 000000000000..94789c720755
--- /dev/null
+++ b/tools/testing/sharepool/test_loop.sh
@@ -0,0 +1,35 @@
+#!/bin/sh
+
+i=1
+
+while true
+do
+ echo ================= TEST NO. $i ===============
+	i=$((i + 1))
+
+ ./test.sh
+ if [ $? -ne 0 ]
+ then
+ echo test failed
+ exit 1
+ fi
+
+ sleep 3 # dropping spa and spg may have latency
+ free -m
+
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ]
+ then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+ exit 1
+ fi
+
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ]
+ then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+ exit 1
+ fi
+done
diff --git a/tools/testing/sharepool/test_prepare.sh b/tools/testing/sharepool/test_prepare.sh
new file mode 100755
index 000000000000..72392a5b0aff
--- /dev/null
+++ b/tools/testing/sharepool/test_prepare.sh
@@ -0,0 +1,8 @@
+#!/bin/sh
+set -x
+mknod sharepool_dev c 100 0
+insmod sharepool_dev.ko
+
+if uname -r | grep "^6.6" > /dev/null ; then
+ echo 10000000 > /proc/sys/vm/nr_overcommit_hugepages
+fi
diff --git a/tools/testing/sharepool/testcase/Makefile b/tools/testing/sharepool/testcase/Makefile
new file mode 100644
index 000000000000..d40d1b325142
--- /dev/null
+++ b/tools/testing/sharepool/testcase/Makefile
@@ -0,0 +1,12 @@
+MODULEDIR:=test_all test_mult_process api_test function_test reliability_test performance_test dts_bugfix_test scenario_test stress_test
+
+export CC:=$(CROSS_COMPILE)gcc $(sharepool_extra_ccflags)
+
+all:tooldir
+
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) CC="$(CC) $(sharepool_extra_ccflags)" -C $$n; done
+install:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
diff --git a/tools/testing/sharepool/testcase/api_test/Makefile b/tools/testing/sharepool/testcase/api_test/Makefile
new file mode 100644
index 000000000000..664007daa9e6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/Makefile
@@ -0,0 +1,14 @@
+MODULEDIR:=is_sharepool_addr sp_alloc sp_config_dvpp_range sp_free sp_group_add_task sp_group_id_by_pid sp_make_share_k2u sp_make_share_u2k sp_unshare sp_walk_page_range_and_free sp_id_of_current sp_numa_maps
+
+manual_test:=test_sp_config_dvpp_range
+
+all:tooldir
+
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n; done
+install:
+ mkdir -p $(TOOL_BIN_DIR)/api_test && cp api_test.sh $(TOOL_BIN_DIR)
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+ mkdir -p $(TOOL_BIN_DIR)/api_test/manual_test && cd $(TOOL_BIN_DIR)/api_test/ && mv $(manual_test) manual_test && cd -
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
diff --git a/tools/testing/sharepool/testcase/api_test/api_test.sh b/tools/testing/sharepool/testcase/api_test/api_test.sh
new file mode 100755
index 000000000000..29db2e449dea
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/api_test.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+set -x
+
+echo 0 > /proc/sys/vm/sharepool_ac_mode
+
+for line in $(ls ./api_test | grep -v manual_test)
+do
+ ./api_test/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase api_test/$line failed"
+ exit 1
+ fi
+ cat /proc/meminfo
+done
diff --git a/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/Makefile b/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/test_is_sharepool_addr.c b/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/test_is_sharepool_addr.c
new file mode 100644
index 000000000000..3ba63605ed51
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/test_is_sharepool_addr.c
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Dec 19 11:29:06 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define MMAP_SHARE_POOL_START 0xe80000000000UL
+#define MMAP_SHARE_POOL_END 0xf80000000000UL
+
+enum TEST_RESULT {
+ SUCCESS,
+ FAIL
+};
+
+/*
+ * testcase1: boundary value test: MMAP_SHARE_POOL_START and MMAP_SHARE_POOL_END; expected to succeed.
+ */
+
+static int testcase1(void)
+{
+	bool judge_ret = false;
+ judge_ret = ioctl_judge_addr(dev_fd, MMAP_SHARE_POOL_START);
+ if (judge_ret != true)
+ return -1;
+
+ judge_ret = ioctl_judge_addr(dev_fd, MMAP_SHARE_POOL_END);
+ if (judge_ret != false)
+ return -1;
+
+ return 0;
+}
+
+/*
+ * testcase2: DVPP address test: dvpp_start and dvpp_start + size - 1; expected to succeed.
+ */
+
+static int testcase2(void)
+{
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G - 1,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+ int ret;
+	bool judge_ret = false;
+
+ ret = ioctl_config_dvpp_range(dev_fd, &cdr_info);
+
+ if (ret < 0) {
+ pr_info("dvpp config failed. errno: %d", errno);
+ return ret;
+ } else
+ pr_info("dvpp config success.");
+
+ judge_ret = ioctl_judge_addr(dev_fd, cdr_info.start);
+ if (judge_ret != true)
+ return -1;
+
+ judge_ret = ioctl_judge_addr(dev_fd, cdr_info.start + cdr_info.size - 1);
+ if (judge_ret != true)
+ return -1;
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "Boundary value test: MMAP_SHARE_POOL_START, MMAP_SHARE_POOL_END; expected to succeed.")
+	TESTCASE(testcase2, "DVPP address test: dvpp_start, dvpp_end - 1; expected to succeed.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/Makefile b/tools/testing/sharepool/testcase/api_test/sp_alloc/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc.c b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc.c
new file mode 100644
index 000000000000..a6cc430f18e6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc.c
@@ -0,0 +1,543 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 100
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+/*
+ * Precondition: the process joins a group first.
+ * testcase1: alloc share-group memory with flag 0; expected to succeed.
+ * testcase2: alloc with flag HP and an unaligned size; expected to get memory with size aligned up to the huge page size.
+ * testcase3: alloc with flag HPONLY; expected to succeed, or return ENOMEM when space is insufficient.
+ * testcase4: alloc with flag DVPP; expected to succeed, with the virtual address range inside the dvpp address space.
+ * testcase5: alloc with flag DVPP|HP; expected size aligned up to the huge page size, inside the dvpp address space.
+ * testcase6: alloc with flag DVPP|ONLY; expected to succeed inside the dvpp address space, or return ENOMEM when space is insufficient.
+ * testcase7: alloc with an invalid flag; expected to fail with EINVAL.
+ * testcase8: size of 0, or far beyond physical memory; expected to fail.
+ * testcase9: spg_id is NONE, an unused in-range value, DVPP, or out of range; expected to fail.
+ * testcase10: alloc memory on a given memory node; flag carries the user-supplied device id mapping to that node.
+ * testcase11: a single process allocates 8G; check that the debug statistics print correctly.
+ */
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ //pr_info("testcase1 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase2(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .size = (HGPAGE_NUM * PMD_SIZE - 1),
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase2 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ if (alloc_info.addr != ALIGN_UP(alloc_info.addr, PMD_SIZE)) {
+		pr_info("testcase2 ioctl_alloc addr = %#lx is not aligned", alloc_info.addr);
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase3(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .size = HGPAGE_NUM * PMD_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0 && errno == ENOMEM) {
+ //pr_info("testcase3 ioctl_alloc failed as expected, errno: ENOMEM");
+ return 0;
+ } else if (ret != 0) {
+ //pr_info("testcase3 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase4(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ };
+
+ unsigned long dvpp_size1;
+ unsigned long dvpp_size2;
+ char cmd[CMD_LEN] = "cat /proc/sharepool/spa_stat | grep \"dvpp size\" | awk '{print $4}'";
+ FILE *p_file1 = NULL;
+ FILE *p_file2 = NULL;
+ if ((p_file1 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase4 popen1 failed");
+ return -1;
+ }
+ fscanf(p_file1, "%lu", &dvpp_size1);
+ pclose(p_file1);
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase4 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ if ((p_file2 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase4 popen2 failed");
+ return -1;
+ }
+ fscanf(p_file2, "%lu", &dvpp_size2);
+ pclose(p_file2);
+ if ((dvpp_size2 - dvpp_size1) * UNIT != alloc_info.size) {
+ pr_info("testcase4 dvpp_size check failed, dvpp_size1 %lu, dvpp_size2 %lu", dvpp_size1, dvpp_size2);
+ return -1;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase5(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .size = (HGPAGE_NUM * PMD_SIZE - 1),
+ .spg_id = 1,
+ };
+
+ unsigned long dvpp_size1;
+ unsigned long dvpp_size2;
+ char cmd[CMD_LEN] = "cat /proc/sharepool/spa_stat | grep \"dvpp size\" | awk '{print $4}'";
+ FILE *p_file1 = NULL;
+ FILE *p_file2 = NULL;
+ if ((p_file1 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase5 popen1 failed");
+ return -1;
+ }
+ fscanf(p_file1, "%lu", &dvpp_size1);
+ pclose(p_file1);
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase5 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ if (alloc_info.addr != ALIGN_UP(alloc_info.addr, PMD_SIZE)) {
+		pr_info("testcase5 ioctl_alloc addr = %#lx is not aligned", alloc_info.addr);
+ }
+
+ if ((p_file2 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase5 popen2 failed");
+ return -1;
+ }
+ fscanf(p_file2, "%lu", &dvpp_size2);
+ pclose(p_file2);
+ if ((dvpp_size2 - dvpp_size1) * UNIT != (alloc_info.size + 1)) {
+ pr_info("testcase5 dvpp_size check failed, dvpp_size1 %lu, dvpp_size2 %lu", dvpp_size1, dvpp_size2);
+ return -1;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase6(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP,
+ .size = HGPAGE_NUM * PMD_SIZE,
+ .spg_id = 1,
+ };
+
+ unsigned long dvpp_size1;
+ unsigned long dvpp_size2;
+ char cmd[CMD_LEN] = "cat /proc/sharepool/spa_stat | grep \"dvpp size\" | awk '{print $4}'";
+ FILE *p_file1 = NULL;
+ FILE *p_file2 = NULL;
+ if ((p_file1 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase6 popen1 failed");
+ return -1;
+ }
+ fscanf(p_file1, "%lu", &dvpp_size1);
+ pclose(p_file1);
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0 && errno == ENOMEM) {
+ pr_info("testcase6 ioctl_alloc failed as expected, errno: ENOMEM");
+ return 0;
+ } else if (ret != 0) {
+ pr_info("testcase6 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ if ((p_file2 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase6 popen2 failed");
+ return -1;
+ }
+ fscanf(p_file2, "%lu", &dvpp_size2);
+ pclose(p_file2);
+ if ((dvpp_size2 - dvpp_size1) * UNIT != alloc_info.size) {
+ pr_info("testcase6 dvpp_size check failed, dvpp_size1 %lu, dvpp_size2 %lu", dvpp_size1, dvpp_size2);
+ return -1;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase7(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = (1 << 3),
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .flag = -1,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ };
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0 && errno == EINVAL) {
+ pr_info("testcase7 ioctl_alloc %d failed as expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase7 ioctl_alloc %d failed unexpected, errno: %d", i, errno);
+ } else {
+ if (i == 0)
+ continue;
+ pr_info("testcase7 ioctl_alloc %d success unexpected", i);
+ cleanup(&alloc_infos[i]);
+ ret = -1;
+ }
+ }
+
+ return ret;
+}
+
+static int testcase8(void)
+{
+	// this case triggers OOM; temporarily disabled
+ return 0;
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .size = 0,
+ .spg_id = 1,
+ },
+ {
+ .flag = 0,
+ .size = LARGE_PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ };
+ int errs[] = {EINVAL, EOVERFLOW};
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0 && errno == errs[i]) {
+ //pr_info("testcase8 ioctl_alloc %d failed as expected", i);
+ } else if (ret != 0) {
+ //pr_info("testcase8 ioctl_alloc %d failed unexpected, errno: %d", i, errno);
+ return ret;
+ } else {
+ //pr_info("testcase8 ioctl_alloc %d success unexpected", i);
+ cleanup(&alloc_infos[i]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase9(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = -1,
+ },
+ {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = SPG_ID_AUTO_MIN,
+ },
+ };
+ int errs[] = {EINVAL, ENODEV};
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ errno = 0;
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0 && errno == errs[i]) {
+ pr_info("testcase9 ioctl_alloc %d failed as expected", i);
+ } else if (ret != 0) {
+ pr_info("testcase9 ioctl_alloc %d failed unexpected, errno: %d", i, errno);
+ return ret;
+ } else {
+ pr_info("testcase9 ioctl_alloc %d success unexpected", i);
+ cleanup(&alloc_infos[i]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase10(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .flag = 2,
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .flag = 0x100000000,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .flag = 0x100000002,
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = 1,
+ },
+ };
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0) {
+			pr_info("testcase10 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ } else {
+ sleep(3);
+ cleanup(&alloc_infos[i]);
+ }
+ }
+ return 0;
+}
+
+static int alloc_large_repeat(bool hugepage)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = hugepage ? 1 : 0,
+ .size = ATOMIC_TEST_SIZE,
+ .spg_id = 1,
+ };
+
+ sharepool_print();
+
+ pr_info("start to alloc...");
+ for (int i = 0; i < 5; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("alloc %s failed. errno %d",
+ hugepage ? "huge page" : "normal page",
+ errno);
+ return ret;
+ } else {
+ pr_info("alloc %s success %d time.",
+ hugepage ? "huge page" : "normal page",
+ i + 1);
+ }
+ sharepool_print();
+ mem_show();
+ }
+ return 0;
+}
+
+static int testcase11(void)
+{
+ return alloc_large_repeat(false);
+}
+
+static int testcase12(void)
+{
+ return alloc_large_repeat(true);
+}
+
+int semid;
+static int testcase13(void)
+{
+ int ret = 0;
+ int friend = 0;
+ int spg_id = 1;
+ void *addr;
+
+ semid = sem_create(1234, "sem");
+ friend = fork();
+ if (friend == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add group failed!");
+ exit(-1);
+ }
+ sleep(1);
+ sem_inc_by_one(semid);
+ while (1) {
+
+ }
+ }
+
+ sem_dec_by_one(semid);
+ addr = wrap_sp_alloc(spg_id, 4096, 0);
+ if (addr == (void *)-1) {
+ pr_info("alloc from other group failed as expected.");
+ ret = 0;
+ } else {
+ pr_info("alloc from other group success unexpected.");
+ ret = -1;
+ }
+
+ KILL_CHILD(friend);
+ sem_close(semid);
+ return ret;
+}
+
+/* testcase1: allocate shared-group memory with flag 0. Expect success.
+ * testcase2: allocate shared-group memory with flag HP and an unaligned size. Expect the size to be rounded up to huge-page alignment.
+ * testcase3: allocate shared-group memory with flag HPONLY. Expect success, or ENOMEM when memory is insufficient.
+ * testcase4: allocate shared-group memory with flag DVPP. Expect success, with the virtual address inside the dvpp address space.
+ * testcase5: allocate shared-group memory with flag DVPP|HP. Expect a huge-page-aligned size, with the virtual address inside the dvpp address space.
+ * testcase6: allocate shared-group memory with flag DVPP|ONLY. Expect success with the virtual address inside the dvpp address space, or ENOMEM when memory is insufficient.
+ * testcase7: allocate shared-group memory with an illegal flag. Expect failure with EINVAL.
+ * testcase8: size set to 0 or far beyond physical memory. Expect allocation failure.
+ * testcase9: spg_id is NONE, an unused value in the normal range, DVPP, or out of range. Expect allocation failure.
+ * testcase10: allocate memory on a specified memory node; the flag carries the user-supplied device id mapped to that node.
+ * testcase11: a single process allocates 8G; check that the debug statistics print correctly.
+*/
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "allocate shared-group memory with flag 0; expect success")
+	TESTCASE_CHILD(testcase2, "allocate shared-group memory with flag HP and an unaligned size; expect a huge-page-aligned size")
+	TESTCASE_CHILD(testcase3, "allocate shared-group memory with flag HPONLY; expect success, or ENOMEM when memory is insufficient")
+	TESTCASE_CHILD(testcase4, "allocate shared-group memory with flag DVPP; expect success within the dvpp address space")
+	TESTCASE_CHILD(testcase5, "allocate shared-group memory with flag DVPP|HP; expect a huge-page-aligned size within the dvpp address space")
+	TESTCASE_CHILD(testcase6, "allocate shared-group memory with flag DVPP|ONLY; expect success within the dvpp address space, or ENOMEM when memory is insufficient")
+	TESTCASE_CHILD(testcase7, "allocate shared-group memory with an illegal flag; expect failure with EINVAL")
+	TESTCASE_CHILD(testcase8, "size set to 0 or far beyond physical memory; expect allocation failure")
+	TESTCASE_CHILD(testcase9, "spg_id is NONE, an unused value in the normal range, DVPP, or out of range; expect allocation failure")
+	TESTCASE_CHILD_MANUAL(testcase10, "allocate memory on a specified memory node; the flag carries the user-supplied device id mapped to that node")
+	TESTCASE_CHILD(testcase11, "a single process allocates 1G of normal pages 5 times, then exits; check that the debug statistics print correctly")
+	TESTCASE_CHILD(testcase12, "a single process allocates 1G of huge pages 5 times, then exits; check that the debug statistics print correctly")
+	TESTCASE_CHILD(testcase13, "sp_alloc tries to allocate from a group it has not joined; expect failure")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc2.c b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc2.c
new file mode 100644
index 000000000000..451cb987934d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc2.c
@@ -0,0 +1,131 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon May 31 07:19:22 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+
+#define ADD_GROUP \
+ struct sp_add_group_info ag_info = { \
+ .pid = getpid(), \
+ .prot = PROT_READ | PROT_WRITE, \
+ .spg_id = SPG_ID_AUTO, \
+ }; \
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+/*
+ * size = 0
+ * expect errno EINVAL
+ */
+static int testcase1(void)
+{
+ int ret;
+ pid_t pid;
+
+ ADD_GROUP
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 0,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK_FAIL(ioctl_alloc(dev_fd, &alloc_info), EINVAL, out);
+
+out:
+ return ret;
+}
+
+/*
+ * size = -1
+ * expect errno EINVAL
+ */
+static int testcase2(void)
+{
+ int ret;
+ pid_t pid;
+
+ ADD_GROUP
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = -1,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK_FAIL(ioctl_alloc(dev_fd, &alloc_info), EINVAL, out);
+
+out:
+ return ret;
+}
+
+/*
+ * size = 1 << 48
+ * expect errno EINVAL
+ */
+static int testcase3(void)
+{
+ int ret;
+ pid_t pid;
+
+ ADD_GROUP
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1UL << 48,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK_FAIL(ioctl_alloc(dev_fd, &alloc_info), EINVAL, out);
+
+out:
+ return ret;
+}
+
+/*
+ * size = 1 << 36
+ * triggers OOM; run manually and check for memory leaks or other anomalies
+ */
+static int testcase4(void)
+{
+ int ret;
+ pid_t pid;
+
+ ADD_GROUP
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1UL << 36, // 64G
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+
+out:
+ return ret;
+}
+
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "size = 0, expect EINVAL")
+	TESTCASE_CHILD(testcase2, "size = -1, expect EINVAL")
+	TESTCASE_CHILD(testcase3, "size = 1 << 48, expect EINVAL")
+	TESTCASE_CHILD_MANUAL(testcase4, "size = 1 << 36, triggers OOM, run manually, expect no memory leak")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc3.c b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc3.c
new file mode 100644
index 000000000000..4334b1d365bc
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc3.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jun 07 02:10:29 2021
+ */
+#include <stdio.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+#include <errno.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+/*
+ * Allocate direct (non-group) memory, write to it, then free it.
+ * Expect the allocation to succeed, with no anomaly on write or free.
+ */
+
+#define SIZE_1K (1024UL)
+#define SIZE_PAGE (4 * 1024UL)
+#define SIZE_HUGEPAGE (2 * 1024 * 1024UL)
+
+static int alloc_from_spg_none(unsigned long flag)
+{
+ void *buf;
+ char *cbuf;
+
+ unsigned long size[3] = {
+ SIZE_1K,
+ SIZE_PAGE,
+ SIZE_HUGEPAGE
+ };
+
+	for (int i = 0; i < sizeof(size) / sizeof(size[0]); i++) {
+		buf = (void *)wrap_sp_alloc(SPG_ID_DEFAULT, size[i], flag);
+		if (buf == (void *)-1) {
+			pr_info("alloc failed by size %lu, flag %lu, errno: %d",
+				size[i], flag, errno);
+			return -1;
+		} else {
+			pr_info("alloc success by size %lu, flag %lu. va: %p",
+				size[i], flag, buf);
+			// write the whole buffer
+			cbuf = (char *)buf;
+			for (int j = 0; j < size[i]; j++)
+				*(cbuf + j) = 'A';
+
+			// free the buffer
+			wrap_sp_free(buf);
+		}
+	}
+ }
+
+ return 0;
+}
+
+/* testcase4
+ * N processes concurrently join spg_none, allocate memory concurrently, then exit concurrently
+ */
+#define PROC_NUM 10
+#define REPEAT 10
+#define ALLOC_SIZE SIZE_HUGEPAGE
+#define PRT (PROT_READ | PROT_WRITE)
+int sem_tc4;
+static int testcase_child(bool hugepage)
+{
+ int ret;
+ sem_dec_by_one(sem_tc4);
+
+	if (wrap_sp_alloc(SPG_ID_DEFAULT, ALLOC_SIZE, hugepage ? 2 : 0) == (void *)-1) {
+		pr_info("child %d alloc failed.", getpid());
+ return -1;
+ }
+ pr_info("alloc success. child %d", getpid());
+
+ sem_check_zero(sem_tc4);
+ sleep(3);
+ sem_dec_by_one(sem_tc4);
+ pr_info("exit success. child %d", getpid());
+
+ return 0;
+}
+
+static int testcase(bool hugepage)
+{
+ int ret = 0;
+ int child[PROC_NUM];
+
+ sem_tc4 = sem_create(1234, "sem for testcase4");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ FORK_CHILD_ARGS(child[i], testcase_child(hugepage));
+ }
+
+	// allocate memory concurrently
+ pr_info("all start to allocate...");
+ sem_inc_by_val(sem_tc4, PROC_NUM);
+ sem_check_zero(sem_tc4);
+ sleep(5);
+
+	// exit the group concurrently
+ pr_info("all start to exit...");
+ sem_inc_by_val(sem_tc4, PROC_NUM);
+ sleep(5);
+ sem_check_zero(sem_tc4);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+
+ sem_close(sem_tc4);
+ return 0;
+
+out:
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(child[i]);
+ sem_close(sem_tc4);
+ return -1;
+}
+
+
+static int testcase1(void) { return alloc_from_spg_none(0); }
+static int testcase2(void) { return alloc_from_spg_none(SP_HUGEPAGE);}
+static int testcase3(void) { return alloc_from_spg_none(SP_HUGEPAGE_ONLY); }
+static int testcase4(void) { return testcase(true); }
+static int testcase5(void) { return testcase(false); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "allocate direct memory, normal pages")
+	TESTCASE_CHILD(testcase2, "allocate direct memory, huge pages")
+	TESTCASE_CHILD(testcase3, "allocate direct memory, huge pages only")
+	TESTCASE_CHILD(testcase4, "concurrently allocate direct huge-page memory, write, free; expect success with no anomaly on write or free")
+	TESTCASE_CHILD(testcase5, "concurrently allocate direct normal-page memory, write, free; expect success with no anomaly on write or free")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/test_spa_error.c b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_spa_error.c
new file mode 100644
index 000000000000..d0fec0bfbb23
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_spa_error.c
@@ -0,0 +1,109 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 1
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+#define SPG_ID_AUTO 200000
+#define DAVINCI_IOCTL_VA_TO_PA 0xfff9
+#define DVPP_START (0x100000000000UL)
+#define DVPP_SIZE 0x400000000UL
+
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ } else {
+ ret = ag_info.spg_id;
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ int fd;
+ long err;
+ unsigned long phys_addr;
+ int spg_id = 0;
+
+ spg_id = addgroup();
+ if (spg_id <= 0) {
+ pr_info("spgid <= 0, value: %d", spg_id);
+ return -1;
+ } else {
+ pr_info("spg id %d", spg_id);
+ }
+
+ struct sp_alloc_info alloc_infos[] = {
+ {
+			.flag = ((10UL << 32) | 0x7), // normal huge page
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = spg_id,
+ },
+ };
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed as expected, errno: %d", errno);
+ } else {
+			pr_info("alloc succeeded unexpectedly, va: %lx", alloc_infos[i].addr);
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++)
+ cleanup(&alloc_infos[i]);
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "sp_alloc with an illegal flag; expect sp_area to return an error and sp_alloc to fail")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/Makefile b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/Makefile
new file mode 100644
index 000000000000..ef670b48300a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $< -o $@ -L$(SHARELIB_DIR) -lsharepool_lib -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(INSTALL_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_16.sh b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_16.sh
new file mode 100755
index 000000000000..64b92606094b
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_16.sh
@@ -0,0 +1,49 @@
+qemu-system-aarch64 \
+ -m \
+ 16G \
+ -object memory-backend-ram,size=1G,id=mem0 \
+ -object memory-backend-ram,size=1G,id=mem1 \
+ -object memory-backend-ram,size=1G,id=mem2 \
+ -object memory-backend-ram,size=1G,id=mem3 \
+ -object memory-backend-ram,size=1G,id=mem4 \
+ -object memory-backend-ram,size=1G,id=mem5 \
+ -object memory-backend-ram,size=1G,id=mem6 \
+ -object memory-backend-ram,size=1G,id=mem7 \
+ -object memory-backend-ram,size=1G,id=mem8 \
+ -object memory-backend-ram,size=1G,id=mem9 \
+ -object memory-backend-ram,size=1G,id=mem10 \
+ -object memory-backend-ram,size=1G,id=mem11 \
+ -object memory-backend-ram,size=1G,id=mem12 \
+ -object memory-backend-ram,size=1G,id=mem13 \
+ -object memory-backend-ram,size=1G,id=mem14 \
+ -object memory-backend-ram,size=1G,id=mem15 \
+ -numa node,memdev=mem0,nodeid=0 \
+ -numa node,memdev=mem1,nodeid=1 \
+ -numa node,memdev=mem2,nodeid=2 \
+ -numa node,memdev=mem3,nodeid=3 \
+ -numa node,memdev=mem4,nodeid=4 \
+ -numa node,memdev=mem5,nodeid=5 \
+ -numa node,memdev=mem6,nodeid=6 \
+ -numa node,memdev=mem7,nodeid=7 \
+ -numa node,memdev=mem8,nodeid=8 \
+ -numa node,memdev=mem9,nodeid=9 \
+ -numa node,memdev=mem10,nodeid=10 \
+ -numa node,memdev=mem11,nodeid=11 \
+ -numa node,memdev=mem12,nodeid=12 \
+ -numa node,memdev=mem13,nodeid=13 \
+ -numa node,memdev=mem14,nodeid=14 \
+ -numa node,memdev=mem15,nodeid=15 \
+ -kernel \
+ /home/data/qemu/images/openEuler-22.03-arm64/Image \
+ -drive file=/home/data/qemu/images/openEuler-22.03-arm64/rootfs.qcow2,if=none,format=qcow2,cache=none,id=root -device virtio-blk,drive=root,id=d_root \
+ -device virtio-scsi-pci -drive file=/home/data/qemu/images/disk/disk.img,if=none,format=raw,id=dd_1 -device scsi-hd,drive=dd_1,id=disk_1 \
+ -M virt -cpu cortex-a57 \
+ -smp \
+ 8 \
+ -net nic,model=virtio-net-pci \
+ -net \
+ user,host=10.0.2.2,hostfwd=tcp::10022-:22 \
+ -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare \
+ -append \
+ "console=ttyAMA0 root=/dev/vda2 rw printk.time=y oops=panic panic_on_oops=1 panic_on_warn=1 panic=-1 net.ifnames=0 ftrace_dump_on_oops=orig_cpu debug earlyprintk=serial slub_debug=UZ selinux=0 highres=off earlycon systemd.default_timeout_start_sec=600 crashkernel=256M enable_ascend_share_pool" \
+ -nographic
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_4.sh b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_4.sh
new file mode 100755
index 000000000000..b4d95a5857f4
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_4.sh
@@ -0,0 +1,25 @@
+qemu-system-aarch64 \
+ -m \
+ 16G \
+ -object memory-backend-ram,size=4G,id=mem0 \
+ -object memory-backend-ram,size=4G,id=mem1 \
+ -object memory-backend-ram,size=4G,id=mem2 \
+ -object memory-backend-ram,size=4G,id=mem3 \
+ -numa node,memdev=mem0,nodeid=0 \
+ -numa node,memdev=mem1,nodeid=1 \
+ -numa node,memdev=mem2,nodeid=2 \
+ -numa node,memdev=mem3,nodeid=3 \
+ -kernel \
+ /home/data/qemu/images/openEuler-22.03-arm64/Image \
+ -drive file=/home/data/qemu/images/openEuler-22.03-arm64/rootfs.qcow2,if=none,format=qcow2,cache=none,id=root -device virtio-blk,drive=root,id=d_root \
+ -device virtio-scsi-pci -drive file=/home/data/qemu/images/disk/disk.img,if=none,format=raw,id=dd_1 -device scsi-hd,drive=dd_1,id=disk_1 \
+ -M virt -cpu cortex-a57 \
+ -smp \
+ 8 \
+ -net nic,model=virtio-net-pci \
+ -net \
+ user,host=10.0.2.2,hostfwd=tcp::10022-:22 \
+ -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare \
+ -append \
+ "console=ttyAMA0 root=/dev/vda2 rw printk.time=y oops=panic panic_on_oops=1 panic_on_warn=1 panic=-1 net.ifnames=0 ftrace_dump_on_oops=orig_cpu debug earlyprintk=serial slub_debug=UZ selinux=0 highres=off earlycon systemd.default_timeout_start_sec=600 crashkernel=256M enable_ascend_share_pool" \
+ -nographic
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/test_nodemask.c b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/test_nodemask.c
new file mode 100644
index 000000000000..1c1fbf023327
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/test_nodemask.c
@@ -0,0 +1,782 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+#include <string.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 100
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+
+static int check_mem_in_node(unsigned long addr, int huge, unsigned long *mask, int max_node)
+{
+ FILE *fp;
+ char buffer[512];
+ char *cmd;
+ char *p;
+ int ret;
+
+ cmd = malloc(64 + 32 * max_node);
+
+ p = cmd;
+ p += sprintf(p, "cat /proc/%d/numa_maps | grep %lx | grep kernelpagesize_kB=4", getpid(), addr);
+ fp = popen(cmd, "r");
+ if (fgets(buffer, sizeof(buffer), fp))
+ ret = !huge;
+ else
+ ret = huge;
+	pclose(fp);
+
+ p = cmd;
+ p += sprintf(p, "cat /proc/%d/numa_maps | grep %lx ", getpid(), addr);
+
+ for (int i = 0; i < max_node; i++) {
+ if (mask[i / 64] & (1UL << (i % 64))) {
+ p += sprintf(p, "| sed -r \"s/N%d=[0-9]+ //g\" ", i);
+ }
+ }
+
+ p += sprintf(p, "| sed -nr \"/N[0-9]+=[0-9]+ /p\"");
+
+ fp = popen(cmd, "r");
+ if (fgets(buffer, sizeof(buffer), fp))
+ ret = 0;
+ else
+ ret = 1;
+
+ printf("cmd: %s\n", cmd);
+ printf("%s\n", buffer);
+
+	pclose(fp);
+ free(cmd);
+ return ret;
+}
+
+static int check_mem_not_in_node(unsigned long addr, int huge, unsigned long *mask, int max_node)
+{
+ FILE *fp;
+ char buffer[256];
+ char *cmd;
+ char *p;
+ int ret;
+	unsigned long tmp = 0;
+
+ cmd = malloc(64 + 32 * max_node);
+
+ p = cmd;
+ p += sprintf(p, "cat /proc/%d/numa_maps | grep %lx | grep kernelpagesize_kB=4", getpid(), addr);
+ fp = popen(cmd, "r");
+ if (fgets(buffer, sizeof(buffer), fp))
+ ret = !huge;
+ else
+ ret = huge;
+	pclose(fp);
+
+ for (int i = 0; i < max_node / 64 + 1; i++) {
+ tmp |= mask[i];
+ }
+
+ if (!tmp) /* no node provided */
+ goto out;
+
+ p = cmd;
+ p += sprintf(p, "cat /proc/%d/numa_maps | grep %lx | ", getpid(), addr);
+ p += sprintf(p, "sed -nr \"/");
+ for (int i = 0; i < max_node; i++) {
+ if (mask[i / 64] & (1UL << (i % 64))) {
+ printf("mask[%d / 64]: %lx\n", i, mask[i / 64]);
+ p += sprintf(p, "(N%d)|", i);
+ }
+ }
+ p += sprintf(p, "(XXX)=[0-9]+ /p\"\n");
+
+ fp = popen(cmd, "r");
+ if (fgets(buffer, sizeof(buffer), fp))
+ ret = 1;
+ else
+ ret = 0;
+
+ printf("cmd: %s\n", cmd);
+ printf("%s\n", buffer);
+
+	pclose(fp);
+out:
+ free(cmd);
+
+ return ret;
+}
+
+/*
+ * Mixed allocation scenario for mg_sp_alloc & mg_sp_alloc_nodemask,
+ * combined with concurrent sharepool user-space print operations.
+ * Test environment: 4 NUMA nodes.
+ */
+static int testcase1_1(void)
+{
+ int default_id = 1;
+ int page_size = 4096;
+ int ret = 0;
+
+ int proc_num = 10;
+ int childs[proc_num];
+ int prints_num = 3;
+ int prints[prints_num];
+
+ unsigned long node_id = 1;
+ unsigned long flags = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+	// keep printing the debug interface
+	for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+	for (int i = 0; i < proc_num; i++) {
+		int pid = fork();
+		if (pid == 0) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+			while (1) {
+ void *addr;
+ int ret;
+ addr = wrap_sp_alloc(default_id, page_size, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ }
+ }
+		else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+/*
+ * Mixed allocation scenario for mg_sp_alloc & mg_sp_alloc_nodemask.
+ * Test environment: 4 NUMA nodes, 4GB of memory per node.
+ * Allocation sizes are increased to make sure different nodes are used.
+ * Includes additional memory read/write verification.
+ */
+static int testcase1_2(void)
+{
+ int default_id = 1;
+ unsigned long mem_size = 1UL * 1024 * 1024 * 1024;
+ int page_size = 4096;
+ int ret = 0;
+
+ int proc_num = 5;
+ int childs[proc_num];
+ int prints_num = 3;
+ int prints[prints_num];
+
+ unsigned long flags = 0;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+	for (int i = 0; i < proc_num; i++) {
+		int pid = fork();
+		if (pid == 0) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+			for (int j = 0; j < 2; j++) {
+ char *addr;
+ int ret;
+ char ori_c = 'a';
+ addr = wrap_sp_alloc(default_id, page_size, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+
+ memset(addr, ori_c, page_size);
+ for (size_t i = 0; i < page_size; i = i + 2)
+ {
+ char c = addr[i];
+ if (c != ori_c) {
+ wrap_sp_free(addr);
+						perror("memory content check error\n");
+ return 1;
+ }
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, flags, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+
+ memset(addr, ori_c, mem_size);
+ for (size_t i = 0; i < mem_size; i = i + 2) {
+ char c = addr[i];
+ if (c != ori_c) {
+ wrap_sp_free(addr);
+						perror("memory content check error\n");
+ return 1;
+ }
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ }
+			exit(0);
+		}
+		else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ waitpid(childs[i], &status, 0);
+ }
+
+ return status;
+}
+/*
+ * mg_sp_alloc_nodemask concurrent small-page and huge-page allocation scenario,
+ * combined with concurrent sharepool user-space print operations.
+ * Test environment: 4 NUMA nodes.
+ */
+static int testcase2(void)
+{
+ int default_id = 1;
+ int page_size = 4096;
+ int ret = 0;
+
+ int proc_num = 10;
+ int childs[proc_num];
+ int prints_num = 3;
+ int prints[prints_num];
+
+ unsigned long node_id = 1;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long flags_hp = node_id << 36 | SP_SPEC_NODE_ID | SP_HUGEPAGE | SP_HUGEPAGE_ONLY;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0x2UL;
+ nodemask[1] = 0x0UL;
+
+	// keep printing the debug interface
+	for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+	// create child processes to allocate and free 4K memory
+	for (int i = 0; i < proc_num; i++) {
+		int pid = fork();
+		if (pid == 0) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+			while (1) {
+ void *addr;
+ int ret;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_p, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_hp, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode hugepage alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode hugepage alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode hugepage free failed.", getpid());
+ } else {
+ pr_info("process %d multinode hugepage free success.", getpid());
+ }
+ }
+ }
+		else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+/*
+ * Concurrent allocation scenario with multiple groups using mg_sp_alloc_nodemask,
+ * combined with concurrent sharepool user-space print operations.
+ * Test environment: 4 NUMA nodes.
+ */
+static int testcase3(void)
+{
+ int group_num = 10;
+ int page_size = 4096;
+ int ret = 0;
+ int childs[group_num];
+ int prints_num = 3;
+ int prints[prints_num];
+
+ unsigned long node_id = 1;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long flags_hp = node_id << 36 | SP_SPEC_NODE_ID | SP_HUGEPAGE | SP_HUGEPAGE_ONLY;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0x2UL;
+ nodemask[1] = 0x0UL;
+
+	// keep printing the debug interface
+	for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+	// create child processes to allocate and free 4K memory
+	for (int i = 0; i < group_num; i++) {
+		int pid = fork();
+		if (pid == 0) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), i);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), i);
+ }
+			while (1) {
+ void *addr;
+ int ret;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_p, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_hp, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode hugepage alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode hugepage alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode hugepage free failed.", getpid());
+ } else {
+ pr_info("process %d multinode hugepage free success.", getpid());
+ }
+ }
+ }
+		else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < group_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+/*
+ * mg_sp_alloc_nodemask memory read/write verification,
+ * combined with concurrent sharepool user-space print operations.
+ * Test environment: 4 NUMA nodes.
+ */
+static int testcase4(void)
+{
+ int default_id = 1;
+ int page_size = 4096;
+ int ret = 0;
+
+ unsigned long node_id = 1;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ char *addr;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_p, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+
+ char ori_c = 'a';
+ memset(addr, ori_c, page_size);
+
+ for (size_t i = 0; i < page_size; i = i + 2)
+ {
+ char c = addr[i];
+ if (c != ori_c) {
+ wrap_sp_free(addr);
+			perror("memory content check error\n");
+ return 1;
+ }
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+
+	unsigned long nodemask_c[2];
+	nodemask_c[0] = 0x1UL;
+	nodemask_c[1] = 0x0UL;
+
+	if (!check_mem_not_in_node((unsigned long)addr, 0, nodemask_c, max_node)) {
+		pr_info("page is not expected");
+		ret = -1;
+	}
+
+	return ret;
+}
+
+/*
+ * VM configuration: 16 NUMA nodes, 1GB of memory per node, 16GB in total.
+ */
+static int testcase5(void)
+{
+ int default_id = 1;
+	unsigned long mem_size = 1UL * 1024 * 1024 * 1024;
+ int ret = 0;
+ int proc_num = 10;
+ int childs[proc_num];
+
+ unsigned long node_id = 1;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFFFFUL;
+ nodemask[1] = 0x0UL;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+	for (int i = 0; i < proc_num; i++) {
+		int pid = fork();
+		if (pid == 0) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+			for (int j = 0; j < 3; j++) {
+ void *addr;
+ int ret;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, flags_p, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ }
+			exit(ret);
+ }
+		else {
+ childs[i] = pid;
+ }
+ }
+
+ int status;
+ for (int i = 0; i < proc_num; i++) {
+ waitpid(childs[i], &status, 0);
+ }
+
+ return 0;
+}
+
+/*
+ * VM configuration: 4 NUMA nodes, 4GB of memory per node, 16GB in total.
+ */
+static int testcase6_1(void)
+{
+ system("echo 0 > /proc/sys/vm/enable_oom_killer");
+ int default_id = 1;
+	unsigned long mem_size = 9UL * 1024 * 1024 * 1024;
+ int ret = 0;
+
+ unsigned long flags_p = SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0x3UL;
+ nodemask[1] = 0x0UL;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ char *addr;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, flags_p, nodemask, max_node);
+ if (addr == MAP_FAILED) {
+ if (errno == ENOMEM || errno == EINTR) { /* EINTR for OOM */
+ ret = 0;
+ } else {
+ printf("errno[%d] is not expected\n", errno);
+ ret = -1;
+ }
+ } else {
+ printf("alloc should fail\n");
+ wrap_sp_free(addr);
+ ret = -1;
+ }
+
+ return ret;
+}
+
+/*
+ * VM configuration: 4 NUMA nodes, 4GB of memory per node, 16GB in total
+ */
+static int testcase6_2(void)
+{
+ int default_id = 1;
+ unsigned long mem_size = 12UL * 1024 * 1024 * 1024;
+ int ret = 0;
+
+ unsigned long flags_p = SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ char *addr;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, flags_p, nodemask, max_node);
+ if (addr == MAP_FAILED) {
+ pr_info("alloc failed, errno:%d", errno);
+ return -1;
+ } else {
+ pr_info("alloc success");
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+
+ return 0;
+}
+
+/*
+ * VM configuration: 4 NUMA nodes, 4GB of memory per node, 16GB in total
+ * The cgroup limits total memory usage to 1GB
+ * echo 0 > /proc/sys/vm/enable_oom_killer
+ */
+static int testcase6_3(void)
+{
+ int default_id = 1;
+ unsigned long mem_size = 2UL * 1024 * 1024 * 1024;
+ int ret = 0;
+
+ unsigned long node_id = 2;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+ char *addr;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, 0, nodemask, max_node);
+ if (addr == MAP_FAILED) {
+ if (errno == ENOMEM || errno == EINTR) { /* EINTR for OOM */
+ ret = 0;
+ } else {
+ printf("errno[%d] is not expected\n", errno);
+ ret = -1;
+ }
+ } else {
+ printf("alloc should fail\n");
+ ret = -1;
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1_1, "mixed allocation scenario with mg_sp_alloc & mg_sp_alloc_nodemask")
+ TESTCASE_CHILD(testcase1_2, "mixed allocation scenario with mg_sp_alloc & mg_sp_alloc_nodemask: bulk allocations with mixed read/write verification")
+ TESTCASE_CHILD(testcase2, "mg_sp_alloc_nodemask concurrent allocation of small and huge pages")
+ TESTCASE_CHILD(testcase3, "concurrent allocation with mg_sp_alloc_nodemask from multiple groups")
+ TESTCASE_CHILD(testcase4, "mg_sp_alloc_nodemask memory read/write verification")
+ //TESTCASE_CHILD(testcase5, "test with a very large number of nodes")
+ TESTCASE_CHILD(testcase6_1, "mg_sp_alloc_nodemask with insufficient memory on two numa nodes")
+ TESTCASE_CHILD(testcase6_2, "mg_sp_alloc_nodemask allocating sufficient memory from 4 numa nodes")
+ //TESTCASE_CHILD(testcase6_3, "mg_sp_alloc_nodemask with a cgroup memory limit")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/Makefile b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_config_dvpp_range.c b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_config_dvpp_range.c
new file mode 100644
index 000000000000..251e28618d61
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_config_dvpp_range.c
@@ -0,0 +1,367 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 08:38:18 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include "sem_use.h"
+#include "sharepool_lib.h"
+#define PAGE_NUM 100
+
+static int addgroup(int group_id)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+/* Process not in a group configures the dvpp address space, expected to succeed */
+static int testcase1(void)
+{
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_info)) {
+ pr_info("ioctl_config_dvpp_range failed unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* Process joins a group, but size, pid or device_id is invalid */
+static int testcase2(void)
+{
+ if (addgroup(10))
+ return -1;
+
+ struct sp_config_dvpp_range_info cdr_infos[] = {
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 100, // invalid
+ .pid = getpid(),
+ },
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = -1, // invalid
+ },
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = 32769, // invalid
+ },
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = 0xfffffffff, // exceeds 16G, invalid
+ .device_id = 0,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(cdr_infos) / sizeof(cdr_infos[0]); i++) {
+ if (!ioctl_config_dvpp_range(dev_fd, cdr_infos + i)) {
+ pr_info("ioctl_config_dvpp_range succeeded unexpectedly");
+ // return -1;
+ }
+ }
+
+ return 0;
+}
+
+/* Process joins a group with valid parameters, expected to succeed; allocate dvpp memory and verify it falls within the configured range */
+static int testcase3(void)
+{
+ int ret;
+
+ if (addgroup(100))
+ return -1;
+
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 100,
+ };
+
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_info)) {
+ pr_info("ioctl_config_dvpp_range failed unexpectedly");
+ return -1;
+ }
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("sp_alloc failed errno %d\n", errno);
+ return ret;
+ } else if (alloc_info.addr < cdr_info.start ||
+ alloc_info.addr >= cdr_info.start + cdr_info.size) {
+ pr_info("the range of addr is invalid 0x%llx\n", alloc_info.addr);
+ return -1;
+ } else
+ cleanup(&alloc_info);
+
+ return 0;
+}
+
+/* Process joins a group with valid parameters; configuring the range twice is expected to fail */
+static int testcase4(void)
+{
+ if (addgroup(10))
+ return -1;
+
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_info)) {
+ pr_info("ioctl_config_dvpp_range failed unexpectedly");
+ return -1;
+ }
+ if (!ioctl_config_dvpp_range(dev_fd, &cdr_info)) {
+ pr_info("ioctl_config_dvpp_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* Process A configures a dvpp address space and creates a share group.
+ * Memory allocations should fall within process A's range.
+ * Process B configures a different dvpp address space,
+ * then joins the group. Expected to succeed, but with a warning.
+ * A subsequent allocation still falls within process A's address space.
+ */
+
+/* size: 4 0000 0000 (16G)
+ * proc A: 0x e700 0000 0000 ~ 0x e704 0000 0000
+ * proc B: 0x e600 0000 0000 ~ 0x e604 0000 0000
+ */
+struct sp_config_dvpp_range_info dvpp_infos[] = {
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ },
+ {
+ .start = DVPP_BASE + 3 * DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ },
+};
+static int semid;
+static int testcase5(void)
+{
+ int ret;
+ pid_t procA, procB;
+ unsigned long addr;
+ int spg_id = 1;
+ unsigned long size;
+
+ semid = sem_create(1234, "proc A then proc B");
+ procA = fork();
+ if (procA == 0) {
+ /* Configure the dvpp address space */
+ dvpp_infos[0].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &dvpp_infos[0])) {
+ pr_info("proc A config dvpp failed. errno: %d", errno);
+ exit(-1);
+ }
+ if (addgroup(spg_id) < 0) {
+ pr_info("add group failed.");
+ exit(-1);
+ }
+ size = PMD_SIZE;
+ addr = (unsigned long)wrap_sp_alloc(spg_id, size, SP_DVPP | SP_HUGEPAGE);
+ if (addr == -1) {
+ pr_info("alloc failed.");
+ exit(-1);
+ }
+ if (addr < dvpp_infos[0].start ||
+ addr + size > dvpp_infos[0].start + dvpp_infos[0].size) {
+ pr_info("alloc dvpp range incorrect. addr: %lx", addr);
+ exit(-1);
+ }
+ sem_inc_by_one(semid);
+ pr_info(" proc A finished.");
+ sem_check_zero(semid);
+ sem_dec_by_one(semid);
+ exit(0);
+ }
+
+ procB = fork();
+ if (procB == 0) {
+ sem_dec_by_one(semid);
+ pr_info(" proc B started.");
+ /* Configure the dvpp address space */
+ dvpp_infos[1].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &dvpp_infos[1])) {
+ pr_info("proc B config dvpp failed. errno: %d", errno);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ if (addgroup(spg_id) < 0) {
+ pr_info("add group failed");
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ sem_inc_by_one(semid);
+ exit(0);
+ }
+
+ WAIT_CHILD_STATUS(procA, out_a);
+ WAIT_CHILD_STATUS(procB, out_b);
+out_a:
+ KILL_CHILD(procB);
+out_b:
+ sem_close(semid);
+ return ret;
+}
+
+/* Process A configures a dvpp address space and creates a share group,
+ * but does not allocate memory.
+ * Process B configures a different dvpp address space and allocates pass-through memory,
+ * then joins the group. Expected to succeed, but with a warning.
+ * A subsequent allocation falls within process B's address space.
+ */
+static int testcase6(void)
+{
+ int ret;
+ pid_t procA, procB;
+ unsigned long addr;
+ int spg_id = 1;
+ unsigned long size;
+
+ semid = sem_create(1234, "proc A then proc B");
+ procA = fork();
+ if (procA == 0) {
+ /* Configure the dvpp address space */
+ dvpp_infos[0].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &dvpp_infos[0])) {
+ pr_info("proc A config dvpp failed. errno: %d", errno);
+ exit(-1);
+ }
+ if (addgroup(spg_id) < 0) {
+ pr_info("add group failed.");
+ exit(-1);
+ }
+ sem_inc_by_one(semid);
+ pr_info(" proc A finished.");
+ sem_check_zero(semid);
+ sem_dec_by_one(semid);
+ exit(0);
+ }
+
+ procB = fork();
+ if (procB == 0) {
+ sem_dec_by_one(semid);
+ pr_info(" proc B started.");
+ /* Configure the dvpp address space */
+ dvpp_infos[1].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &dvpp_infos[1])) {
+ pr_info("proc B config dvpp failed. errno: %d", errno);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ size = PMD_SIZE;
+ addr = (unsigned long)wrap_sp_alloc(SPG_ID_DEFAULT, size, SP_DVPP | SP_HUGEPAGE);
+ if (addr == -1) {
+ pr_info("alloc pass through failed.");
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ if (addr < dvpp_infos[1].start ||
+ addr + size > dvpp_infos[1].start + dvpp_infos[1].size) {
+ pr_info("alloc dvpp range incorrect. addr: %lx", addr);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ if (addgroup(spg_id) < 0) {
+ pr_info("add group failed");
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ /* Allocate again and verify the range */
+ addr = (unsigned long)wrap_sp_alloc(spg_id, size, SP_DVPP | SP_HUGEPAGE);
+ if (addr == -1) {
+ pr_info("alloc failed.");
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ if (addr < dvpp_infos[1].start ||
+ addr + size > dvpp_infos[1].start + dvpp_infos[1].size) {
+ pr_info("alloc dvpp range incorrect. addr: %lx", addr);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ sem_inc_by_one(semid);
+ exit(0);
+ }
+
+ WAIT_CHILD_STATUS(procA, out_a);
+ WAIT_CHILD_STATUS(procB, out_b);
+out_a:
+ KILL_CHILD(procB);
+out_b:
+ sem_close(semid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "process not in a group calls dvpp_config, expected to fail")
+ // TESTCASE_CHILD(testcase2, "process joins a group, but size, pid or device_id is invalid, expected to fail")
+ TESTCASE_CHILD(testcase3, "process joins a group with valid parameters, expected to succeed; allocate dvpp memory and verify the configured range")
+ //TESTCASE_CHILD(testcase4, "process joins a group with valid parameters, configuring twice is expected to fail")
+ // TESTCASE_CHILD(testcase5, "DVPP address space merge: a process that configured dvpp but has not allocated joins a group that configured a different dvpp and has allocated memory; expected to succeed with a warning")
+ // TESTCASE_CHILD(testcase6, "DVPP address space merge: a process that configured dvpp and has allocated memory joins a group that configured a different dvpp but has not allocated; expected to succeed with a warning")
+
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_multi_numa_node.c b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_multi_numa_node.c
new file mode 100644
index 000000000000..02d7a2317f25
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_multi_numa_node.c
@@ -0,0 +1,289 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 08:38:18 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+#define NUMA_NODES 4
+#define PAGE_NUM 10
+#define ALLOC_SIZE (1024UL * 1024UL * 20UL)
+#define DVPP_SIZE (0xffff0000)
+#define DEVICE_SHIFT 32
+static int addgroup(int group_id)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+struct sp_config_dvpp_range_info cdr_infos[] = { // configure a two-P environment: pg0 and pg1
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_SIZE,
+ .device_id = 0,
+ },
+ {
+ .start = DVPP_BASE + DVPP_16G * 3,
+ .size = DVPP_SIZE,
+ .device_id = 1,
+ },
+};
+
+/* Process joins a group with valid parameters, expected to succeed; allocate dvpp memory and verify it falls within the configured range */
+static int testcase1(void)
+{
+ int ret;
+
+ if (addgroup(100))
+ return -1;
+
+ for (int i = 0; i < sizeof(cdr_infos) / sizeof(cdr_infos[0]); i++) {
+ cdr_infos[i].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_infos[i])) {
+ pr_info("ioctl_config_dvpp_range failed unexpectedly, errno: %d", errno);
+ return -1;
+ } else
+ pr_info("ioctl_config_dvpp_range success: node %d, start: %lx, size: %lx",
+ i, cdr_infos[i].start, cdr_infos[i].size);
+ }
+
+ return 0;
+}
+
+static int testcase2(void)
+{
+ int ret = 0;
+
+ if (addgroup(100))
+ return -1;
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP, // 4
+ .size = ALLOC_SIZE,
+ .spg_id = 100,
+ };
+
+
+ // allocate by device_id: device_id=0 allocates from node0, device_id=1 from node1
+ for (int i = 0; i < 2; i++) {
+ unsigned long device_id = i;
+ device_id = device_id << DEVICE_SHIFT;
+ alloc_info.flag = SP_DVPP | device_id;
+ pr_info("alloc %d time, flag: 0x%lx", i, alloc_info.flag);
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret == -1) {
+ pr_info("sp_alloc failed errno %d\n", errno);
+ return -1;
+ } else {
+ pr_info("alloc at device %d success! va: 0x%lx", i, alloc_info.addr);
+ }
+ }
+
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret = 0;
+
+ if (addgroup(100))
+ return -1;
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = ALLOC_SIZE,
+ .spg_id = 100,
+ };
+
+ for (int i = 0; i < NUMA_NODES; i++) {
+ unsigned long node_id = i;
+ node_id = node_id << NODE_ID_SHIFT;
+ alloc_info.flag = SP_SPEC_NODE_ID | node_id;
+ pr_info("alloc at node %d, flag: 0x%lx", i, alloc_info.flag);
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret == -1) {
+ pr_info("sp_alloc failed errno %d\n", errno);
+ ret = -1;
+ } else {
+ pr_info("alloc at node %d success! va: 0x%lx", i, alloc_info.addr);
+ }
+ }
+
+ return 0;
+}
+
+static int testcase4(void)
+{
+ int ret = 0;
+ struct vmalloc_info vmalloc_info = {
+ .size = ALLOC_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = SP_DVPP,
+ .pid = getpid(),
+ };
+
+ for (int i = 0; i < 2; i++) {
+ // configure the DVPP address space
+ cdr_infos[i].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_infos[i])) {
+ pr_info("ioctl_config_dvpp_range failed unexpectedly, errno: %d", errno);
+ return -1;
+ } else
+ pr_info("ioctl_config_dvpp_range success: node %d, start: %lx, size: %lx",
+ i, cdr_infos[i].start, cdr_infos[i].size);
+
+ unsigned long device_id = i;
+ device_id = device_id << DEVICE_SHIFT;
+ k2u_info.sp_flags |= device_id;
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out;
+ } else if (k2u_info.addr < cdr_infos[i].start ||
+ k2u_info.addr >= cdr_infos[i].start + cdr_infos[i].size) {
+ pr_info("the range of addr is invalid 0x%llx\n", k2u_info.addr);
+ // return -1;
+ } else
+ pr_info("k2u success for device %d, addr: %#llx", i, k2u_info.addr);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0)
+ pr_info("unshare failed");
+ }
+
+out:
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+}
+
+/*
+ * Preconditions: the system has at least 2 NUMA nodes, each with at least 100M of free memory
+ * Steps: 1. the process joins a group and allocates memory from node1
+ *        2. repeat the test 100 times
+ *        3. test each combination of in-group/pass-through, DVPP/normal, huge/small pages
+ * Expected: node1's free memory should drop by no less than the amount allocated
+ */
+#define SIZE_1M 0x100000UL
+static int test_child(int spg_id, unsigned long flags)
+{
+ int node_id = 3;
+ unsigned long size = SIZE_1M * 10;
+
+ if (spg_id != SPG_ID_DEFAULT) {
+ spg_id = wrap_add_group(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+ }
+
+ flags |= (unsigned long)node_id << NODE_ID_SHIFT;
+ flags |= SP_SPEC_NODE_ID;
+ void *addr = wrap_sp_alloc(spg_id, size, flags);
+ if (addr == (void *)-1)
+ return -1;
+
+ int ret = ioctl_check_memory_node((unsigned long)addr, size, node_id);
+
+ wrap_sp_free(addr);
+
+ return ret;
+}
+
+static int test_route(int spg_id, unsigned long flags)
+{
+ int i, ret = 0;
+
+ for (i = 0; i < 20; i++) {
+ pid_t pid;
+ FORK_CHILD_ARGS(pid, test_child(spg_id, flags));
+
+ WAIT_CHILD_STATUS(pid, out);
+ }
+
+out:
+ if (ret)
+ pr_info("numa node alloc test failed: spg_id:%d, flag:%lu, i:%d", spg_id, flags, i);
+
+ return ret;
+}
+
+static int testcase5(void) { return test_route(0, 0); } // pass-through, small pages
+static int testcase6(void) { return test_route(0, SP_HUGEPAGE); } // pass-through, huge pages
+static int testcase7(void) { return test_route(1, 0); } // in group, small pages
+static int testcase8(void) { return test_route(1, SP_HUGEPAGE); } // in group, huge pages
+static int testcase9(void) { return test_route(0, SP_DVPP | 0); } // pass-through, small pages, DVPP
+static int testcase10(void) { return test_route(0, SP_DVPP | SP_HUGEPAGE); } // pass-through, huge pages, DVPP
+static int testcase11(void) { return test_route(1, SP_DVPP | 0); } // in group, small pages, DVPP
+static int testcase12(void) { return test_route(1, SP_DVPP | SP_HUGEPAGE); } // in group, huge pages, DVPP
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "process configures dvpp on device 0 and device 1 with start and size, expected to succeed")
+ TESTCASE_CHILD(testcase2, "after configuring the dvpp address space, allocate memory by device_id, expected to succeed (device/node correctness not verified)")
+ TESTCASE_CHILD(testcase3, "after configuring the dvpp address space, allocate memory by node_id, expected to succeed (node correctness not verified)")
+ TESTCASE_CHILD(testcase4, "after configuring the dvpp address space, run k2u by device_id and verify the address lies within the configured range; expected to succeed")
+ TESTCASE_CHILD(testcase5, "the system has at least 2 nodes, each with at least 100M free memory. Steps: 1. the process joins a group and allocates memory from node1 2. repeat 100 times 3. test each combination of in-group/pass-through, DVPP/normal, huge/small pages. Expected: node1's free memory should drop by no less than the amount allocated. pass-through, small pages")
+ TESTCASE_CHILD(testcase6, "pass-through, huge pages")
+ TESTCASE_CHILD(testcase7, "in group, small pages")
+ TESTCASE_CHILD(testcase8, "in group, huge pages")
+ TESTCASE_CHILD(testcase9, "pass-through, small pages, DVPP")
+ TESTCASE_CHILD(testcase10, "pass-through, huge pages, DVPP")
+ TESTCASE_CHILD(testcase11, "in group, small pages, DVPP")
+ TESTCASE_CHILD(testcase12, "in group, huge pages, DVPP")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_free/Makefile b/tools/testing/sharepool/testcase/api_test/sp_free/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_free/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_free/test_sp_free.c b/tools/testing/sharepool/testcase/api_test/sp_free/test_sp_free.c
new file mode 100644
index 000000000000..246f710f4ca8
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_free/test_sp_free.c
@@ -0,0 +1,127 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 15 20:41:34 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <string.h>
+
+#include "sharepool_lib.h"
+
+#define PAGE_NUM 100
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+/*
+ * testcase1: freeing an address that was not allocated by sp_alloc, expected to fail.
+ */
+static int testcase1(void)
+{
+ int ret;
+
+ if (addgroup())
+ return -1;
+
+ char *user_addr = malloc(PAGE_NUM * PAGE_SIZE);
+ if (user_addr == NULL) {
+ pr_info("testcase1 malloc failed, errno: %d", errno);
+ return -1;
+ }
+ memset((void *)user_addr, 'q', PAGE_NUM * PAGE_SIZE);
+
+ struct sp_alloc_info fake_alloc_info = {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ .addr = (unsigned long)user_addr,
+ };
+
+ ret = ioctl_free(dev_fd, &fake_alloc_info);
+ if (ret < 0 && errno == EINVAL) {
+ pr_info("testcase1 ioctl_free failed as expected");
+ free(user_addr);
+ return 0;
+ } else {
+ pr_info("testcase1 ioctl_free unexpected result, errno: %d", errno);
+ free(user_addr);
+ return -1;
+ }
+}
+
+/*
+ * testcase2: freeing an address that is not the start address returned by sp_alloc(), expected to fail.
+ */
+static int testcase2(void)
+{
+ int ret;
+ int result;
+
+ if (addgroup())
+ return -1;
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase2 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ alloc_info.addr += 1;
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret == 0) {
+ pr_info("testcase2 ioctl_free succeeded unexpectedly");
+ return -1;
+ } else if (ret < 0 && errno == EINVAL) {
+ pr_info("testcase2 ioctl_free failed as expected");
+ result = 0;
+ } else {
+ pr_info("testcase2 ioctl_free failed unexpectedly, errno = %d", errno);
+ result = -1;
+ }
+
+ // clean up
+ alloc_info.addr -= 1;
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("testcase2 ioctl_free failed, errno: %d", errno);
+ }
+ return result;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "freeing an address that was not allocated by sp_alloc, expected to fail.")
+ TESTCASE_CHILD(testcase2, "freeing an address that is not the start address returned by sp_alloc(), expected to fail.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/Makefile b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task.c
new file mode 100644
index 000000000000..54d353d9afd1
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task.c
@@ -0,0 +1,568 @@
+#include <stdio.h>
+#include <errno.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+
+/*
+ * testcase1
+ * Test point: add a valid pid to a group
+ * Expected: join succeeds and the correct group id is returned
+ */
+static int testcase1(void)
+{
+ int ret;
+ int group_id = 10;
+ pid_t pid;
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id != group_id) {
+ pr_info("unexpected result, ret:%d, ag_info.spg_id:%d", ret, ag_info.spg_id);
+ ret = -1;
+ } else {
+ //pr_info("testcase1 success!!");
+ ret = 0;
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ }
+
+ return ret;
+}
+
+/*
+ * testcase2
+ * Test point: add a process to a specified share group while it is exiting
+ * Expected: join fails with -ESRCH
+ */
+static int testcase2_result = 0;
+static struct sp_add_group_info testcase2_ag_info = {
+ .spg_id = 10,
+ .prot = PROT_READ | PROT_WRITE,
+};
+
+static void testcase2_sigchld_handler(int num)
+{
+ int ret = ioctl_add_group(dev_fd, &testcase2_ag_info);
+ if (!(ret < 0 && errno == ESRCH)) {
+ pr_info("unexpected result, ret: %d, errno: %d", ret, errno);
+ testcase2_result = -1;
+ }
+}
+
+static int testcase2(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sigaction osa = {0};
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase2_sigchld_handler;
+ sigaction(SIGCHLD, &sa, &osa);
+ testcase2_ag_info.pid = pid;
+
+ kill(pid, SIGKILL);
+
+ waitpid(pid, NULL, 0);
+ sigaction(SIGCHLD, &osa, NULL);
+ }
+
+ return testcase2_result;
+}
+
+/*
+ * testcase3
+ * Test point: add an invalid pid to a share group
+ * Expected: join fails with -ESRCH
+ */
+static int testcase3(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = -1,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 10,
+ };
+
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == ESRCH)) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ return -1;
+ } else {
+ //pr_info("testcase3 success!!");
+ return 0;
+ }
+}
+
+/*
+ * testcase4
+ * Test point: a valid pid (process not exiting) repeatedly joins different groups
+ * Expected: join fails with -EEXIST
+ */
+static int testcase4(void)
+{
+ int ret;
+ int group_id = 10;
+ pid_t pid;
+
+ // with multi-group support a single process can join multiple groups, so this case is obsolete
+ return 0;
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id != group_id) {
+ pr_info("first add failed, ret:%d, ag_info.spg_id:%d", ret, ag_info.spg_id);
+ ret = -1;
+ goto error_out;
+ }
+
+ ag_info.spg_id = group_id + 1;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == EEXIST)) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ ret = -1;
+ } else {
+ //pr_info("testcase4 success!!");
+ ret = 0;
+ }
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return ret;
+ }
+}
+
+/*
+ * testcase5
+ * Test point: a valid pid (process not exiting) repeatedly joins the same group
+ * Expected: join fails with -EEXIST
+ */
+static int testcase5(void)
+{
+ int ret;
+ int group_id = 10;
+ pid_t pid;
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id != group_id) {
+ pr_info("first add failed, ret:%d, ag_info.spg_id:%d", ret, ag_info.spg_id);
+ ret = -1;
+ goto error_out;
+ }
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == EEXIST)) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ ret = -1;
+ } else {
+ //pr_info("testcase5 success!!");
+ ret = 0;
+ }
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return ret;
+ }
+}
+
+/*
+ * testcase6
+ * Test point: different threads join the same group
+ * Expected: the first thread joins successfully; the others fail with -EEXIST
+ */
+static void *testcase6_7_thread1(void *data)
+{
+ int group_id = *(int *)data;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id != group_id) {
+ pr_info("first add failed, ret:%d, ag_info.spg_id:%d", ret, ag_info.spg_id);
+ ret = -1;
+ }
+
+ return (void *)ret;
+}
+
+static void *testcase6_7_thread2(void *data)
+{
+ int group_id = *(int *)data;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == EEXIST)) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ ret = -1;
+ } else
+ ret = 0;
+
+ return (void *)ret;
+}
+
+static int testcase6_7_child_process(int arg)
+{
+ int ret;
+ pthread_t p1, p2;
+
+ int group1 = 10;
+ int group2 = group1 + arg;
+
+ ret = pthread_create(&p1, NULL, testcase6_7_thread1, &group1);
+ if (ret) {
+ pr_info("create thread1 failed, ret: %d", ret);
+ return -1;
+ }
+ void *result = NULL;
+ pthread_join(p1, &result);
+ if (result)
+ return -1;
+
+ ret = pthread_create(&p2, NULL, testcase6_7_thread2, &group2);
+ if (ret) {
+ pr_info("create thread2 failed, ret: %d", ret);
+ return -1;
+ }
+ pthread_join(p2, &result);
+ if (result)
+ return -1;
+
+ return 0;
+}
+
+static int testcase6(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ int ret = testcase6_7_child_process(0);
+ if (ret)
+ pr_info("testcase6 failed!!");
+ else
+ pr_info("testcase6 success!!");
+ exit(ret);
+ } else {
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+ pr_info("child process did not exit normally");
+ return -1;
+ }
+ return (char)WEXITSTATUS(status);
+ }
+}
+
+/*
+ * testcase7
+ * Test point: different threads join different groups
+ * Expected: the first thread joins successfully; the others fail with -EEXIST
+ */
+static int testcase7(void)
+{
+ // with multi-group support a single process can join multiple groups, so this case is obsolete
+ return 0;
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ int ret = testcase6_7_child_process(1);
+ if (ret)
+ pr_info("testcase7 failed!!");
+ else
+ pr_info("testcase7 success!!");
+ exit(ret);
+ } else {
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+ pr_info("child process did not exit normally");
+ return -1;
+ }
+ return (char)WEXITSTATUS(status);
+ }
+}
+
+/*
+ * testcase8
+ * Test point: after the parent joins a group, a forked child joins the same group
+ * Expected: the child's join fails with -EEXIST
+ */
+
+/*
+ * An argument of 0 means parent and child join the same group; nonzero means different groups
+ */
+static int testcase8_9_child(int arg)
+{
+ int group_id = 10;
+ pid_t pid;
+
+ char *sem_name = "/add_task_testcase8";
+ sem_t *sync = sem_open(sem_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sem_name);
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id + arg,
+ };
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret && errno == EINTR);
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ } else {
+ ret = 0;
+ }
+
+ exit(ret);
+ } else {
+ int ret;
+ int status = 0;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ sem_post(sync);
+ if (ret < 0) {
+ pr_info("add group failed");
+ ret = -1;
+ goto error_out;
+ }
+
+error_out:
+ waitpid(pid, &status, 0);
+ if (!ret)
+ ret = (char)WEXITSTATUS(status);
+ return ret;
+ }
+}
+
+static int testcase8(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase8_9_child(0));
+ } else {
+ int status = 0;
+ waitpid(pid, &status, 0);
+ int ret = (char)WEXITSTATUS(status);
+ if (ret) {
+ return -1;
+ } else {
+ return 0;
+ }
+ }
+}
+
+/*
+ * testcase9
+ * Test point: after the parent joins a group, fork a child that joins
+ * a different group
+ * Expected result: the child joins successfully
+ */
+static int testcase9(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase8_9_child(1));
+ } else {
+ int status = 0;
+ waitpid(pid, &status, 0);
+ int ret = (char)WEXITSTATUS(status);
+ if (ret) {
+ return -1;
+ } else {
+ return 0;
+ }
+ }
+}
+
+/*
+ * testcase10
+ * Test point: a valid pid joins an invalid group
+ * Expected result: join fails, error code -EINVAL
+ */
+static int testcase10(void)
+{
+ int ret = 0;
+ int group_ids[] = {0, -1, 100000, 200001, 800000, 900001};
+
+	for (int i = 0; !ret && i < ARRAY_SIZE(group_ids); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_ids[i],
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == EINVAL)) {
+ pr_info("failed, ret: %d, errno: %d", ret, errno);
+ ret = -1;
+ } else
+ ret = 0;
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ }
+ }
+
+ return ret;
+}
+
+/*
+ * testcase11
+ * Test point: a valid pid joins a group with spg_id=SPG_ID_AUTO
+ * Expected result: join succeeds with an auto-generated group id in
+ * [SPG_ID_AUTO_MIN, SPG_ID_AUTO_MAX]
+ */
+static int testcase11(void)
+{
+ int ret;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id < SPG_ID_AUTO_MIN
+ || ag_info.spg_id > SPG_ID_AUTO_MAX) {
+ pr_info("failed, ret: %d, errno: %d, spg_id: %d",
+ ret, errno, ag_info.spg_id);
+ ret = -1;
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ }
+
+ //pr_info("testcase11 %s!!", ret ? "failed" : "success");
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "valid pid joins a group; expect success and the correct group id")
+	TESTCASE(testcase2, "exiting process joins a specified group; expect failure with -ESRCH")
+	TESTCASE(testcase3, "invalid pid joins a group; expect failure with -ESRCH")
+	TESTCASE(testcase4, "valid pid (process not exiting) repeatedly joins different groups; expect failure with -EEXIST")
+	TESTCASE(testcase5, "valid pid (process not exiting) repeatedly joins the same group; expect failure with -EEXIST")
+	TESTCASE(testcase6, "different threads join the same group; expect the first thread to succeed and the others to fail with -EEXIST")
+	TESTCASE(testcase7, "different threads join different groups; expect the first thread to succeed and the others to fail with -EEXIST")
+	TESTCASE(testcase8, "parent joins a group, then forks a child that joins the same group; expect the child to fail with -EEXIST")
+	TESTCASE(testcase9, "parent joins a group, then forks a child that joins a different group; expect the child to succeed")
+	TESTCASE(testcase10, "valid pid joins an invalid group; expect failure with -EINVAL")
+	TESTCASE(testcase11, "valid pid joins with spg_id=SPG_ID_AUTO; expect success with an auto-generated group id in [SPG_ID_AUTO_MIN, SPG_ID_AUTO_MAX]")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task2.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task2.c
new file mode 100644
index 000000000000..d31e409e110e
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task2.c
@@ -0,0 +1,254 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue May 18 03:09:01 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * A single process joins two different groups
+ * Expected: both joins succeed
+ */
+static int testcase1(void)
+{
+ int ret;
+ pid_t pid;
+
+ FORK_CHILD_DEADLOOP(pid);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+out:
+ KILL_CHILD(pid);
+ return ret;
+}
+
+/*
+ * A single process joins the same group twice
+ * Expected: the second join fails with error code EEXIST
+ */
+static int testcase2(void)
+{
+ int ret;
+ pid_t pid;
+
+ FORK_CHILD_DEADLOOP(pid);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ TEST_CHECK_FAIL(ioctl_add_group(dev_fd, &ag_info), EEXIST, out);
+
+out:
+ KILL_CHILD(pid);
+ return ret;
+}
+
+/*
+ * Multiple processes each join multiple groups
+ * Expected: all joins succeed
+ */
+#define testcase3_group_num 10
+#define testcase3_child_num 20
+static int testcase3_child(int idx, sem_t *sync)
+{
+ int ret;
+
+ SEM_WAIT(sync);
+
+ pr_info("child idx: %d", idx);
+
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret, i, j;
+ pid_t pid[testcase3_child_num];
+ sem_t *sync[testcase3_child_num];
+ int group_num = testcase3_group_num;
+ int groups[testcase3_group_num];
+
+ for (i = 0; i < testcase3_child_num; i++)
+ SEM_INIT(sync[i], i);
+
+ for (i = 0; i < testcase3_child_num; i++)
+ FORK_CHILD_ARGS(pid[i], testcase3_child(i, sync[i]));
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid[0],
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ for (i = 0; i < group_num; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ groups[i] = ag_info.spg_id;
+ }
+
+ for (i = 0; i < group_num; i++) {
+ ag_info.spg_id = groups[i];
+ for (j = 1; j < testcase3_child_num; j++) {
+ ag_info.pid = pid[j];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+ }
+
+ for (i = 0; i < testcase3_child_num; i++)
+ sem_post(sync[i]);
+
+	for (i = 0; i < testcase3_child_num; i++)
+		waitpid(pid[i], NULL, 0);
+
+out:
+ return ret;
+}
+
+/*
+ * Multiple processes each join multiple groups; the first process
+ * allocates memory and writes to it, the later processes read the data
+ * Expected: joins succeed, memory reads and writes succeed
+ */
+#define testcase4_group_num 10
+#define testcase4_child_num 11
+#define testcase4_buf_size 0x1024
+
+struct testcase4_data {
+ int group_id;
+ void *share_area;
+};
+
+static int testcase4_writer(sem_t **sync, struct testcase4_data *data)
+{
+ int ret, i;
+
+ SEM_WAIT(sync[0]);
+
+ for (i = 0; i < testcase4_group_num; i++) {
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = testcase4_buf_size,
+ .spg_id = data[i].group_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ data[i].share_area = (void *)alloc_info.addr;
+ memset(data[i].share_area, 'a', testcase4_buf_size);
+ }
+
+ for (i = 1; i < testcase4_child_num; i++)
+ sem_post(sync[i]);
+
+out:
+ return ret;
+}
+
+static int testcase4_reader(sem_t *sync, struct testcase4_data *data)
+{
+ int ret, i, j;
+
+ SEM_WAIT(sync);
+
+ for (i = 0; i < testcase4_group_num; i++) {
+ char *base = data[i].share_area;
+ for (j = 0; j < testcase4_buf_size; j++)
+ if (base[j] != 'a') {
+				pr_info("unexpected result: i = %d, base[%d]: %d", i, j, base[j]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase4(void)
+{
+	int ret, i, j;
+ pid_t pid[testcase4_child_num];
+ sem_t *sync[testcase4_child_num];
+ struct testcase4_data *data = mmap(NULL, sizeof(*data) * testcase4_child_num,
+ PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+ if (data == MAP_FAILED) {
+ pr_info("map failed");
+ return -1;
+ }
+
+ for (i = 0; i < testcase4_child_num; i++)
+ SEM_INIT(sync[i], i);
+
+ FORK_CHILD_ARGS(pid[0], testcase4_writer(sync, data));
+ for (i = 1; i < testcase4_child_num; i++)
+ FORK_CHILD_ARGS(pid[i], testcase4_reader(sync[i], data));
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid[0],
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ for (i = 0; i < testcase4_group_num; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ data[i].group_id = ag_info.spg_id;
+ }
+
+ for (i = 0; i < testcase4_group_num; i++) {
+ ag_info.spg_id = data[i].group_id;
+ for (j = 1; j < testcase4_child_num; j++) {
+ ag_info.pid = pid[j];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ }
+ }
+
+ sem_post(sync[0]);
+
+ for (i = 0; i < testcase4_child_num; i++)
+ WAIT_CHILD_STATUS(pid[i], out_kill);
+
+ return 0;
+
+out_kill:
+ for (i = 0; i < testcase4_child_num; i++)
+ kill(pid[i], SIGKILL);
+out:
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "single process joins two different groups; expect success")
+	TESTCASE(testcase2, "single process joins the same group twice; expect the second join to fail with EEXIST")
+	TESTCASE(testcase3, "multiple processes each join multiple groups; expect success")
+	TESTCASE(testcase4, "multiple processes each join multiple groups; the first process allocates and writes memory, later processes read it; expect joins and memory reads/writes to succeed")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task3.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task3.c
new file mode 100644
index 000000000000..b56999ba371e
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task3.c
@@ -0,0 +1,250 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed May 19 06:19:21 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * A process joins a group with read-only permission, then tries to read
+ * and write both ordinary memory and shared memory
+ * Expected: ordinary memory reads/writes succeed; shared memory reads
+ * succeed but writes fail
+ */
+static int testcase1(void)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ char *buf = mmap(NULL, 1024, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ if (buf == MAP_FAILED) {
+ pr_info("mmap failed");
+ return -1;
+ }
+ memset(buf, 'a', 10);
+ memcpy(buf + 20, buf, 10);
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+
+ buf = (char *)alloc_info.addr;
+	/* This store should trigger a segmentation fault */
+ memset(buf, 'a', 10);
+ memcpy(buf + 20, buf, 10);
+ if (strncmp(buf, buf + 20, 10))
+ pr_info("compare failed");
+ else
+ pr_info("compare success");
+
+	// unreachable
+	pr_info("ERROR!! unreachable statement reached");
+ return -1;
+out:
+ return ret;
+}
+
+/*
+ * Process A joins a group, allocates shared memory and reads/writes it;
+ * process B joins with read-only permission and reads/writes the shared
+ * memory allocated by A.
+ * Expected: B's read succeeds, B's write fails.
+ */
+static int testcase2_child(char *addr, sem_t *sync)
+{
+ int ret;
+ pr_info("in child process");
+
+ SEM_WAIT(sync);
+
+ pr_info("first two char: %c %c", addr[0], addr[1]);
+ if (addr[0] != 'a' || addr[1] != 'a') {
+ pr_info("memory context check failed");
+ return -1;
+ }
+
+ addr[1] = 'b';
+
+	// unreachable
+	pr_info("ERROR!! unreachable statement reached");
+
+ return -1;
+}
+
+static int testcase2(void)
+{
+ int ret;
+ pid_t pid;
+ sem_t *sync;
+ SEM_INIT(sync, 0);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ char *buf = (char *)alloc_info.addr;
+ memset(buf, 'a', 10);
+
+ FORK_CHILD_ARGS(pid, testcase2_child(buf, sync));
+ ag_info.pid = pid;
+ ag_info.prot = PROT_READ;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ sem_post(sync);
+
+ WAIT_CHILD_SIGNAL(pid, SIGSEGV, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Test steps: two processes join the same group, A read-write, B
+ * read-only; then A allocates memory and reads/writes it, and B reads
+ * and writes it
+ * Expected result: joins succeed; A's write succeeds, B's read
+ * succeeds, B's write fails
+ */
+static int testcase3_child(char **paddr, sem_t *sync)
+{
+ int ret;
+ pr_info("in child process");
+
+ SEM_WAIT(sync);
+
+ char *addr = *paddr;
+
+ pr_info("first two char: %c %c", addr[0], addr[1]);
+ if (addr[0] != 'a' || addr[1] != 'a') {
+ pr_info("memory context check failed");
+ return -1;
+ }
+
+ addr[1] = 'b';
+
+	// unreachable
+	pr_info("ERROR!! unreachable statement reached");
+
+ return -1;
+}
+
+static int testcase3(void)
+{
+ int ret;
+ pid_t pid;
+ sem_t *sync;
+ SEM_INIT(sync, 0);
+
+	char **paddr = mmap(NULL, sizeof(*paddr), PROT_READ | PROT_WRITE,
+			MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+	if (paddr == MAP_FAILED) {
+		pr_info("mmap failed");
+		return -1;
+	}
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ FORK_CHILD_ARGS(pid, testcase3_child(paddr, sync));
+ ag_info.pid = pid;
+ ag_info.prot = PROT_READ;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ char *buf = (char *)alloc_info.addr;
+ memset(buf, 'a', 10);
+ *paddr = buf;
+
+ sem_post(sync);
+
+ WAIT_CHILD_SIGNAL(pid, SIGSEGV, out);
+
+out:
+ return ret;
+}
+
+/*
+ * A process joins a group with invalid permission bits: 0, or with
+ * invalid bits set
+ * Expected: an error is returned
+ */
+static int testcase4(void)
+{
+ int i, ret;
+
+ struct sp_add_group_info ag_info[] = {
+ {
+ .pid = getpid(),
+ .prot = 0,
+ .spg_id = SPG_ID_AUTO,
+ },
+ {
+ .pid = getpid(),
+ .prot = PROT_EXEC,
+ .spg_id = SPG_ID_AUTO,
+ },
+ {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_EXEC,
+ .spg_id = SPG_ID_AUTO,
+ },
+ {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE | PROT_EXEC,
+ .spg_id = SPG_ID_AUTO,
+ },
+ };
+
+ for (i = 0; i < ARRAY_SIZE(ag_info); i++) {
+ ret = ioctl_add_group(dev_fd, ag_info + i);
+ if (!(ret == -1 && errno == EINVAL)) {
+ pr_info("ioctl_add_group return unexpected, i:%d, ret:%d, errno:%d", i, ret, errno);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD_SIGNAL(testcase1, SIGSEGV, "process joins with read-only permission, then reads/writes ordinary and shared memory; expect ordinary memory reads/writes to succeed, shared memory reads to succeed and writes to fail")
+	TESTCASE_CHILD(testcase2, "process A joins a group and allocates shared memory; process B joins read-only and reads/writes A's memory; expect B's read to succeed and write to fail")
+	TESTCASE_CHILD(testcase3, "two processes join the same group, A read-write, B read-only; A allocates and writes memory, B reads and writes it; expect A's write and B's read to succeed, B's write to fail")
+	TESTCASE_CHILD(testcase4, "process joins with invalid permission bits (0, or invalid bits set); expect an error")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task4.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task4.c
new file mode 100644
index 000000000000..fbd2073333d6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task4.c
@@ -0,0 +1,148 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat May 29 07:24:40 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_PROCESS_PER_GROUP 1024
+#define MAX_GROUP_PER_PROCESS 3000
+
+/*
+ * Test steps: a single process joins groups in a loop
+ * Expected result: the 3000th join fails; all earlier joins succeed
+ */
+static int testcase1(void)
+{
+ int i, ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ,
+ };
+
+ for (i = 0; i < MAX_GROUP_PER_PROCESS - 1; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK_FAIL(ioctl_add_group(dev_fd, &ag_info), ENOSPC, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Test steps: multiple threads join groups concurrently
+ * Expected result: the total number of successful joins across all
+ * threads is 2999
+ */
+static void *test2_thread(void *arg)
+{
+ int i, ret = 0;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ,
+ };
+
+ for (i = 0; i < MAX_GROUP_PER_PROCESS - 1; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+
+out:
+	pr_info("thread%d returned, %d groups have been added successfully", (int)(long)arg, i);
+	return (void *)(long)i;
+}
+
+#define TEST2_THREAD_NUM 20
+static int testcase2(void)
+{
+	int i, ret, sum = 0;
+	void *val;
+	pthread_t th[TEST2_THREAD_NUM];
+
+ for (i = 0; i < ARRAY_SIZE(th); i++)
+		TEST_CHECK(pthread_create(th + i, NULL, test2_thread, (void *)(long)i), out);
+
+ for (i = 0; i < ARRAY_SIZE(th); i++) {
+ TEST_CHECK(pthread_join(th[i], &val), out);
+		sum += (int)(long)val;
+ }
+
+ if (sum != MAX_GROUP_PER_PROCESS - 1) {
+ pr_info("MAX_GROUP_PER_PROCESS check failed, %d", sum);
+ return -1;
+ }
+
+out:
+ return ret;
+}
+
+/*
+ * Not yet implemented
+ * Test point: the per-group process limit is 1024
+ * Test steps: the process keeps forking and adds each child to a group
+ * Expected result: 1023 processes join the group successfully
+ */
+
+static int testcase3(void)
+{
+ int i = 0, ret;
+ pid_t pid[MAX_PROCESS_PER_GROUP + 1];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+
+ for (i = 0; i < MAX_PROCESS_PER_GROUP + 1; i++) {
+ FORK_CHILD_SLEEP(pid[i]);
+ ag_info.pid = pid[i];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+
+out:
+	if (i > MAX_PROCESS_PER_GROUP)
+		i = MAX_PROCESS_PER_GROUP;
+	if (i == MAX_PROCESS_PER_GROUP - 1) {
+ ret = 0;
+ pr_info("%d processes added to a group, success", i);
+ } else {
+ ret = -1;
+ pr_info("%d processes added to a group, failed", i);
+ }
+
+ while (i >= 0) {
+ KILL_CHILD(pid[i]);
+ i--;
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "single process joins groups in a loop; expect the 3000th join to fail and earlier ones to succeed")
+	TESTCASE_CHILD(testcase2, "multiple threads join groups concurrently; expect 2999 successful joins in total")
+	TESTCASE_CHILD(testcase3, "per-group process limit is 1024; keep forking and adding children to one group; expect 1023 processes to join successfully")
+};
+
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task5.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task5.c
new file mode 100644
index 000000000000..30ae4e1f6cf9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task5.c
@@ -0,0 +1,113 @@
+#include <stdio.h>
+#include <errno.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define GROUP_NUM 2999
+#define PROC_NUM 100
+
+int sem;
+int sem_dump;
+
+void *thread_dump(void *arg)
+{
+ sem_dec_by_one(sem_dump);
+ generateCoredump();
+ return (void *)0;
+}
+
+static int tc1_child(void)
+{
+ int ret = 0;
+ pthread_t thread;
+
+ pthread_create(&thread, NULL, thread_dump, NULL);
+ sem_dec_by_one(sem);
+ pr_info("child %d starts add group", getpid());
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (ret < 0)
+ break;
+ }
+
+ return 0;
+}
+
+/*
+ * testcase1
+ * Test point: spawn multiple processes that create groups (limit 49999)
+ * while making them coredump at the same time
+ */
+static int testcase1(void)
+{
+ int ret;
+ int pid;
+ int child[PROC_NUM];
+
+ sem = sem_create(1234, "12");
+ sem_dump = sem_create(3456, "sem for coredump control");
+
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (ret < 0) {
+ pr_info("create group failed");
+ return -1;
+ }
+		for (int j = 0; j < 5; j++)
+			(void)wrap_sp_alloc(i + 1, 4096, 0);
+ }
+ pr_info("create all groups success\n");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(tc1_child());
+ } else {
+ child[i] = pid;
+ }
+ }
+
+ sem_inc_by_val(sem, PROC_NUM);
+ pr_info("create all processes success\n");
+
+ sleep(3);
+
+ sem_inc_by_val(sem_dump, PROC_NUM);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+out:
+ sem_close(sem);
+ sem_close(sem_dump);
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "spawn multiple processes that create groups (limit 49999) while making them coredump at the same time")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_del_task/Makefile b/tools/testing/sharepool/testcase/api_test/sp_group_del_task/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_del_task/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_del_task/test_sp_group_del_task.c b/tools/testing/sharepool/testcase/api_test/sp_group_del_task/test_sp_group_del_task.c
new file mode 100644
index 000000000000..090cfd514dae
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_del_task/test_sp_group_del_task.c
@@ -0,0 +1,1083 @@
+#include "sharepool_lib.h"
+#include "sem_use.h"
+#include <stdlib.h>
+#include <errno.h>
+#include <assert.h>
+#include <pthread.h>
+#include <sys/types.h>
+
+#define PROC_NUM 8
+#define THREAD_NUM 5
+#define GROUP_NUM 16
+#define ALLOC_TYPE 4
+#define REPEAT_TIMES 2
+#define ALLOC_SIZE PAGE_SIZE
+#define PROT (PROT_READ | PROT_WRITE)
+
+static int group_ids[GROUP_NUM];
+static int default_id = 1;
+static int semid;
+
+static int add_multi_group();
+static int check_multi_group();
+static int delete_multi_group();
+static int process();
+void *thread_and_process_helper(int group_id);
+void *del_group_thread(void *arg);
+void *del_proc_from_group(void *arg);
+
+/* testcase1: A and B join the group; A calls sp_group_del_task to leave. Expected: success */
+static int testcase1(void)
+{
+	int ret = 0;
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ sharepool_print();
+ ret = wrap_del_from_group(getpid(), default_id);
+ pr_info("\nafter delete process %d\n", getpid());
+ sharepool_print();
+ exit(ret);
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+ pr_info("child process exited unexpected");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* testcase2: A and B join the group and B has allocated memory; A calls sp_group_del_task to leave. Expected: failure */
+static int testcase2(void)
+{
+ int ret = 0;
+ void *pret;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ pret = wrap_sp_alloc(default_id, ALLOC_SIZE, 0);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = -1;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = 0;
+ }
+
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ sharepool_print();
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0) {
+ pr_info("delete failed, errno: %d", errno);
+ } else {
+ pr_info("delete success");
+ }
+ pr_info("\nafter delete process %d\n", getpid());
+ sharepool_print();
+ exit(ret);
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+ pr_info("child process exited unexpected");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* testcase3: A joins the group; A calls sp_group_del_task to leave. Expected: success */
+static int testcase3(void)
+{
+ int ret = 0;
+
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ sharepool_print();
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0) {
+ pr_info("delete failed, errno: %d", errno);
+ } else {
+ pr_info("delete success");
+ }
+ pr_info("\nafter delete process %d\n", getpid());
+ sharepool_print();
+ exit(ret);
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+ pr_info("child process exited unexpected");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* testcase4: A joins the group and allocates memory; A calls sp_group_del_task to leave. Expected: failure. After freeing and deleting again: expected success */
+static int testcase4(void)
+{
+ int ret = 0;
+
+ int pid = fork();
+ if (pid == 0) {
+ void *addr;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ addr = wrap_sp_alloc(default_id, PAGE_SIZE, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ sharepool_print();
+
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0) {
+ pr_info("delete failed, errno: %d", errno);
+ } else {
+ pr_info("delete success");
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0) {
+ pr_info("delete failed, errno: %d", errno);
+ } else {
+ pr_info("delete success");
+ }
+
+ pr_info("\nafter delete process %d\n", getpid());
+ sharepool_print();
+ exit(ret);
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+ pr_info("child process exited unexpected");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* testcase5: N processes in groups with no allocated memory leave concurrently. Expected: success */
+static int testcase5(void)
+{
+ int ret = 0;
+ pid_t childs[PROC_NUM];
+
+ semid = sem_create(1234, "concurrent delete from group");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ ret = add_multi_group();
+ if (ret < 0) {
+ pr_info("process %d add all groups failed.", getpid());
+ exit(-1);
+ }
+ ret = check_multi_group();
+ if (ret < 0) {
+ pr_info("process %d check all groups failed.", getpid());
+ exit(-1);
+ } else {
+ pr_info("process %d check all groups success.", getpid());
+ }
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ ret = delete_multi_group();
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ sem_close(semid);
+ return ret;
+}
+
+/* testcase6: N processes in a group with allocated memory leave concurrently. Expected: failure */
+static int testcase6(void)
+{
+ int ret = 0;
+ void *pret;
+ pid_t childs[PROC_NUM];
+
+ semid = sem_create(1234, "concurrent delete from group");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ pr_info("fork child %d success", getpid());
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ pret = wrap_sp_alloc(default_id, ALLOC_SIZE, 0);
+ if (pret == (void *)-1) {
+ pr_info("alloc failed %d", getpid());
+ sem_inc_by_one(semid);
+ exit(-1);
+ } else {
+				pr_info("alloc addr: %p", pret);
+ }
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ pr_info("child %d del", getpid());
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret == 0)
+ exit(-1);
+ exit(0);
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ } else {
+ pr_info("child%d test success, %d", i, status);
+ }
+ }
+
+ sem_close(semid);
+ sharepool_print();
+
+ return ret;
+}
+
+/* testcase7: the parent allocates and frees while children keep joining and leaving the group; kill them after a while. Expected: no deadlock or leak */
+static int testcase7(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("parent %d add into group failed. errno: %d", getpid(), ret);
+ return -1;
+ }
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0)
+ pr_info("child %d add into group failed. errno: %d", getpid(), ret);
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0)
+ pr_info("child %d del from group failed. errno: %d", getpid(), ret);
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+	int alloc_time = 200, j = 0;
+	void *pret;
+	while (j++ < alloc_time) {
+ pret = wrap_sp_alloc(default_id, ALLOC_SIZE, 0);
+ if (pret == (void *)-1)
+ pr_info("alloc failed errno %d", errno);
+ else {
+			ret = wrap_sp_free(pret);
+ if (ret < 0) {
+ pr_info("free failed errno %d", ret);
+ goto free_error;
+ }
+ }
+ }
+
+free_error:
+ for (int i = 0; i < PROC_NUM; i++) {
+ kill(childs[i], SIGKILL);
+ int status;
+ waitpid(childs[i], &status, 0);
+ }
+
+ sharepool_print();
+ return 0;
+}
+
+/* testcase8: N processes join groups; half of them leave the groups, half exit. Expected: stable */
+static int testcase8(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+
+ semid = sem_create(1234, "half exit, half delete");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ //ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ add_multi_group();
+ check_multi_group();
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ if (getpid() % 2) {
+ pr_info("child %d exit", getpid());
+ exit(0);
+ } else {
+ pr_info("child %d del", getpid());
+ //ret = wrap_del_from_group(getpid(), default_id);
+ ret = delete_multi_group();
+ exit(ret);
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ sharepool_print();
+ sem_close(semid);
+ return ret;
+}
+
+/* testcase9: N processes join groups, then sequentially do add-alloc-free-u2k-k2u-delete. Expected: stable */
+static int testcase9(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+
+ return 0;
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ add_multi_group();
+ if (check_multi_group()) {
+ pr_info("child %d add all groups check failed.", getpid());
+ exit(-1);
+ }
+ exit(process());
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ sharepool_print();
+ return ret;
+}
+
+/* testcase10: multiple threads call the delete interface concurrently. Expect exactly one to succeed */
+static int testcase10(void)
+{
+ int ret = 0;
+ int del_fail = 0, del_succ = 0;
+ void *tret;
+ pthread_t threads[THREAD_NUM];
+
+ semid = sem_create(1234, "call del_group when all threads are ready.");
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process add group failed.");
+ return -1;
+ }
+ for (int i = 0; i < THREAD_NUM; i++) {
+ ret = pthread_create(threads + i, NULL, del_group_thread, (void *)(long)i);
+ if (ret < 0) {
+ pr_info("pthread create failed.");
+ return -1;
+ }
+ }
+
+ // wait until all threads are created.
+ sharepool_print();
+ sem_dec_by_val(semid, THREAD_NUM);
+
+ for (int i = 0; i < THREAD_NUM; i++) {
+ ret = pthread_join(threads[i], &tret);
+ if (ret < 0) {
+ pr_info("pthread %d join failed.", i);
+ ret = -1;
+ }
+ if ((long)tret < 0) {
+ pr_info("thread %dth del failed", i);
+ del_fail++;
+ } else {
+ pr_info("thread %dth del success", i);
+ del_succ++;
+ }
+ }
+
+ pr_info("thread total num: %d, del fail %d, del success %d\n",
+ THREAD_NUM, del_fail, del_succ);
+ sharepool_print();
+ return ret;
+}
+
+/* testcase11: group-leave and alloc interfaces run concurrently. Expect no deadlock or leak */
+static int testcase11(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+
+ semid = sem_create(1234, "half delete, half alloc");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed!");
+ return -1;
+ } else if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group failed.", getpid());
+ return -1;
+ } else {
+ pr_info("process %d add group success.", getpid());
+ }
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ if (getpid() % 2) {
+ void *pret = wrap_sp_alloc(default_id, ALLOC_SIZE, 0);
+ if (pret == (void *)-1)
+ pr_info("child %d alloc failed. errno is: %d", getpid(), errno);
+ else
+ pr_info("child %d alloc success.", getpid());
+ } else {
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0)
+ pr_info("child %d del failed. errno is: %d", getpid(), errno);
+ else
+ pr_info("child %d del success.", getpid());
+ }
+
+ pr_info("child %d finish, sem val is %d", getpid(), sem_get_value(semid));
+ while (1); /* wait for the parent to kill us */
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM); /* let child process alloc or del */
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+
+ sharepool_print(); /* observe the result */
+ sem_close(semid);
+ return ret;
+}
+
+/* testcase12: process A calls del to remove process C from the group while process B does the same concurrently. Expect success */
+static int testcase12(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+ int ppid;
+ int del_fail = 0, del_succ = 0;
+
+ ppid = getpid();
+ semid = sem_create(1234, "half exit, half delete");
+ ret = wrap_add_group(ppid, PROT, default_id);
+ if (ret < 0) {
+ pr_info("parent proc %d add group failed.", ppid);
+ return -1;
+ }
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ sem_inc_by_one(semid);
+ /* a2 add group finish*/
+ sem_check_zero(semid);
+ sem_dec_by_one(semid);
+
+ ret = wrap_del_from_group(ppid, default_id);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ ret = wrap_add_group(childs[i], PROT, default_id);
+ if (ret < 0) {
+ pr_info("p %d add group failed.", childs[i]);
+ return -1;
+ }
+ }/* a1 add group finish */
+ sharepool_print();
+ sem_inc_by_val(semid, PROC_NUM);
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("proc %dth del failed", i);
+ del_fail++;
+ } else {
+ pr_info("proc %dth del success", i);
+ del_succ++;
+ }
+ }
+
+ pr_info("del fail: %d, del success: %d", del_fail, del_succ);
+ sharepool_print();
+ sem_close(semid);
+ return del_succ == 1 ? 0 : -1;
+}
+
+/* testcase13: process A calls del to remove N processes from the group while those N processes exit concurrently */
+static int testcase13(void)
+{
+ int ret = 0;
+ void *tret;
+ int childs[PROC_NUM];
+ pthread_t threads[PROC_NUM]; /* one thread del one proc, so use PROC_NUM here */
+ int del_fail = 0, del_succ = 0;
+
+ semid = sem_create(1234, "exit & del group");
+ ret = wrap_add_group(getpid(), PROT, default_id);
+ if (ret < 0) {
+ pr_info("parent proc %d add group failed.", getpid());
+ return -1;
+ }
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ sem_inc_by_one(semid);
+ /* a2 add group finish*/
+ sem_check_zero(semid);
+ sem_dec_by_one(semid);
+
+ exit(0);
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ ret = wrap_add_group(childs[i], PROT, default_id);
+ if (ret < 0) {
+ pr_info("p %d add group failed.", childs[i]);
+ return -1;
+ }
+ }/* a1 add group finish */
+ sharepool_print();
+
+ for (int j = 0; j < PROC_NUM; j++) {
+ ret = pthread_create(threads + j, NULL, del_proc_from_group, (void *)(long)childs[j]);
+ }
+ sem_inc_by_val(semid, 2 * PROC_NUM);
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("proc %dth exit unexpected", i);
+ }
+ }
+
+ for (int j = 0; j < PROC_NUM; j++) {
+ ret = pthread_join(threads[j], &tret);
+ if (ret < 0) {
+ pr_info("pthread %d join failed.", j);
+ ret = -1;
+ }
+ if ((long)tret < 0) {
+ pr_info("thread %dth del failed", j);
+ del_fail++;
+ } else {
+ pr_info("thread %dth del success", j);
+ del_succ++;
+ }
+ }
+
+ pr_info("del fail: %d, del success: %d", del_fail, del_succ);
+ sharepool_print();
+ sem_close(semid);
+ return 0;
+}
+
+/*
+ * Process A joins group 1000 and process B does not. A calls the
+ * group-delete interface to remove B from group 1000. Expect failure.
+ */
+static int testcase14(void)
+{
+ pid_t pid;
+ int ret, group_id = 1000;
+
+ ret = wrap_add_group(getpid(), PROT, group_id);
+ if (ret < 0) {
+ pr_info("add group failed.");
+ return -1;
+ }
+
+ pid = fork();
+ if (pid == 0) {
+ /* do nothing for child */
+ while (1);
+ }
+
+ ret = wrap_del_from_group(pid, group_id);
+ if (!ret) {
+ pr_info("del task from group success unexpected");
+ ret = -1;
+ } else {
+ pr_info("del task from group failed as expected");
+ ret = 0;
+ }
+
+ kill(pid, SIGKILL);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "A and B join the group; A calls sp_group_del_task to leave. Expect success")
+ TESTCASE_CHILD(testcase2, "A and B join the group and B allocates memory; A calls sp_group_del_task to leave. Expect failure")
+ TESTCASE_CHILD(testcase3, "A joins the group; A calls sp_group_del_task to leave. Expect success")
+ TESTCASE_CHILD(testcase4, "A joins the group and allocates memory; sp_group_del_task fails as expected; after free, del succeeds")
+ TESTCASE_CHILD(testcase5, "N processes concurrently leave a group with no allocated memory. Expect success")
+ TESTCASE_CHILD(testcase6, "N processes concurrently leave a group with allocated memory. Expect failure")
+ TESTCASE_CHILD(testcase7, "Parent allocates and frees while children join and leave the group; killed after a while. Expect no deadlock or leak")
+ TESTCASE_CHILD(testcase8, "N processes join the group; half leave the group and half exit. Expect stable behavior")
+ //TESTCASE_CHILD(testcase9, "N processes join groups, then sequentially run join-alloc-free-u2k-k2u-leave. Expect stable behavior")
+ TESTCASE_CHILD(testcase10, "Multiple threads call the delete interface concurrently. Expect exactly one to succeed")
+ TESTCASE_CHILD(testcase11, "Group-leave and alloc interfaces run concurrently. Expect no deadlock or leak")
+ TESTCASE_CHILD(testcase12, "Process A calls del to remove process C from the group while process B does the same. Expect success")
+ TESTCASE_CHILD(testcase13, "Process A calls del to remove N processes from the group while those N processes exit")
+ TESTCASE_CHILD(testcase14, "Process A joins group 1000, B does not; A calls the group-delete interface to remove B from group 1000. Expect failure")
+};
+
+static int add_multi_group(void)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int check_multi_group(void)
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ }
+ for (int i = 0; i < GROUP_NUM; i++) {
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ }
+ return ret;
+}
+
+static int delete_multi_group(void)
+{
+ int ret = 0;
+ int fail = 0, suc = 0;
+ // delete from all groups
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_del_from_group(getpid(), group_ids[i]);
+ if (ret < 0) {
+ //pr_info("process %d delete from group %d failed, errno: %d", getpid(), group_ids[i], errno);
+ fail++;
+ } else {
+ pr_info("process %d delete from group %d success", getpid(), group_ids[i]);
+ suc++;
+ }
+ }
+
+ return fail;
+}
+
+static int process(void)
+{
+ int ret = 0;
+ for (int j = 0; j < REPEAT_TIMES; j++) {
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = thread_and_process_helper(group_ids[i]);
+ if (ret < 0) {
+ pr_info("thread_and_process_helper failed");
+ return -1;
+ }
+ }
+ }
+
+ return ret;
+}
+
+static int try_del_from_group(int group_id)
+{
+ int ret = wrap_del_from_group(getpid(), group_id);
+
+ return ret < 0 ? -errno : 0;
+}
+
+void *thread_and_process_helper(int group_id)
+{
+ int ret = 0, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TYPE] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TYPE] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+ // hugepage
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+ // hugepage, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+ // normal page, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+ // normal page
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+ //pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+ //pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ return 0;
+
+error:
+ return -1;
+}
+
+void *del_group_thread(void *arg)
+{
+ int ret = 0;
+ int i = (int)(long)arg;
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ pr_info("thread %d now tries to exit from group %d", getpid() + i + 1, default_id);
+ ret = wrap_del_from_group(getpid() + i + 1, default_id);
+ if (ret < 0)
+ pthread_exit((void *)-1);
+ pthread_exit((void *)0);
+}
+
+void *del_proc_from_group(void *arg)
+{
+ sem_dec_by_one(semid);
+ pthread_exit((void *)(long)wrap_del_from_group((int)(long)arg, default_id));
+}
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/Makefile b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid.c b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid.c
new file mode 100644
index 000000000000..604de856f9ab
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid.c
@@ -0,0 +1,179 @@
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+
+
+/*
+ * testcase1
+ * Test point: valid pid that has already joined a group
+ * Expected result: the query succeeds and returns the correct group_id
+ */
+static int testcase1(void)
+{
+ int group_id = 10;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ goto out;
+ }
+
+ ret = ioctl_find_first_group(dev_fd, pid);
+ if (ret != group_id) {
+ pr_info("failed, group_id: %d, expected: %d", ret, group_id);
+ ret = -1;
+ } else {
+ //pr_info("testcase1 success!!");
+ ret = 0;
+ }
+
+out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return ret;
+}
+
+/*
+ * testcase2
+ * Test point: valid pid whose process is exiting and had joined a group
+ * Expected result: the query fails with errno ESRCH
+ */
+static int testcase2_result = -1;
+static pid_t testcase2_child_pid;
+static void testcase2_sigchld_handler(int num)
+{
+ int ret = ioctl_find_first_group(dev_fd, testcase2_child_pid);
+ if (!(ret < 0 && errno == ESRCH)) {
+ pr_info("failed, ret: %d, errno: %d", ret, errno);
+ pr_info("testcase2 failed!!");
+ testcase2_result = -1;
+ } else {
+ //pr_info("testcase2 success!!");
+ testcase2_result = 0;
+ }
+}
+
+static int testcase2(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ pr_info("child pid %d", getpid());
+ while (1);
+ exit(-1);
+ }
+
+ struct sigaction sa = {0};
+ struct sigaction osa = {0};
+ sa.sa_handler = testcase2_sigchld_handler;
+ sigaction(SIGCHLD, &sa, &osa);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 10,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret)
+ pr_info("add group failed, errno: %d", errno);
+
+ testcase2_child_pid = pid;
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+
+ sigaction(SIGCHLD, &osa, NULL);
+
+ return testcase2_result;
+}
+
+/*
+ * testcase3
+ * Test point: valid pid that has not joined any group
+ * Expected result: the query fails with errno ENODEV
+ */
+static int testcase3(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ }
+
+ int ret = ioctl_find_first_group(dev_fd, pid);
+ if (!(ret < 0 && errno == ENODEV)) {
+ pr_info("failed, ret: %d, errno: %d", ret, errno);
+ pr_info("testcase3 failed!!");
+ ret = -1;
+ } else {
+ //pr_info("testcase3 success!!");
+ ret = 0;
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return ret;
+}
+
+/*
+ * testcase4
+ * Test point: query with an invalid pid
+ * Expected result: the query fails with errno ESRCH
+ */
+static int testcase4(void)
+{
+ int ret = ioctl_find_first_group(dev_fd, -1);
+ if (!(ret < 0 && errno == ESRCH)) {
+ pr_info("failed, ret: %d, errno: %d", ret, errno);
+ pr_info("testcase4 failed!!");
+ ret = -1;
+ } else {
+ //pr_info("testcase4 success!!");
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE(testcase1, "Valid pid that has joined a group. Expect the query to succeed and return the correct group_id")
+ TESTCASE(testcase2, "Valid pid of an exiting process that had joined a group. Expect the query to fail with errno ESRCH")
+ TESTCASE(testcase3, "Valid pid that has not joined a group. Expect the query to fail with errno ENODEV")
+ TESTCASE(testcase4, "Invalid pid. Expect the query to fail with errno ESRCH")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid2.c b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid2.c
new file mode 100644
index 000000000000..7b82e591fce7
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid2.c
@@ -0,0 +1,318 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed May 26 06:20:07 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+
+static int spg_id_query(int id, int *ids, int len)
+{
+ for (int i = 0; i < len; i++)
+ if (id == ids[i])
+ return i;
+
+ return -1;
+}
+
+/*
+ * A process joins n groups, then queries its group ids.
+ * Expect joining to succeed and the query to return the correct ids.
+ */
+static int testcase1(void)
+{
+#define group_nr_test1 10
+ int ret = 0, nr, i;
+ int spg_id1[group_nr_test1];
+ int spg_id2[group_nr_test1 + 1];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(spg_id1); i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ spg_id1[i] = ag_info.spg_id;
+ }
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = getpid(),
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK(ioctl_find_group_by_pid(dev_fd, &info), out);
+
+ if (nr != ARRAY_SIZE(spg_id1)) {
+ pr_info("sp_group_id_by_pid check failed, group_nr:%d, expect:%d", nr, ARRAY_SIZE(spg_id1));
+ return -1;
+ }
+
+ for (i = 0; i < group_nr_test1; i++)
+ if (spg_id_query(spg_id2[i], spg_id1, ARRAY_SIZE(spg_id1)) < 0) {
+ pr_info("sp_group_id_by_pid check failed, spg_id %d no found", spg_id2[i]);
+ return -1;
+ }
+
+out:
+ return ret;
+}
+
+/*
+ * A process joins n groups, then queries with an undersized buffer.
+ * Expect joining to succeed and the query to fail with errno E2BIG.
+ */
+static int testcase2(void)
+{
+#define group_nr_test2 10
+ int ret = 0, nr, i;
+ int spg_id2[group_nr_test2 - 1];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ for (i = 0; i < group_nr_test2; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = getpid(),
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), E2BIG, out);
+
+out:
+ return ret;
+}
+
+/*
+ * A process that joined no group queries its group ids.
+ * Expect the query to fail with errno ENODEV.
+ */
+static int testcase3(void)
+{
+ int ret = 0, nr;
+ int spg_id2[3];
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = getpid(),
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), ENODEV, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Query the group id of a kernel thread.
+ * Expect the query to fail with errno ENODEV.
+ */
+static int testcase4(void)
+{
+ int ret = 0, nr;
+ int spg_id2[3];
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = 2, // kthreadd
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), ENODEV, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Query with an invalid pid.
+ * Expect the query to fail with errno ESRCH.
+ */
+static int testcase5(void)
+{
+ int ret = 0, nr;
+ int spg_id2[3];
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = -1,
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), ESRCH, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Use an auto group id, then pass a NULL num pointer / NULL id buffer
+ * to the query. Expect failure with errno EINVAL.
+ */
+static int testcase6(void)
+{
+ int ret = 0, nr;
+ int spg_id2[2];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = getpid(),
+ .spg_ids = spg_id2,
+ .num = NULL
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), EINVAL, out);
+ info.num = &nr;
+ info.spg_ids = NULL;
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), EINVAL, out);
+
+out:
+ return ret;
+}
+
+/*
+ * A child process joins n groups; query its group ids.
+ * Expect joining to succeed and the query to return the correct ids.
+ */
+static int testcase7(void)
+{
+#define group_nr_test7 10
+ pid_t pid;
+ int ret = 0, nr, i;
+ int spg_id1[group_nr_test7];
+ int spg_id2[group_nr_test7 + 1];
+
+ FORK_CHILD_DEADLOOP(pid);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(spg_id1); i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ spg_id1[i] = ag_info.spg_id;
+ }
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = pid,
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK(ioctl_find_group_by_pid(dev_fd, &info), out);
+
+ if (nr != ARRAY_SIZE(spg_id1)) {
+ pr_info("sp_group_id_by_pid check failed, group_nr:%d, expect:%d", nr, ARRAY_SIZE(spg_id1));
+ ret = -1;
+ goto out;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(spg_id1); i++)
+ if (spg_id_query(spg_id2[i], spg_id1, ARRAY_SIZE(spg_id1)) < 0) {
+ pr_info("sp_group_id_by_pid check failed, spg_id %d no found", spg_id2[i]);
+ ret = -1;
+ goto out;
+ }
+
+out:
+ KILL_CHILD(pid);
+ return ret;
+}
+
+/*
+ * Steps: the process allocates memory via a direct call, then queries its groups.
+ * Expected result: the query fails with errno ENODEV.
+ */
+static int testcase8(void)
+{
+ int ret = 0;
+ void *buf;
+ unsigned long size = 1024;
+
+ buf = (void *)wrap_sp_alloc(SPG_ID_DEFAULT, size, 0);
+ if (buf == (void *)-1)
+ return -1;
+
+ TEST_CHECK_FAIL(ioctl_find_first_group(dev_fd, getpid()), ENODEV, out);
+ return 0;
+
+out:
+ return ret;
+}
+
+/*
+ * Steps: perform k2task, then query groups.
+ * Expected result: the query fails with errno ENODEV.
+ */
+static int testcase9(void)
+{
+ int ret = 0;
+ void *buf;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+
+ kva = wrap_vmalloc(size, true);
+ if (!kva)
+ return -1;
+
+ uva = wrap_k2u(kva, size, SPG_ID_DEFAULT, 0);
+ if (!uva) {
+ ret = -1;
+ goto out_vfree;
+ }
+
+ TEST_CHECK_FAIL(ioctl_find_first_group(dev_fd, getpid()), ENODEV, out_unshare);
+ ret = 0;
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "A process joins n groups and queries its ids. Expect joining to succeed and the queried ids to be correct")
+ TESTCASE_CHILD(testcase2, "A process joins n groups and queries with a short buffer. Expect joining to succeed and the query to fail with errno E2BIG")
+ TESTCASE(testcase3, "A process that joined no group queries its ids. Expect the query to fail with errno ENODEV")
+ TESTCASE(testcase4, "Query the group id of a kernel thread. Expect the query to fail with errno ENODEV")
+ TESTCASE(testcase5, "Invalid pid. Expect the query to fail with errno ESRCH")
+ TESTCASE_CHILD(testcase6, "Auto group id with a NULL num pointer / NULL id buffer. Expect failure with errno EINVAL")
+ TESTCASE(testcase7, "A child process joins n groups; query its ids. Expect joining to succeed and the queried ids to be correct")
+ TESTCASE_CHILD(testcase8, "Allocate via a direct call, then query groups. Expect the query to fail with errno ENODEV")
+ TESTCASE_CHILD(testcase9, "k2task, then query groups. Expect the query to fail with errno ENODEV")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_id_of_current/Makefile b/tools/testing/sharepool/testcase/api_test/sp_id_of_current/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_id_of_current/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_id_of_current/test_sp_id_of_current.c b/tools/testing/sharepool/testcase/api_test/sp_id_of_current/test_sp_id_of_current.c
new file mode 100644
index 000000000000..7aa05b2bdca6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_id_of_current/test_sp_id_of_current.c
@@ -0,0 +1,112 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Dec 19 11:29:06 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdbool.h>
+#include <pthread.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+/*
+ * case1: the process joins no group; multiple threads concurrently query
+ * the process's local group ID. Expect every query to succeed.
+ */
+/* Wait for a signal, then start querying. */
+static void *thread1(void *arg)
+{
+ int semid = *(int *)arg;
+
+ sem_dec_by_one(semid);
+ for (int i = 0; i < 20; i++) {
+ int ret = wrap_sp_id_of_current();
+ if (ret < 0)
+ return (void *)-1;
+ }
+
+ return NULL;
+}
+
+#define TEST1_CHILD_NUM 20
+#define TEST1_THREAD_NUM 20
+static int child1(int idx)
+{
+ void *thread_ret;
+ int semid, j, ret = 0;
+ pthread_t threads[TEST1_THREAD_NUM];
+
+ semid = sem_create(4466 + idx, "sp_id_of_current sem");
+ if (semid < 0)
+ return semid;
+
+ for (j = 0; j < TEST1_THREAD_NUM; j++) {
+ ret = pthread_create(threads + j, NULL, thread1, &semid);
+ if (ret < 0) {
+ pr_info("pthread create failed");
+ goto out_pthread_join;
+ }
+ }
+
+ sem_set_value(semid, TEST1_THREAD_NUM);
+
+out_pthread_join:
+ for (j--; j >= 0; j--) {
+ pthread_join(threads[j], &thread_ret);
+ if (thread_ret != NULL) {
+ pr_info("child thread%d exited unexpected", j + 1);
+ ret = -1;
+ }
+ }
+
+ sem_close(semid);
+
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ pid_t child[TEST1_CHILD_NUM];
+
+ for (int i = 0; i < TEST1_CHILD_NUM; i++) {
+ pid_t pid = fork();
+ if (pid == 0)
+ exit(child1(i));
+ child[i] = pid;
+ }
+
+ // wait for the children to exit
+ for (int i = 0; i < TEST1_CHILD_NUM; i++) {
+ int status;
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d exited unexpected", i);
+ ret = -1;
+ }
+ child[i] = 0;
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE(testcase1, "The process joins no group; multiple threads concurrently fetch its local group ID. Expect all queries to succeed")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/Makefile b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u.c b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u.c
new file mode 100644
index 000000000000..24fe1d2320ca
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u.c
@@ -0,0 +1,624 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 10:45:21 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define FAKE_NUM 3
+
+/*
+ * testcase1: share memory allocated with vmalloc_user; the k2u pid has not joined a group;
+ *            spg_id == SPG_ID_DEFAULT or SPG_ID_NONE. Expect success.
+ * testcase2: vmalloc_user memory; the k2u pid has joined a group. (1) spg_id == SPG_ID_DEFAULT:
+ *            expect success. (2) spg_id == SPG_ID_NONE: expect failure with EINVAL.
+ * testcase3: vmalloc_huge_user memory. (1) the k2u pid has joined a group and spg_id is a
+ *            valid, never-used id. Expect failure.
+ * testcase4: vmalloc_user memory. (1) the k2u kva does not exist: expect failure. (2) the k2u
+ *            size exceeds the allocation, or is 0: expect failure.
+ * testcase5: vmalloc_huge_user memory. (1) the k2u pid has not joined a group and
+ *            spg_id == SPG_ID_NONE: expect success. (2) unaligned k2u kva/size: expect success.
+ *            (3) k2u sp_flags = SP_DVPP: expect success.
+ */
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int prepare(struct vmalloc_info *ka_info, bool ishuge)
+{
+ int ret;
+ if (ishuge) {
+ ret = ioctl_vmalloc_hugepage(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc_hugepage failed, errno: %d", errno);
+ return -1;
+ }
+ } else {
+ ret = ioctl_vmalloc(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = ka_info->addr,
+ .size = ka_info->size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, ka_info);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ };
+ for (int i = 0; i < sizeof(k2u_infos) / sizeof(k2u_infos[0]); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_infos[i]);
+ if (ret < 0) {
+ pr_info("testcase1 ioctl_k2u %d failed unexpected, errno: %d", i, errno);
+ goto out;
+ } else {
+ //pr_info("testcase1 ioctl_k2u %d success expected", i);
+ ret = ioctl_unshare(dev_fd, &k2u_infos[i]);
+ if (ret < 0) {
+ pr_info("testcase1 ioctl_unshare %d failed, errno: %d", i, errno);
+ goto out;
+ }
+ }
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (addgroup() != 0) {
+ return -1;
+ }
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_infos[0]);
+ if (ret < 0) {
+ pr_info("testcase2 ioctl_k2u 0 failed unexpected, errno: %d", errno);
+ goto out;
+ } else {
+ //pr_info("testcase2 ioctl_k2u 0 success unexpected");
+ ret = ioctl_unshare(dev_fd, &k2u_infos[0]);
+ if (ret < 0) {
+ pr_info("testcase2 ioctl_unshare 0 failed, errno: %d", errno);
+ goto out;
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_infos[1]);
+ if (ret != 0 && errno == EINVAL) {
+ //pr_info("testcase2 ioctl_k2u 1 failed expected");
+ ret = 0;
+ goto out;
+ } else if (ret != 0) {
+ pr_info("testcase2 ioctl_k2u 1 failed unexpected, errno: %d", errno);
+ goto out;
+ } else {
+ pr_info("testcase2 ioctl_k2u 1 success unexpected");
+ ioctl_unshare(dev_fd, &k2u_infos[1]);
+ ret = -1;
+ goto out;
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase3(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_AUTO_MIN,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ };
+ int errs[] = {EINVAL, ESRCH};
+
+	if (addgroup() != 0) {
+		ret = -1;
+		goto out;
+	}
+
+ for (int i = 0; i < sizeof(k2u_infos) / sizeof(k2u_infos[0]); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_infos[i]);
+ if (ret != 0 && errno == errs[i]) {
+ //pr_info("testcase3 ioctl_k2u %d failed expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase3 ioctl_k2u %d failed unexpected, errno: %d", i, errno);
+ goto out;
+ } else {
+ pr_info("testcase3 ioctl_k2u %d success unexpected", i);
+ ioctl_unshare(dev_fd, &k2u_infos[i]);
+ ret = -1;
+ goto out;
+ }
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase4(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct vmalloc_info fake_ka_info = {
+ .size = PAGE_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &fake_ka_info);
+ if (ret < 0) {
+ pr_info("testcase4 vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+ ioctl_vfree(dev_fd, &fake_ka_info);
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = fake_ka_info.addr,
+ .size = fake_ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ /*{
+ .kva = ka_info.addr,
+ .size = ka_info.size * FAKE_NUM,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ */{
+ .kva = ka_info.addr,
+ .size = 0,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = 0,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(k2u_infos) / sizeof(k2u_infos[0]); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_infos[i]);
+ if (ret != 0 && errno == EINVAL) {
+ //pr_info("testcase4 ioctl_k2u %d failed expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase4 ioctl_k2u %d failed unexpected, errno: %d", i, errno);
+ goto out;
+ } else {
+ pr_info("testcase4 ioctl_k2u %d success unexpected", i);
+ ioctl_unshare(dev_fd, &k2u_infos[i]);
+ ret = -1;
+ goto out;
+ }
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase5(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size - 1,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr + 1,
+ .size = ka_info.size - 1,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = SP_DVPP,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(k2u_infos) / sizeof(k2u_infos[0]); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_infos[i]);
+ if (ret < 0) {
+ pr_info("testcase5 ioctl_k2u %d failed unexpected, errno: %d", i, errno);
+ goto out;
+ } else {
+ //pr_info("testcase5 ioctl_k2u %d success expected", i);
+ ret = ioctl_unshare(dev_fd, &k2u_infos[i]);
+ if (ret < 0) {
+ pr_info("testcase5 ioctl_unshare %d failed, errno: %d", i, errno);
+ goto out;
+ }
+ }
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase6(void)
+{
+ int ret = 0;
+
+ struct sp_make_share_info k2u_info = {
+ .kva = 0,
+ .size = 4096,
+ .spg_id = 1,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase6 ioctl_k2u failed expected, errno: %d", errno);
+		return 0;
+	} else {
+		pr_info("testcase6 ioctl_k2u success unexpected");
+		ioctl_unshare(dev_fd, &k2u_info);
+		return -1;
+	}
+}
+
+static int testcase7(void)
+{
+ int ret = 0;
+
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = 1,
+ .sp_flags = 25,
+ .pid = getpid(),
+ };
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase7 ioctl_k2u failed expected, errno: %d", errno);
+		ret = 0;
+	} else {
+		pr_info("testcase7 ioctl_k2u success unexpected");
+		ioctl_unshare(dev_fd, &k2u_info);
+		ret = -1;
+	}
+
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase8(void)
+{
+ int ret = 0;
+
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 25,
+ .pid = getpid(),
+ };
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase8 ioctl_k2u failed expected, errno: %d", errno);
+		ret = 0;
+	} else {
+		pr_info("testcase8 ioctl_k2u success unexpected");
+		ioctl_unshare(dev_fd, &k2u_info);
+		ret = -1;
+	}
+
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase9(void)
+{
+ int ret = 0;
+#if 0
+	unsigned long flag = 0;
+ int node_id = 5;
+
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ flag |= (unsigned long)node_id << NODE_ID_SHIFT;
+ flag |= SP_SPEC_NODE_ID;
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = flag,
+ .pid = getpid(),
+ };
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase9 ioctl_k2u failed expected, errno: %d", errno);
+		ret = 0;
+	} else {
+		pr_info("testcase9 ioctl_k2u success unexpected");
+		ioctl_unshare(dev_fd, &k2u_info);
+		ret = -1;
+	}
+
+ ioctl_vfree(dev_fd, &ka_info);
+#endif
+ return ret;
+}
+
+static int testcase10(void)
+{
+ int ret = 0;
+#if 0
+	unsigned long flag = 0;
+ int node_id = 5;
+
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ flag |= (unsigned long)node_id << NODE_ID_SHIFT;
+ flag |= SP_SPEC_NODE_ID;
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = 1,
+ .sp_flags = flag,
+ .pid = getpid(),
+ };
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+ if (ret < 0)
+ return -1;
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase10 ioctl_k2u failed expected, errno: %d", errno);
+		ret = 0;
+	} else {
+		pr_info("testcase10 ioctl_k2u success unexpected");
+		ioctl_unshare(dev_fd, &k2u_info);
+		ret = -1;
+	}
+
+ ioctl_vfree(dev_fd, &ka_info);
+#endif
+ return ret;
+}
+
+static int testcase11(void)
+{
+ int ret = 0;
+	unsigned long flag = 0;
+
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (ioctl_kmalloc(dev_fd, &ka_info) < 0) {
+ pr_info("kmalloc failed");
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = 1,
+ .sp_flags = flag,
+ .pid = getpid(),
+ };
+
+	ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+	if (ret < 0) {
+		ioctl_kfree(dev_fd, &ka_info);
+		return -1;
+	}
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed as expected");
+ ret = 0;
+ } else {
+ pr_info("k2u success unexpected");
+ ret = -1;
+ }
+
+ ioctl_kfree(dev_fd, &ka_info);
+ return ret;
+}
+
+#define PROC_NUM 100
+static int testcase12(void)
+{
+ pid_t child[PROC_NUM];
+ pid_t pid;
+ int ret = 0;
+ int semid = sem_create(1234, "sem");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid == 0) {
+ sem_dec_by_one(semid);
+ pr_info("child %d started!", getpid());
+ exit(testcase4());
+ } else {
+ child[i] = pid;
+ }
+ }
+
+ sem_inc_by_val(semid, PROC_NUM);
+ pr_info("sem released!");
+ for (int i = 0; i < PROC_NUM; i++) {
+ WAIT_CHILD_STATUS(child[i], out);
+ }
+out:
+ sem_close(semid);
+ return ret;
+}
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "vmalloc_user shared memory; k2u pid not in a group; spg_id == SPG_ID_DEFAULT or SPG_ID_NONE. Expected to succeed.")
+	TESTCASE_CHILD_MANUAL(testcase2, "vmalloc_user shared memory; k2u pid already in a group; (1) spg_id == SPG_ID_DEFAULT: expected to succeed. (2) spg_id == SPG_ID_NONE: expected to fail with EINVAL.") // single only
+	TESTCASE_CHILD_MANUAL(testcase3, "vmalloc_huge_user shared memory; (1) k2u pid already in a group; spg_id within the valid range but unused. Expected to fail.") // single only
+	TESTCASE_CHILD(testcase4, "vmalloc_user shared memory; (1) k2u kva does not exist: expected to fail. (2) k2u size exceeds the allocation, or size is 0: expected to fail.")
+	TESTCASE_CHILD(testcase5, "vmalloc_huge_user shared memory; (1) k2u pid not in a group; spg_id == SPG_ID_NONE: expected to succeed. (2) k2u kva and size unaligned: expected to succeed. (3) k2u sp_flags = SP_DVPP: expected to succeed.")
+	TESTCASE_CHILD(testcase6, "kva is 0")
+	TESTCASE_CHILD(testcase7, "k2spg invalid flag")
+	TESTCASE_CHILD(testcase8, "k2task invalid flag")
+	TESTCASE_CHILD(testcase9, "deprecated testcase: k2spg invalid numa node flag")
+	TESTCASE_CHILD(testcase10, "deprecated testcase: k2task invalid numa node flag")
+	TESTCASE_CHILD(testcase11, "use kmalloc memory for k2u")
+	TESTCASE_CHILD(testcase12, "BUG test")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u2.c b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u2.c
new file mode 100644
index 000000000000..79bf2ee7c665
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u2.c
@@ -0,0 +1,361 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Thu Jun 03 06:35:49 2021
+ */
+#include <stdio.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Test steps: process A joins a group, the kernel allocates memory, does k2spg, the user writes, the kernel checks, the user calls unshare ---- join the group first, then k2spg
+ * Expected result: the kernel memory check succeeds
+ */
+static int testcase1(bool is_hugepage)
+{
+ int ret = 0, spg_id;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+	// join the group twice
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0) {
+ pr_info("add group failed. ret %d", spg_id);
+ return -1;
+ }
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva) {
+ pr_info("kva null");
+ return -1;
+ }
+
+ uva = wrap_k2u(kva, size, spg_id, 0);
+ if (!uva) {
+ pr_info("k2u failed");
+ ret = -1;
+ goto out_vfree;
+ }
+
+ pr_info("memset to uva 0x%lx", uva);
+ sleep(1);
+	for (int i = 0; i < size; i++) {
+ memset((void *)uva + i, 'a', 1);
+ pr_info("memset success at %dth byte", i);
+ KAREA_ACCESS_CHECK('a', kva + i, 1, out_unshare);
+ pr_info("kva check success at %dth byte", i);
+ }
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+/*
+ * Test steps: the parent joins a group, allocates kernel memory, does k2spg, writes the memory, then forks a child; the child joins the same group, reads (checks) and writes the shared memory,
+ * and the kernel checks ----- k2spg first, then join the group
+ */
+static int testcase2_child(sem_t *sync, unsigned long uva, unsigned long kva, unsigned long size)
+{
+ int ret = 0, i;
+
+ SEM_WAIT(sync);
+
+ for (i = 0; i < size; i++)
+ if (((char *)uva)[i] != 'a') {
+ pr_info("buf check failed, i:%d, value:%d", i, ((char *)uva)[i]);
+ return -1;
+ }
+
+ memset((void *)uva, 'b', size);
+ KAREA_ACCESS_CHECK('b', kva, size, out);
+
+out:
+ return ret;
+}
+
+static int testcase2(bool is_hugepage)
+{
+ pid_t pid;
+ int ret = 0, spg_id;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+ sem_t *sync;
+
+ SEM_INIT(sync, (int)is_hugepage);
+
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+	// join the group twice
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva)
+ return -1;
+
+ uva = wrap_k2u(kva, size, spg_id, 0);
+ if (!uva) {
+ ret = -1;
+ goto out_vfree;
+ }
+
+ memset((void *)uva, 'a', size);
+
+ FORK_CHILD_ARGS(pid, testcase2_child(sync, uva, kva, size));
+ ret = wrap_add_group(pid, PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0 || ret != spg_id) {
+ pr_info("add child to group %d failed, ret: %d, errno: %d", spg_id, ret, errno);
+ ret = -1;
+ KILL_CHILD(pid);
+ goto out_unshare;
+ } else
+ ret = 0;
+
+ sem_post(sync);
+
+ WAIT_CHILD_STATUS(pid, out_unshare);
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+/*
+ * Test steps: the parent joins a group and allocates kernel memory, then forks a child; the child joins the same group with read-only permission; the parent performs k2spg and the shared memory is written ---- join the group first, then k2spg
+ * Expected result: the user write fails and triggers a segmentation fault
+ */
+static int testcase3_child(sem_t *sync, unsigned long *puva, unsigned long size)
+{
+ int ret;
+ SEM_WAIT(sync);
+
+ memset((void *)(*puva), 'a', size);
+
+	// unreachable: the write above should trigger SIGSEGV
+	pr_info("ERROR!! unreachable statement reached");
+
+ return -1;
+}
+
+static int testcase3(bool is_hugepage)
+{
+ pid_t pid;
+ int ret = 0, spg_id;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+ unsigned long *puva;
+ sem_t *sync;
+
+ SEM_INIT(sync, (int)is_hugepage);
+
+ puva = mmap(NULL, sizeof(*puva), PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+ if (puva == MAP_FAILED) {
+ pr_info("map failed");
+ return -1;
+ }
+
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+	// join the group twice
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva)
+ return -1;
+
+ FORK_CHILD_ARGS(pid, testcase3_child(sync, puva, size));
+ ret = wrap_add_group(pid, PROT_READ, spg_id);
+ if (ret < 0 || ret != spg_id) {
+ pr_info("add child to group %d failed, ret: %d, errno: %d", spg_id, ret, errno);
+ ret = -1;
+ KILL_CHILD(pid);
+ goto out_vfree;
+ } else
+ ret = 0;
+
+ uva = wrap_k2u(kva, size, spg_id, 0);
+ if (!uva) {
+ pr_info("k2u failed");
+ ret = -1;
+ KILL_CHILD(pid);
+ goto out_vfree;
+ }
+ *puva = uva;
+
+ sem_post(sync);
+
+ WAIT_CHILD_SIGNAL(pid, SIGSEGV, out_unshare);
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+/*
+ * Test steps: the parent joins a group, allocates kernel memory, and does k2spg, then forks a child; the child joins the same group with read-only permission and writes the shared memory ----- k2spg first, then join the group
+ * Expected result: the user write fails and triggers a segmentation fault
+ */
+static int testcase4_child(sem_t *sync, unsigned long uva, unsigned long size)
+{
+ int ret;
+ SEM_WAIT(sync);
+
+ memset((void *)uva, 'a', size);
+
+	// unreachable: the write above should trigger SIGSEGV
+	pr_info("ERROR!! unreachable statement reached");
+
+ return -1;
+}
+
+static int testcase4(bool is_hugepage)
+{
+ pid_t pid;
+ int ret = 0, spg_id;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+ sem_t *sync;
+
+ SEM_INIT(sync, (int)is_hugepage);
+
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+	// join the group twice
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva)
+ return -1;
+
+ uva = wrap_k2u(kva, size, spg_id, 0);
+ if (!uva) {
+ ret = -1;
+ goto out_vfree;
+ }
+
+ FORK_CHILD_ARGS(pid, testcase4_child(sync, uva, size));
+ ret = wrap_add_group(pid, PROT_READ, spg_id);
+ if (ret < 0 || ret != spg_id) {
+ pr_info("add child to group %d failed, ret: %d, errno: %d", spg_id, ret, errno);
+ ret = -1;
+ KILL_CHILD(pid);
+ goto out_unshare;
+ } else
+ ret = 0;
+
+ sem_post(sync);
+
+ WAIT_CHILD_SIGNAL(pid, SIGSEGV, out_unshare);
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+/*
+ * Test steps: the kernel allocates memory and does k2task; the user writes, the kernel checks, the kernel rewrites, the user checks, the user unshares
+ * Expected result: the kernel memory check succeeds, the user check succeeds, and no other exception occurs
+ */
+static int testcase5(bool is_hugepage)
+{
+ int ret = 0, i;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva)
+ return -1;
+
+ uva = wrap_k2u(kva, size, SPG_ID_DEFAULT, 0);
+ if (!uva) {
+ ret = -1;
+ goto out_vfree;
+ }
+
+ memset((void *)uva, 'a', size);
+ KAREA_ACCESS_CHECK('a', kva, size, out_unshare);
+ KAREA_ACCESS_SET('b', kva, size, out_unshare);
+
+ for (i = 0; i < size; i++)
+ if (((char *)uva)[i] != 'b') {
+ pr_info("buf check failed, i:%d, val:%d", i, ((char *)uva)[i]);
+ ret = -1;
+ break;
+ }
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+static int test1(void) { return testcase1(false); }
+static int test2(void) { return testcase1(true); }
+static int test3(void) { return testcase2(false); }
+static int test4(void) { return testcase2(true); }
+static int test5(void) { return testcase3(false); }
+static int test6(void) { return testcase3(true); }
+static int test7(void) { return testcase4(false); }
+static int test8(void) { return testcase4(true); }
+static int test9(void) { return testcase5(false); }
+static int test10(void) { return testcase5(true); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(test1, "process A joins a group; the kernel allocates small pages and does k2spg; the user writes, the kernel checks, the user calls unshare. Expected result: the kernel memory check succeeds")
+	TESTCASE_CHILD(test2, "process A joins a group; the kernel allocates huge pages and does k2spg; the user writes, the kernel checks, the user calls unshare. Expected result: the kernel memory check succeeds")
+	TESTCASE_CHILD(test3, "the parent joins a group, allocates kernel small pages, does k2spg and writes; the child joins the same group, reads (checks) and writes the shared memory, and the kernel checks ----- k2spg first, then join the group")
+	TESTCASE_CHILD(test4, "the parent joins a group, allocates kernel huge pages, does k2spg and writes; the child joins the same group, reads (checks) and writes the shared memory, and the kernel checks ----- k2spg first, then join the group")
+	TESTCASE_CHILD(test5, "the parent joins a group and allocates kernel small pages, then forks a child; the child joins the same group with read-only permission; the parent performs k2spg and the shared memory is written ---- join the group first, then k2spg. Expected result: the user write fails and triggers a segmentation fault")
+	TESTCASE_CHILD(test6, "the parent joins a group and allocates kernel huge pages, then forks a child; the child joins the same group with read-only permission; the parent performs k2spg and the shared memory is written ---- join the group first, then k2spg. Expected result: the user write fails and triggers a segmentation fault")
+	TESTCASE_CHILD(test7, "the parent joins a group, allocates kernel small pages, and does k2spg, then forks a child; the child joins the same group with read-only permission and writes the shared memory ----- k2spg first, then join the group. Expected result: the user write fails and triggers a segmentation fault")
+	TESTCASE_CHILD(test8, "the parent joins a group, allocates kernel huge pages, and does k2spg, then forks a child; the child joins the same group with read-only permission and writes the shared memory ----- k2spg first, then join the group. Expected result: the user write fails and triggers a segmentation fault")
+	TESTCASE_CHILD(test9, "the kernel allocates small pages and does k2task; the user writes, the kernel checks, the kernel rewrites, the user checks, the user unshares. Expected result: kernel check and user check both succeed with no other exception")
+	TESTCASE_CHILD(test10, "the kernel allocates huge pages and does k2task; the user writes, the kernel checks, the kernel rewrites, the user checks, the user unshares. Expected result: kernel check and user check both succeed with no other exception")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/Makefile b/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/test_sp_make_share_u2k.c b/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/test_sp_make_share_u2k.c
new file mode 100644
index 000000000000..828b6bf4691a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/test_sp_make_share_u2k.c
@@ -0,0 +1,307 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 10:45:21 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+#define LARGE_PAGE_NUM 100000
+
+/*
+ * testcase1: map a user address to the kernel; all parameters valid. Expected to succeed.
+ * testcase2: map a user address to the kernel; invalid pid. Expected to fail.
+ * testcase3: map a user address to the kernel; invalid uva (never allocated, within the sp range). Expected to fail.
+ * testcase4: map a user address to the kernel; size out of range. Expected to fail.
+ * testcase5: map a user address to the kernel; size is 0, or uva/size unaligned. Expected to succeed.
+ * testcase6: allocate hugepage memory in user space with valid parameters, map it to the kernel, and let the kernel check it.
+ */
+
+static int prepare(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info->addr, 'q', alloc_info->size);
+ return ret;
+}
+
+static int cleanup(struct sp_make_share_info *u2k_info, struct sp_alloc_info *alloc_info)
+{
+ int ret = 0;
+ if (u2k_info != NULL) {
+ ret = ioctl_unshare(dev_fd, u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ if (alloc_info != NULL) {
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("testcase1 ioctl_u2k failed, errno: %d", errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ }
+
+ return cleanup(&u2k_info, &alloc_info);
+}
+
+static int testcase2(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = 0,
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret != 0 && errno == ESRCH) {
+ //pr_info("testcase2 ioctl_u2k failed as expected, errno: %d", errno);
+ } else if (ret != 0) {
+ pr_info("testcase2 ioctl_u2k failed unexpected, errno: %d", errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ } else {
+ pr_info("testcase2 ioctl_u2k success unexpected");
+ cleanup(&u2k_info, &alloc_info);
+ return -1;
+ }
+
+ return cleanup(NULL, &alloc_info);
+}
+
+static int testcase3(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ if (cleanup(NULL, &alloc_info) != 0) {
+ return -1;
+ }
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret != 0 && errno == EFAULT) {
+ //pr_info("testcase3 ioctl_u2k failed as expected, errno: %d", errno);
+ return 0;
+ } else if (ret != 0) {
+ pr_info("testcase3 ioctl_u2k failed unexpected, errno: %d", errno);
+ return ret;
+ } else {
+ pr_info("testcase3 ioctl_u2k success unexpected");
+ cleanup(&u2k_info, NULL);
+ return -1;
+ }
+}
+
+static int testcase4(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_infos[] = {
+ {
+ .uva = alloc_info.addr,
+ .size = LARGE_PAGE_NUM * PAGE_SIZE,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(u2k_infos) / sizeof(u2k_infos[0]); i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_infos[i]);
+ if (ret != 0 && errno == EFAULT) {
+ //pr_info("testcase4 ioctl_u2k %d failed as expected, errno: %d", i, errno);
+ } else if (ret != 0) {
+ pr_info("testcase4 ioctl_u2k %d failed unexpected, errno: %d", i, errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ } else {
+ pr_info("testcase4 ioctl_u2k %d success unexpected", i);
+ cleanup(&u2k_infos[i], &alloc_info);
+ return -1;
+ }
+ }
+
+ return cleanup(NULL, &alloc_info);
+}
+
+static int testcase5(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_infos[] = {
+ {
+ .uva = alloc_info.addr,
+ .size = 0,
+ .pid = getpid(),
+ },
+ {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size - 1,
+ .pid = getpid(),
+ },
+ {
+ .uva = alloc_info.addr + 1,
+ .size = alloc_info.size - 1,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(u2k_infos) / sizeof(u2k_infos[0]); i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_infos[i]);
+ if (ret != 0) {
+ pr_info("testcase5 ioctl_u2k %d failed unexpected, errno: %d", i, errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ } else {
+ //pr_info("testcase5 ioctl_u2k %d success expected", i);
+ if (cleanup(&u2k_infos[i], NULL) != 0) {
+ return -1;
+ }
+ }
+ }
+ return cleanup(NULL, &alloc_info);
+}
+
+static int testcase6(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .size = PMD_SIZE * 2,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ char *addr = (char *)alloc_info.addr;
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ .u2k_hugepage_checker = true,
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+		pr_info("testcase6 ioctl_u2k failed, errno: %d", errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ }
+
+ return cleanup(&u2k_info, &alloc_info);
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "map a user address to the kernel; all parameters valid. Expected to succeed.")
+	TESTCASE_STUB(testcase2, "map a user address to the kernel; invalid pid. Expected to fail.")
+	TESTCASE_CHILD(testcase3, "map a user address to the kernel; invalid uva (never allocated, within the sp range). Expected to fail.")
+	TESTCASE_CHILD(testcase4, "map a user address to the kernel; size out of range. Expected to fail.")
+	TESTCASE_CHILD(testcase5, "map a user address to the kernel; size is 0, or uva/size unaligned. Expected to succeed.")
+	TESTCASE_CHILD(testcase6, "allocate hugepage memory in user space with valid parameters, map it to the kernel, and let the kernel check it.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_numa_maps/Makefile b/tools/testing/sharepool/testcase/api_test/sp_numa_maps/Makefile
new file mode 100644
index 000000000000..b640edb83546
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_numa_maps/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+# install: $(testcases)
+# cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_numa_maps/test_sp_numa_maps.c b/tools/testing/sharepool/testcase/api_test/sp_numa_maps/test_sp_numa_maps.c
new file mode 100644
index 000000000000..42cedd4457e7
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_numa_maps/test_sp_numa_maps.c
@@ -0,0 +1,164 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2026-2026. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ unsigned long node_id = 0;
+ if (addgroup() != 0) {
+ return -1;
+ }
+
+ /* normal */
+ struct sp_alloc_info alloc_info = {
+ .flag = (node_id << 36UL) | SP_SPEC_NODE_ID,
+ .size = 100 * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed1, errno: %d", errno);
+ return ret;
+ }
+
+ node_id = 1;
+
+ struct sp_alloc_info alloc_info2 = {
+ .flag = (node_id << 36) | SP_SPEC_NODE_ID,
+ .size = 200 * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info2);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed2, errno: %d", errno);
+ return ret;
+ }
+
+ /* hugetlb */
+ node_id = 2;
+
+ struct sp_alloc_info alloc_info3 = {
+ .flag = (node_id << 36) | SP_SPEC_NODE_ID | SP_HUGEPAGE,
+ .size = 10 * PMD_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info3);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed3, errno: %d", errno);
+ return ret;
+ }
+
+
+ /* remote */
+ // struct sp_add_group_info ag_info = {
+ // .pid = getpid(),
+ // .prot = PROT_READ | PROT_WRITE,
+ // .spg_id = 20,
+ // };
+ // ret = ioctl_add_group(dev_fd, &ag_info);
+ // if (ret < 0) {
+ // pr_info("ioctl_add_group failed, errno: %d", errno);
+ // }
+
+ // struct register_remote_range_struct info = {
+ // .spg_id = 20,
+ // .va = 0xe8b000000000,
+ // .pa = 0x1ff0000000,
+ // .size = 8 * 1024 *1024, // 8M
+ // };
+
+ // ret = ioctl_register_remote_range(dev_fd, &info);
+ // if (ret != 0 && errno == ENOMEM) {
+ // printf("ioctl_register_remote_range failed, ret: %d\n", ret);
+ // return -1;
+ // } else if (ret != 0) {
+ // printf("ioctl_register_remote_range failed, ret: %d\n", ret);
+ // return -1;
+ // }
+
+
+ /* k2u */
+ struct vmalloc_info vmalloc_info = {
+ .size = 20 * PMD_SIZE,
+ };
+
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = SP_DVPP,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ return -1;
+ } else {
+ pr_info("k2u success, addr: %#lx", k2u_info.addr);
+ }
+
+ /* Manually press Ctrl + Z, then cat /proc/sharepool/proc_stat */
+ sleep(10);
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Ctrl + Z, and then `cat /proc/sharepool/proc_stat` to show numa maps; expected: N0: 400, N2: 20480; REMOTE: 8192")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/Makefile b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg.c b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg.c
new file mode 100644
index 000000000000..ac06e14e1e64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg.c
@@ -0,0 +1,44 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ int ret;
+
+ ret = ioctl_hpage_reg_test_suite(dev_fd, (void *)1);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_hpage_reg_test_suite failed, ret: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "argument validity checks & first registration succeeds, the second fails")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_after_alloc.c b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_after_alloc.c
new file mode 100644
index 000000000000..797c1595240a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_after_alloc.c
@@ -0,0 +1,84 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define PAGE_NUM 100
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_hpage_reg_after_alloc(dev_fd, (void *)1);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_hpage_reg_after_alloc failed, ret: %d\n", ret);
+ return ret;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+/* testcase1: allocate memory first, then register */
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "allocate memory first, then register")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_exec.c b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_exec.c
new file mode 100644
index 000000000000..e88d83688149
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_exec.c
@@ -0,0 +1,82 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+
+static int testcase1(void)
+{
+ int ret;
+
+ ret = ioctl_hpage_reg_test_exec(dev_fd, (void *)1);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_hpage_reg_test_exec failed, ret: %d\n", ret);
+ return ret;
+ }
+
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .size = PMD_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != -ENOMEM) {
+ pr_info("testcase1 ioctl_alloc unexpected, ret: %d, errno: %d", ret, errno);
+ return ret;
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "after the hugepage alloc hook is registered it can be reached; expected dmesg output: test_alloc_hugepage: execute succ.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_unshare/Makefile b/tools/testing/sharepool/testcase/api_test/sp_unshare/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_unshare/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_unshare/test_sp_unshare.c b/tools/testing/sharepool/testcase/api_test/sp_unshare/test_sp_unshare.c
new file mode 100644
index 000000000000..823060626fd9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_unshare/test_sp_unshare.c
@@ -0,0 +1,394 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 10:45:21 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: after k2u, unshare the user-space range with a size beyond the allocation, or an invalid spg_id. Expected to fail.
+ * testcase2: after k2u, unshare the user-space range with unaligned va/size, a size of 0, or an invalid pid. Expected to succeed.
+ * testcase3: after u2k, unshare the kernel-space range with a size of 0 or beyond the allocation, invalid pid and spg_id, or unaligned va/size. Expected to succeed.
+ */
+
+static int prepare_kva(struct vmalloc_info *ka_info)
+{
+ int ret;
+ ret = ioctl_vmalloc(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = ka_info->addr,
+ .size = ka_info->size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, ka_info);
+ }
+ return ret;
+}
+
+static int prepare_uva(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info->addr, 'q', alloc_info->size);
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare_kva(&ka_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("testcase1 ioctl_k2u failed unexpected, errno: %d", errno);
+ goto out2;
+ }
+ struct sp_make_share_info back_k2u_info = k2u_info;
+
+ struct sp_make_share_info unshare_info[] = {
+ {
+ .size = ka_info.size * 2,
+ .spg_id = 20,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(unshare_info) / sizeof(unshare_info[0]); i++) {
+ k2u_info.size = unshare_info[i].size;
+ k2u_info.spg_id = unshare_info[i].spg_id;
+ k2u_info.pid = unshare_info[i].pid;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret != 0 && errno == EINVAL) {
+ pr_info("testcase1 ioctl_unshare %d failed expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase1 ioctl_unshare %d failed unexpected, errno: %d", i, errno);
+ goto out1;
+ } else {
+ pr_info("testcase1 ioctl_unshare %d success unexpected", i);
+ ret = -1;
+ goto out2;
+ }
+ }
+
+out1:
+ if (ioctl_unshare(dev_fd, &back_k2u_info) != 0) {
+ pr_info("testcase1 cleanup ioctl_unshare failed unexpected");
+ return -1;
+ }
+out2:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare_kva(&ka_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ struct sp_make_share_info back_k2u_info1 = k2u_info;
+
+ struct sp_make_share_info unshare_info[] = {
+ {
+ .addr = 1,
+ .size = ka_info.size - 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = ka_info.size - 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = 0,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = ka_info.size - 1,
+ .pid = 0,
+ },
+ };
+
+ struct sp_make_share_info back_k2u_info2;
+
+ for (int i = 0; i < sizeof(unshare_info) / sizeof(unshare_info[0]); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("testcase2 ioctl_k2u failed unexpected, errno: %d", errno);
+ goto out2;
+ }
+ back_k2u_info2 = k2u_info;
+ k2u_info.addr += unshare_info[i].addr;
+ k2u_info.size = unshare_info[i].size;
+ k2u_info.pid = unshare_info[i].pid;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret != 0) {
+ pr_info("testcase2 ioctl_unshare %d failed unexpected, errno: %d", i, errno);
+ goto out1;
+ } else {
+ pr_info("testcase2 ioctl_unshare %d success expected", i);
+ }
+ k2u_info = back_k2u_info1;
+ }
+ goto out2;
+
+out1:
+ if (ioctl_unshare(dev_fd, &back_k2u_info2) != 0) {
+ pr_info("testcase2 cleanup ioctl_unshare failed unexpected");
+ return -1;
+ }
+out2:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+/*
+static int testcase3(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare_uva(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("testcase3 ioctl_u2k failed unexpected, errno: %d", errno);
+ goto out2;
+ }
+ struct sp_make_share_info back_u2k_info = u2k_info;
+
+ struct sp_make_share_info unshare_info[] = {
+ {
+ .size = 0,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .size = alloc_info.size * 2,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .size = alloc_info.size,
+ .spg_id = 1,
+ .pid = 0,
+ },
+ {
+ .size = alloc_info.size,
+ .spg_id = SPG_ID_AUTO_MIN,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(unshare_info) / sizeof(unshare_info[0]); i++) {
+ u2k_info.size = unshare_info[i].size;
+ u2k_info.spg_id = unshare_info[i].spg_id;
+ u2k_info.pid = unshare_info[i].pid;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret != 0 && errno == EINVAL) {
+ pr_info("testcase3 ioctl_unshare %d failed expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase3 ioctl_unshare %d failed unexpected, errno: %d", i, errno);
+ goto out1;
+ } else {
+ pr_info("testcase3 ioctl_unshare %d success unexpected", i);
+ ret = -1;
+ goto out2;
+ }
+ }
+
+out1:
+ if (ioctl_unshare(dev_fd, &back_u2k_info) != 0) {
+ pr_info("testcase3 cleanup ioctl_unshare failed unexpected");
+ return -1;
+ }
+out2:
+ if (ioctl_free(dev_fd, &alloc_info) != 0) {
+ pr_info("testcase3 cleanup ioctl_free failed unexpected");
+ return -1;
+ }
+ return ret;
+}
+*/
+
+static int testcase3(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare_uva(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ struct sp_make_share_info back_u2k_info1 = u2k_info;
+
+ struct sp_make_share_info unshare_info[] = {
+ {
+ .addr = 1,
+ .size = alloc_info.size - 1,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = alloc_info.size - 1,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = 0,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = alloc_info.size / 2,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = alloc_info.size,
+ .spg_id = 1,
+ .pid = 0,
+ },
+ {
+ .addr = 0,
+ .size = alloc_info.size,
+ .spg_id = SPG_ID_AUTO_MIN,
+ .pid = getpid(),
+ },
+ };
+
+ struct sp_make_share_info back_u2k_info2;
+
+ for (int i = 0; i < sizeof(unshare_info) / sizeof(unshare_info[0]); i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("testcase3 ioctl_u2k failed unexpected, errno: %d", errno);
+ goto out2;
+ }
+ back_u2k_info2 = u2k_info;
+ u2k_info.addr += unshare_info[i].addr;
+ u2k_info.size = unshare_info[i].size;
+ u2k_info.spg_id = unshare_info[i].spg_id;
+ u2k_info.pid = unshare_info[i].pid;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret != 0) {
+ pr_info("testcase3 ioctl_unshare %d failed unexpected, errno: %d", i, errno);
+ goto out1;
+ } else {
+ pr_info("testcase3 ioctl_unshare %d success expected", i);
+ }
+ u2k_info = back_u2k_info1;
+ }
+ goto out2;
+
+out1:
+ if (ioctl_unshare(dev_fd, &back_u2k_info2) != 0) {
+ pr_info("testcase3 cleanup ioctl_unshare failed unexpected");
+ return -1;
+ }
+out2:
+ if (ioctl_free(dev_fd, &alloc_info) != 0) {
+ pr_info("testcase3 cleanup ioctl_free failed unexpected");
+ return -1;
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "after k2u, unshare the user-space range with a size beyond the allocation, or an invalid spg_id; expected to fail")
+ TESTCASE_CHILD(testcase2, "after k2u, unshare the user-space range with unaligned va/size, a size of 0, or an invalid pid; expected to succeed")
+ TESTCASE_CHILD(testcase3, "after u2k, unshare the kernel-space range with a size of 0 or beyond the allocation, invalid pid and spg_id, or unaligned va/size; expected to succeed")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/Makefile b/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/test_sp_walk_page_range_and_free.c b/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/test_sp_walk_page_range_and_free.c
new file mode 100644
index 000000000000..7669dc57cd83
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/test_sp_walk_page_range_and_free.c
@@ -0,0 +1,339 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Nov 28 08:13:17 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include "sharepool_lib.h"
+
+
+
+static int child(struct sp_alloc_info *alloc_info)
+{
+ int ret = 0;
+ int group_id = alloc_info->spg_id;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ };
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (ret < 0) {
+ pr_info("ioctl_walk_page_range failed, errno: %d", errno);
+ return -1;
+ }
+
+ ioctl_walk_page_free(dev_fd, &wpr_info);
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .spg_id = 10,
+ .size = 10 * PAGE_SIZE,
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .spg_id = 12,
+ .size = 10 * PMD_SIZE,
+ },
+ {
+ .flag = SP_DVPP,
+ .spg_id = 19,
+ .size = 100000,
+ },
+ {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = 19,
+ .size = 10000000,
+ },
+ };
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(alloc_infos + i));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase1 failed!!, i: %d", i);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+/* invalid uva */
+static int testcase2(void)
+{
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = 0xe800000000,
+ .size = 1000,
+ };
+ int ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* size is 0, or size exceeds the limit */
+static int testcase3(void)
+{
+ int ret = 0;
+ int group_id = 100;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = 12345,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size * 10,
+ };
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* size exceeds the limit; the range contains unmapped physical pages */
+static int testcase4(void)
+{
+ int ret = 0;
+ int group_id = 130;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .spg_id = group_id,
+ .size = PMD_SIZE * 2,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = alloc_info.addr - PMD_SIZE,
+ .size = alloc_info.size * 10,
+ };
+ pr_info("uva is %lx, size is %lx", wpr_info.uva, wpr_info.size);
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* valid uva, out-of-bounds size */
+static int testcase5(void)
+{
+ unsigned long size = 0xffffffffffffffff;
+ int ret = 0;
+ unsigned long addr;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1) < 0)
+ return -1;
+
+ addr = (unsigned long)wrap_sp_alloc(1, PMD_SIZE, 1);
+ if (addr == -1) {
+ pr_info("alloc failed");
+ return -1;
+ }
+
+ ret = wrap_walk_page_range(addr, size);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* uva and size are both valid, but the vma contains holes with no page tables; expected to fail with no leak */
+static int testcase6(void)
+{
+ int ret = 0;
+ void *addr;
+ unsigned long size = 3 * PAGE_SIZE;
+
+ addr = mmap(NULL, size, PROT_WRITE | PROT_READ,
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ if (addr == MAP_FAILED) {
+ pr_info("mmap failed! errno %d", errno);
+ return -1;
+ }
+
+ /* do not touch the pages yet; walk_page is expected to fail */
+ ret = wrap_walk_page_range(addr, size);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ /* touch the pages, then walk_page the corresponding size again; expected to succeed */
+ size = 2 * PAGE_SIZE;
+ memset(addr, 0, size);
+
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = (unsigned long)addr,
+ .size = size,
+ };
+ pr_info("uva is %lx, size is %lx", wpr_info.uva, wpr_info.size);
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (ret < 0) {
+ pr_info("ioctl_walk_page_range failed unexpected, errno %d", ret);
+ return -1;
+ }
+ ret = ioctl_walk_page_free(dev_fd, &wpr_info);
+ if (ret < 0) {
+ pr_info("ioctl_walk_page_free failed unexpected, errno %d", ret);
+ return -1;
+ }
+
+ return 0;
+}
+
+/* uva and size are both valid, but the vma contains holes with no page tables; expected to fail with no leak */
+static int testcase7(void)
+{
+ int ret = 0;
+ void *addr;
+ unsigned long size = 20 * PMD_SIZE;
+
+ addr = mmap(NULL, size, PROT_WRITE | PROT_READ,
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ if (addr == MAP_FAILED) {
+ pr_info("mmap failed! errno %d", errno);
+ return -1;
+ }
+
+ /* do not touch the pages yet; walk_page is expected to fail */
+ ret = wrap_walk_page_range(addr, size);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ /* touch the pages, then walk_page the corresponding size again; expected to succeed */
+ size = 5 * PMD_SIZE;
+ memset(addr, 0, size);
+ /* walk_page one extra page on top; expected to fail */
+ ret = wrap_walk_page_range(addr, size + PMD_SIZE);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+#define TASK_SIZE 0xffffffffffff
+/* uva at TASK_SIZE - 1; no vma can be found, expected to fail */
+static int testcase8(void)
+{
+ int ret = 0;
+ void *addr = (void *)(TASK_SIZE);
+ unsigned long size = 3 * PAGE_SIZE;
+
+ ret = wrap_walk_page_range(addr, size);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "walk small pages, huge pages, dvpp small pages and dvpp huge pages; expected to succeed")
+ TESTCASE_CHILD(testcase2, "invalid uva; expected to fail")
+ TESTCASE_CHILD(testcase3, "size is 0, or size exceeds the limit; expected to fail")
+// TESTCASE_CHILD(testcase4, "size exceeds the limit and the range contains unmapped physical pages; expected to fail")
+ TESTCASE_CHILD(testcase5, "valid uva, out-of-bounds size; expected to fail")
+ TESTCASE_CHILD(testcase6, "valid uva and size, but the vma has no page tables; expected to fail with no leak; after touching the pages, walking the vma again is expected to succeed")
+ TESTCASE_CHILD(testcase7, "valid uva and size, but the vma has no page tables; expected to fail with no leak; after touching, walking the vma plus one extra page is expected to fail")
+ TESTCASE_CHILD(testcase8, "pass uva = (TASK_SIZE - 1); no vma should be found, expected to fail")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/Makefile b/tools/testing/sharepool/testcase/dts_bugfix_test/Makefile
new file mode 100644
index 000000000000..e5f6e448e506
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/Makefile
@@ -0,0 +1,15 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ mkdir -p $(TOOL_BIN_DIR)/dts_bugfix_test
+ cp $(testcases) $(TOOL_BIN_DIR)/dts_bugfix_test
+ cp dts_bugfix_test.sh $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/dts_bugfix_test.sh b/tools/testing/sharepool/testcase/dts_bugfix_test/dts_bugfix_test.sh
new file mode 100755
index 000000000000..f22ee6d0d0db
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/dts_bugfix_test.sh
@@ -0,0 +1,43 @@
+#!/bin/sh
+
+set -x
+
+echo 'test_01_coredump_k2u_alloc
+ test_02_spg_not_alive
+ test_08_addr_offset' | while read line
+do
+ flag=0
+ ./dts_bugfix_test/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase dts_bugfix_test/$line failed"
+ flag=1
+ fi
+
+ sleep 3
+
+ # dump spa_stat
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+ flag=1
+ fi
+ # dump proc_stat
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+ flag=1
+ fi
+
+ cat /proc/sharepool/proc_overview
+ # exit on leak
+ if [ $flag -eq 1 ] ;then
+ exit 1
+ fi
+ echo "testcase dts_bugfix_test/$line success"
+
+ cat /proc/meminfo
+ free -m
+
+done
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_01_coredump_k2u_alloc.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_01_coredump_k2u_alloc.c
new file mode 100644
index 000000000000..4842fde68720
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_01_coredump_k2u_alloc.c
@@ -0,0 +1,603 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 06:59:45 2020
+ */
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdlib.h> /* rand() and srand() */
+#include <time.h> /* time() */
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define PROC_NUM 128
+#define GROUP_ID 1
+#define K2U_UNSHARE_TIME 2
+#define ALLOC_FREE_TIME 2
+#define VMALLOC_SIZE 4096
+#define PROT (PROT_READ | PROT_WRITE)
+#define GROUP_NUM 4
+#define K2U_CONTINUOUS_TIME 2000
+#define min(a,b) ((a)<(b)?(a):(b))
+
+/* testcase base:
+ * Each group starts one process responsible for k2u/alloc; the other N processes join multiple groups and then coredump one by one. k2u/alloc in every group should succeed every time.
+ * Debug statistics print normally throughout.
+ * After all processes have coredumped the test exits; debug statistics show 0 groups and 0 spas, i.e. no leaks.
+ */
+
+static int semid[PROC_NUM];
+static int sem_task;
+static int group_ids[GROUP_NUM];
+
+struct k2u_args {
+ int with_print;
+ int k2u_whole_times; // repeat times
+ int (*k2u_tsk)(struct k2u_args);
+};
+
+struct task_param {
+ bool with_print;
+};
+
+struct test_setting {
+ int (*task)(struct task_param*);
+ struct task_param *task_param;
+};
+
+static int init_sem();
+static int close_sem();
+static int k2u_unshare_task(struct task_param *task_param);
+static int k2u_continuous_task(struct task_param *task_param);
+static int child_process(int index);
+static int alloc_free_task(struct task_param *task_param);
+static int alloc_continuous_task(struct task_param *task_param);
+static int testcase_combine(int (*task1)(struct task_param*),
+ int (*task2)(struct task_param*), struct task_param *param);
+
+static int testcase_base(struct test_setting test_setting)
+{
+ int status;
+ int pid;
+ int child[PROC_NUM];
+ int ret;
+ int pid_k2u;
+
+ setCore();
+ // initialize semaphores
+ ret = init_sem();
+ if (ret < 0) {
+ pr_info("init sem failed");
+ return -1;
+ }
+
+ // create groups
+ //ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ ret = create_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ return -1;
+ }
+
+ // start the functional process responsible for k2u or alloc
+ pid_k2u = fork();
+ if (pid_k2u < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_k2u == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+ pr_info("functional process add group success.");
+ }
+ exit(test_setting.task(test_setting.task_param));
+ }
+
+ // start child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, deleting procs...");
+ goto delete_procs;
+ } else if (pid == 0) {
+ // 拉起子进程 hanging
+ exit(child_process(i));
+ } else {
+ child[i] = pid;
+ //ret = wrap_add_group(pid, PROT, GROUP_ID);
+ ret = add_multi_groups(pid, GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", pid);
+ goto delete_procs;
+ }
+ }
+ }
+
+ // make the child processes coredump one by one
+ for (int i = 0; i < PROC_NUM; i++) {
+ pr_info("coredump process %d", child[i]);
+ sem_inc_by_one(semid[i]);
+ waitpid(child[i], &status, 0);
+ usleep(200000);
+ }
+
+ // functional process exits
+ sem_inc_by_one(sem_task);
+ waitpid(pid_k2u, &status, 0);
+
+ close_sem();
+ return 0;
+
+delete_procs:
+ return -1;
+}
+
+static int testcase_combine(int (*task1)(struct task_param*),
+ int (*task2)(struct task_param*), struct task_param *param)
+{
+ int status;
+ int pid;
+ int child[PROC_NUM];
+ int ret;
+ int pid_k2u, pid_alloc;
+
+ setCore();
+ // initialize semaphores
+ ret = init_sem();
+ if (ret < 0) {
+ pr_info("init sem failed");
+ return -1;
+ }
+
+ // create groups
+ //ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ ret = create_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ return -1;
+ }
+
+ // start the functional process responsible for k2u or alloc
+ pid_k2u = fork();
+ if (pid_k2u < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_k2u == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+ pr_info("functional process add group success.");
+ }
+ exit(task1(param));
+ }
+
+ pid_alloc = fork();
+ if (pid_alloc < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_alloc == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+ pr_info("functional process add group success.");
+ }
+ exit(task2(param));
+ }
+
+	// Spawn the child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, deleting procs...");
+ goto delete_procs;
+ } else if (pid == 0) {
+			// Child: block until told to coredump
+ exit(child_process(i));
+ } else {
+ child[i] = pid;
+ //ret = wrap_add_group(pid, PROT, GROUP_ID);
+ ret = add_multi_groups(pid, GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", pid);
+ goto delete_procs;
+ }
+ }
+ }
+
+	// Make the child processes coredump one by one
+ for (int i = 0; i < PROC_NUM; i++) {
+ pr_info("coredump process %d", child[i]);
+ sem_inc_by_one(semid[i]);
+ waitpid(child[i], &status, 0);
+ usleep(200000);
+ }
+
+	// Let both functional processes exit
+ sem_inc_by_val(sem_task, 2);
+ waitpid(pid_k2u, &status, 0);
+ waitpid(pid_alloc, &status, 0);
+
+ close_sem();
+ return 0;
+
+delete_procs:
+ return -1;
+}
+
+static struct task_param task_param_table[] = {
+ {
+ .with_print = false,
+ },
+ {
+ .with_print = true,
+ },
+ {
+ .with_print = false,
+ },
+ {
+ .with_print = false,
+ },
+};
+
+static struct test_setting test_setting_table[] = {
+ {
+ .task_param = &task_param_table[0],
+ .task = k2u_unshare_task,
+ },
+ {
+ .task_param = &task_param_table[1],
+ .task = k2u_unshare_task,
+ },
+ {
+ .task_param = &task_param_table[2],
+ .task = k2u_continuous_task,
+ },
+ {
+ .task_param = &task_param_table[3],
+ .task = alloc_free_task,
+ },
+ {
+ .task_param = &task_param_table[3],
+ .task = alloc_continuous_task,
+ },
+ {
+ .task_param = &task_param_table[1],
+ .task = alloc_free_task,
+ },
+};
+
+/* testcase1
+ * run k2u_unshare_task
+ */
+static int testcase1(void)
+{
+ return testcase_base(test_setting_table[0]);
+}
+/* testcase2
+ * run k2u_unshare_task while printing debug info
+ */
+static int testcase2(void)
+{
+ return testcase_base(test_setting_table[1]);
+}
+/* testcase3
+ * run k2u_continuous_task
+ */
+static int testcase3(void)
+{
+ return testcase_base(test_setting_table[2]);
+}
+/* testcase4
+ * run alloc_free_task
+ */
+static int testcase4(void)
+{
+ return testcase_base(test_setting_table[3]);
+}
+/* testcase5
+ * run alloc_continuous_task
+ */
+static int testcase5(void)
+{
+ return testcase_base(test_setting_table[4]);
+}
+/* testcase6
+ * run alloc_free_task while printing debug info
+ */
+static int testcase6(void)
+{
+ return testcase_base(test_setting_table[5]);
+}
+/* testcase7
+ * run k2u_continuous_task and alloc_continuous_task together while printing debug info
+ */
+static int testcase7(void)
+{
+ return testcase_combine(k2u_continuous_task, alloc_continuous_task, &task_param_table[0]);
+}
+
+static int close_sem()
+{
+ int ret;
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ ret = sem_close(semid[i]);
+ if (ret < 0) {
+ pr_info("sem close failed");
+ return ret;
+ }
+ }
+ sem_close(sem_task);
+ pr_info("all sems deleted.");
+ return 0;
+}
+
+static int init_sem()
+{
+ int i = 0;
+
+	sem_task = sem_create(PROC_NUM, "sem_task");
+	if (sem_task < 0) {
+		pr_info("sem_task init failed. errno: %d", errno);
+		return -1;
+	}
+
+ for (i = 0; i < PROC_NUM; i++) {
+ key_t key = i;
+ semid[i] = sem_create(key, "sem_child");
+ if (semid[i] < 0) {
+ pr_info("semid %d init failed. errno: %d", i, errno);
+ goto delete_sems;
+ }
+ }
+ pr_info("all sems initialized.");
+ return 0;
+
+delete_sems:
+ for (int j = 0; j < i; j++) {
+ sem_close(semid[j]);
+ }
+ return -1;
+}
+
+static int child_process(int index)
+{
+ pr_info("child process %d created", getpid());
+	// Coredump after receiving the coredump signal
+ sem_dec_by_one(semid[index]);
+ pr_info("child process %d coredump", getpid());
+ generateCoredump();
+ return 0;
+}
+
+/* k2u_unshare_task
+ * a tight k2u -> unshare -> k2u -> unshare loop
+ */
+static int k2u_unshare_task(struct task_param *task_param)
+{
+ int ret;
+ int i;
+ struct vmalloc_info vmalloc_info;
+ struct sp_make_share_info k2u_info;
+ unsigned long uva[K2U_UNSHARE_TIME];
+
+ vmalloc_info.size = VMALLOC_SIZE;
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed.");
+ return -1;
+ } else {
+ pr_info("vmalloc %ld success.", vmalloc_info.size);
+ }
+
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = getpid();
+ k2u_info.spg_id = GROUP_ID;
+
+repeat:
+ memset(uva, 0, sizeof(unsigned long) * K2U_UNSHARE_TIME);
+ for (i = 0; i < K2U_UNSHARE_TIME; i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed at %d time.", i);
+ goto unshare;
+ } else {
+ pr_info("k2u success %d time, addr = %lx", i, k2u_info.addr);
+ uva[i] = k2u_info.addr;
+ }
+ }
+
+ if (task_param->with_print)
+ sharepool_print();
+
+unshare:
+ for (int j = 0; j < i; j++) {
+		pr_info("uva[%d] is %lx", j, uva[j]);
+ k2u_info.addr = uva[j];
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed at %d", j);
+ return -1;
+ }
+ }
+
+ if (sem_get_value(sem_task) == 0)
+ goto repeat;
+
+ ioctl_vfree(dev_fd, &vmalloc_info);
+
+ return 0;
+}
+
+/* k2u_continuous_task
+ * k2u many times in a row, then unshare them all in a row
+ */
+static int k2u_continuous_task(struct task_param *task_param)
+{
+ int ret;
+ int i, h;
+ struct vmalloc_info vmalloc_info;
+ struct sp_make_share_info k2u_info;
+ unsigned long *uva;
+
+ vmalloc_info.size = VMALLOC_SIZE;
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed.");
+ return -1;
+ } else {
+ pr_info("vmalloc %ld success.", vmalloc_info.size);
+ }
+
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = getpid();
+ //k2u_info.spg_id = GROUP_ID;
+
+	uva = malloc(sizeof(unsigned long) * K2U_CONTINUOUS_TIME * GROUP_NUM);
+	if (!uva) {
+		pr_info("malloc failed.");
+		ioctl_vfree(dev_fd, &vmalloc_info);
+		return -1;
+	}
+
+	memset(uva, 0, sizeof(unsigned long) * K2U_CONTINUOUS_TIME * GROUP_NUM);
+ for (i = 0; i < K2U_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ k2u_info.spg_id = group_ids[h];
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed at %d time in group %d.", i, group_ids[h]);
+ goto unshare;
+ } else {
+ pr_info("k2u success %d time, addr = %lx", i, k2u_info.addr);
+ uva[i * GROUP_NUM + h] = k2u_info.addr;
+ }
+ }
+ }
+
+unshare:
+ for (int j = 0; j < min((i * GROUP_NUM + h), K2U_CONTINUOUS_TIME * GROUP_NUM); j++) {
+		pr_info("uva[%d] is %lx", j, uva[j]);
+ k2u_info.addr = uva[j];
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+		if (ret < 0) {
+			pr_info("unshare failed at %d", j);
+			free(uva);
+			return -1;
+		}
+ }
+
+	free(uva);
+	ioctl_vfree(dev_fd, &vmalloc_info);
+
+	return 0;
+}
+
+/* alloc_free_task
+ * a tight alloc -> free -> alloc -> free loop
+ */
+#define ALLOC_SIZE 4096
+#define ALLOC_FLAG 0
+static int alloc_free_task(struct task_param *task_param)
+{
+ int ret = 0;
+ int i, h;
+ unsigned long ret_addr = -1;
+ unsigned long addr[ALLOC_FREE_TIME][GROUP_NUM];
+
+repeat:
+ for (i = 0; i < ALLOC_FREE_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret_addr = wrap_sp_alloc(group_ids[h], ALLOC_SIZE, ALLOC_FLAG);
+			if (ret_addr == (unsigned long)-1) {
+				pr_info("alloc failed %d time %d group", i, h + 1);
+				return -1;
+			} else {
+				addr[i][h] = ret_addr;
+				pr_info("alloc success addr %lx", ret_addr);
+ }
+ }
+ }
+
+ if (task_param->with_print)
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_FREE_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret = wrap_sp_free(addr[i][h]);
+ if (ret < 0) {
+ pr_info("free failed %d time group %d", i, h + 1);
+ return ret;
+ }
+ }
+ }
+
+ if (sem_get_value(sem_task) == 0)
+ goto repeat;
+
+ return ret;
+}
+
+/* alloc_continuous_task
+ * alloc many times in a row, then free them all in a row
+ */
+
+#define ALLOC_CONTINUOUS_TIME 2000
+static int alloc_continuous_task(struct task_param *task_param)
+{
+ int ret = 0;
+ int i, h;
+	unsigned long ret_addr = -1;
+	unsigned long addr[ALLOC_CONTINUOUS_TIME][GROUP_NUM];
+
+	for (i = 0; i < ALLOC_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret_addr = wrap_sp_alloc(group_ids[h], ALLOC_SIZE, ALLOC_FLAG);
+			if (ret_addr == (unsigned long)-1) {
+				pr_info("alloc failed %d time %d group", i, h + 1);
+				return -1;
+			} else {
+				addr[i][h] = ret_addr;
+				pr_info("alloc success addr %lx", ret_addr);
+ }
+ }
+ }
+ for (i = 0; i < ALLOC_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret = wrap_sp_free(addr[i][h]);
+ if (ret < 0) {
+ pr_info("free failed %d time group %d", i, h + 1);
+ return ret;
+ }
+ }
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "1")
+ TESTCASE_CHILD(testcase2, "2")
+ TESTCASE_CHILD(testcase3, "3")
+ TESTCASE_CHILD(testcase4, "4")
+ TESTCASE_CHILD(testcase5, "5")
+ TESTCASE_CHILD(testcase6, "6")
+ TESTCASE_CHILD(testcase7, "7")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_02_spg_not_alive.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_02_spg_not_alive.c
new file mode 100644
index 000000000000..0da0356b34d8
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_02_spg_not_alive.c
@@ -0,0 +1,166 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define GROUP_ID 1
+#define ALLOC_SIZE (4096UL)
+#define TEST_SIZE 10
+#define MAX_RETRY 10
+#define REPEAT 30000
+#define PROT (PROT_READ | PROT_WRITE)
+int semid_add, semid_exit, semid_create;
+int sem1, sem2;
+
+static int fault_verify(void)
+{
+ int ret;
+ int test_pid[TEST_SIZE];
+ int pid;
+
+ ret = wrap_sp_alloc(GROUP_ID, ALLOC_SIZE, 0);
+ if (ret < 0) {
+ printf("fault verify --- alloc failed.\n");
+ }
+
+ for (int i = 0; i < TEST_SIZE; i++) {
+ pid = fork();
+ if (pid == 0) {
+ exit(wrap_add_group(getpid(), PROT, GROUP_ID));
+ }
+ }
+
+ printf("fault is verified!\n");
+ return 0;
+}
+
+static int child_process(int semid)
+{
+ int ret = 0;
+ int spg_id = 0;
+ int retry_time = 0;
+
+ printf("process %d created.\n", getpid());
+
+ sem_dec_by_one(semid);
+retry:
+ ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ if (errno == ENODEV || errno == ENOSPC) {
+ printf("process %d add group failed once, retry...\n", getpid());
+ errno = 0;
+ if (retry_time++ < MAX_RETRY)
+ goto retry;
+ }
+
+ retry_time = 0;
+ if (ret < 0) {
+ printf("process %d add group unexpected error, ret is %d\n",
+ getpid(), ret);
+ sem_dec_by_one(semid);
+ return -1;
+ }
+ printf("process %d add group success\n", getpid());
+
+ errno = 0;
+ spg_id = ioctl_find_first_group(dev_fd, getpid());
+ if (spg_id < 0 && errno == ENODEV) {
+ printf("fault is found.\n");
+ ret = fault_verify();
+ sem_dec_by_one(semid);
+ return -1;
+ }
+ if (spg_id != GROUP_ID) {
+ printf("unexpected find group fault %d\n", spg_id);
+ return -1;
+ }
+
+ sem_dec_by_one(semid);
+ // printf("process %d exit.\n", getpid());
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int status;
+ int sem;
+ pid_t first, prev, current;
+
+ semid_add = sem_create(2234, "spg not alive test add group");
+ semid_exit = sem_create(3234, "spg not alive test exit group");
+ semid_create = sem_create(4234, "create");
+
+ sem1 = sem_create(1234, "sem lock for prev process");
+ sem2 = sem_create(4321, "sem lock for new process");
+
+ sem_inc_by_one(sem1);
+ first = fork();
+ if (first < 0)
+ goto close_sem;
+ else if (first == 0)
+ exit(child_process(sem1));
+ prev = first;
+ sem_check_zero(sem1);
+ sem = sem2;
+
+ for (int i = 0; i < REPEAT; i++) {
+ current = fork();
+ if (current < 0) {
+ printf("fork failed.\n");
+ kill(prev, SIGKILL);
+ ret = -1;
+ goto close_sem;
+ } else if (current == 0) {
+ exit(child_process(sem));
+ }
+
+		sem_inc_by_one(sem1); // 1 exits, 2 joins the group
+		sem_inc_by_one(sem2); // 2 joins the group, 1 exits
+
+ sem_check_zero(sem1);
+ sem_check_zero(sem2);
+
+ waitpid(prev, &status, 0);
+
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("process %d exited unexpectedly, status %d", prev, status);
+ ret = -1;
+ goto end;
+ } else
+ pr_info("process %d exit", prev);
+ prev = current;
+
+ if (sem == sem1) {
+ sem = sem2;
+ } else if (sem == sem2) {
+ sem = sem1;
+ } else {
+ printf("unexpected error: weird sem value: %d\n", sem);
+ goto end;
+ }
+ }
+end:
+ kill(current, SIGKILL);
+ waitpid(current, &status, 0);
+close_sem:
+ sem_close(sem1);
+ sem_close(sem2);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "concurrent group join and group destruction")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_03_hugepage_rsvd.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_03_hugepage_rsvd.c
new file mode 100644
index 000000000000..a442ab98b45c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_03_hugepage_rsvd.c
@@ -0,0 +1,84 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define HP_SIZE (3UL * 1024UL * 1024UL) // 3M
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ int i;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_infos[100];
+
+ for (i = 0; i < 4; i++) {
+ alloc_infos[i].flag = 3;
+ alloc_infos[i].size = HP_SIZE;
+ alloc_infos[i].spg_id = 1;
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0) {
+ pr_info("%dth testcase1 ioctl_alloc failed, errno: %d", i+1, errno);
+ goto out;
+ }
+ }
+
+	while (1); /* hang to keep the allocations alive for observation */
+out:
+ for (int j = 0; j < i; j++) {
+ ret = cleanup(&alloc_infos[j]);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ } else
+ pr_info("free %d success", j+1);
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, true)
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_04_spg_add_del.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_04_spg_add_del.c
new file mode 100644
index 000000000000..f87c7fc4b8f6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_04_spg_add_del.c
@@ -0,0 +1,100 @@
+#include <stdio.h>
+#include <errno.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define PROC_NUM 2
+#define PRINT_NUM 3
+
+/*
+ * testcase1
+ * Test point: a valid pid joins a group
+ * Expected result: the join succeeds and the correct group id is returned
+ */
+
+static int add_del_child(int group_id)
+{
+ int ret = 0;
+
+ pr_info("child %d created", getpid());
+ while (1) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0)
+ pr_info("child %d add group failed, ret %d", getpid(), ret);
+
+ ret = wrap_del_from_group(getpid(), group_id);
+ if (ret < 0)
+ pr_info("child %d del from group failed, ret %d", getpid(), ret);
+ }
+
+ return 0;
+}
+
+static int print_child(void)
+{
+ while (1) {
+ sharepool_print();
+ sleep(2);
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int group_id = 1;
+ pid_t child[PROC_NUM];
+ pid_t printer[PRINT_NUM];
+
+ /*
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0) {
+ pr_info(" add group failed");
+ return -1;
+ }
+ */
+
+ pr_info("test 1");
+ for (int i = 0; i < PROC_NUM; i++)
+ FORK_CHILD_ARGS(child[i], add_del_child(group_id));
+
+ for (int i = 0; i < PRINT_NUM; i++)
+ FORK_CHILD_ARGS(printer[i], print_child());
+
+ sleep(30);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(child[i]);
+
+ for (int i = 0; i < PRINT_NUM; i++)
+ KILL_CHILD(printer[i]);
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "a valid pid joins a group; expected to succeed and return the correct group id")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_05_cgroup_limit.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_05_cgroup_limit.c
new file mode 100644
index 000000000000..7a4a2be9264a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_05_cgroup_limit.c
@@ -0,0 +1,76 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define ALLOC_NUM 2
+
+static int testcase1(void)
+{
+ int ret = 0;
+ void *addr;
+ unsigned long va[ALLOC_NUM];
+ int spg_id = 1;
+ unsigned long size = 1 * 1024UL * 1024UL * 1024UL;
+ int i;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0) {
+ pr_info("add group failed");
+ return -1;
+ }
+
+	// Allocate small pages repeatedly; the allocation that exceeds the cgroup limit is expected to fail; then free all memory
+ for (i = 0; i < ALLOC_NUM; i++) {
+ addr = wrap_sp_alloc(spg_id, size, 0);
+		if (addr == (void *)-1) {
+			pr_info("alloc %d time failed, errno: %d", i + 1, errno);
+			break;
+		} else {
+			pr_info("alloc %d time success, va: 0x%lx", i + 1, (unsigned long)addr);
+			va[i] = (unsigned long)addr;
+		}
+	}
+
+	// Free the memory that was allocated
+	for (i = i - 1; i >= 0; i--) {
+ ret = wrap_sp_free_by_id(va[i], spg_id);
+ if (ret < 0) {
+			pr_info("free %d time failed, ret: %d", i + 1, ret);
+ return -1;
+ } else {
+ pr_info("free %d time success", i + 1);
+ }
+ }
+
+ sleep(1200);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "set the cgroup limit to N GB plus some MB; after the process allocates N times, the (N+1)th allocation is expected to fail; free the allocated memory and hang; observe the remaining memory in the cgroup")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_06_clone.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_06_clone.c
new file mode 100644
index 000000000000..267112eff125
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_06_clone.c
@@ -0,0 +1,176 @@
+#define _GNU_SOURCE
+#include <sys/wait.h>
+#include <sys/utsname.h>
+#include <sched.h>
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+#define STACK_SIZE (1024 * 1024) /* Stack size for cloned child */
+
+/* case 1 */
+static int /* Start function for cloned child */
+childFunc_1(void *arg)
+{
+	printf("child finished\n");
+	return 0;
+}
+
+int testcase1(void)
+{
+ char *stack; /* Start of stack buffer */
+ char *stackTop; /* End of stack buffer */
+ pid_t pid;
+ int ret = 0;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+ if (ret != 1) {
+ printf("Add group failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ /* Allocate memory to be used for the stack of the child. */
+
+	stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
+	if (stack == MAP_FAILED) {
+		printf("mmap failed\n");
+		return -1;
+	}
+	stackTop = stack + STACK_SIZE; /* Assume stack grows downward */
+
+ pid = clone(childFunc_1, stackTop, CLONE_VM, NULL);
+ if (pid == -1)
+ printf("clone failed\n");
+
+ printf("clone() returned %jd\n", (intmax_t) pid);
+
+ return 0;
+}
+
+/* case 2 */
+static volatile int flag_2 = 0;
+static int /* start function for cloned child */
+childFunc_2(void *arg)
+{
+ while(!flag_2) {}
+
+ sleep(5);
+
+	printf("child finished\n");
+	return 0;
+}
+
+int testcase2(void)
+{
+ char *stack; /* start of stack buffer */
+ char *stacktop; /* end of stack buffer */
+ pid_t pid;
+ int ret = 0;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+ if (ret != 1) {
+ printf("Add group [1] failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ /* allocate memory to be used for the stack of the child. */
+	stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
+	if (stack == MAP_FAILED) {
+		printf("mmap failed\n");
+		return -1;
+	}
+	stacktop = stack + STACK_SIZE; /* assume stack grows downward */
+
+ pid = clone(childFunc_2, stacktop, CLONE_VM, NULL);
+ if (pid == -1)
+ printf("clone failed\n");
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 2);
+ if (ret != 2) {
+ printf("Add group [2] failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 3);
+ if (ret != 3) {
+ printf("Add group [3] failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ printf("clone() returned %jd\n", (intmax_t) pid);
+
+ flag_2 = 1;
+ printf("parent finished\n");
+
+ return 0;
+}
+
+/* case 3 */
+static volatile int flag_3 = 0;
+static int /* start function for cloned child */
+childFunc_3(void *arg)
+{
+ int ret = 0;
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 2);
+ if (ret == 2) {
+		printf("add group [2] should have failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 3);
+ if (ret == 3) {
+		printf("add group [3] should have failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ flag_3 = 1;
+
+	printf("child finished\n");
+	return 0;
+}
+
+int testcase3(void)
+{
+ char *stack; /* start of stack buffer */
+ char *stacktop; /* end of stack buffer */
+ pid_t pid;
+ int ret;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+ if (ret != 1) {
+ printf("Add group [1] failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ /* allocate memory to be used for the stack of the child. */
+	stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
+	if (stack == MAP_FAILED) {
+		printf("mmap failed\n");
+		return -1;
+	}
+	stacktop = stack + STACK_SIZE; /* assume stack grows downward */
+
+ pid = clone(childFunc_3, stacktop, CLONE_VM, NULL);
+ if (pid == -1)
+ printf("clone failed\n");
+
+ printf("clone() returned %jd\n", (intmax_t) pid);
+
+ while(!flag_3) {}
+
+ sleep(5);
+
+ printf("parent finished\n");
+
+ return 0;
+}
+
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "after joining a group, the process clones a child with CLONE_VM; parent and child exit normally")
+	TESTCASE_CHILD(testcase2, "after joining a group, the process clones a child with CLONE_VM; after the parent exits, the child joins more groups and exits normally")
+	TESTCASE_CHILD(testcase3, "after joining a group, the process clones a child with CLONE_VM; after the child exits, the parent joins more groups and exits normally")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_08_addr_offset.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_08_addr_offset.c
new file mode 100644
index 000000000000..cd19273fd058
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_08_addr_offset.c
@@ -0,0 +1,156 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 1
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+#define SPG_ID_AUTO 200000
+#define DAVINCI_IOCTL_VA_TO_PA 0xfff9
+#define DVPP_START (0x100000000000UL)
+#define DVPP_SIZE 0x400000000UL
+
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ } else {
+ ret = ag_info.spg_id;
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ int fd;
+ long err;
+ unsigned long phys_addr;
+ int spg_id = 0;
+
+ spg_id = addgroup();
+ if (spg_id <= 0) {
+ pr_info("spgid <= 0, value: %d", spg_id);
+ return -1;
+ } else {
+ pr_info("spg id %d", spg_id);
+ }
+
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_START,
+ .size = DVPP_SIZE,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_config_dvpp_range(dev_fd, &cdr_info);
+ if (ret < 0) {
+ pr_info("dvpp config failed. errno: %d", errno);
+ return ret;
+ } else
+ pr_info("dvpp config success.");
+
+ struct sp_alloc_info alloc_infos[] = {
+ {
+			.flag = 1, // normal hugepage
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = spg_id,
+ },
+ {
+			.flag = 1, // normal hugepage
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = spg_id,
+ },
+ {
+			.flag = 5, // DVPP hugepage
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = spg_id,
+ },
+ {
+			.flag = 1, // normal hugepage
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = spg_id,
+ },
+ {
+			.flag = 5, // DVPP hugepage
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = spg_id,
+ },
+ };
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("alloc success, type: %s, va: %lx",
+ alloc_infos[i].flag == 5 ? "SP_DVPP" : "NORMAL",
+ alloc_infos[i].addr);
+ }
+
+	memset((void *)(alloc_infos[2].addr), 'b', alloc_infos[2].size);
+	memset((void *)(alloc_infos[0].addr), 'a', alloc_infos[0].size);
+ char *p;
+ p = (char *)(alloc_infos[2].addr);
+ if (*p != 'a') {
+ pr_info("pa not same. char: %c", *p);
+ ret = 0;
+	} else {
+ pr_info("pa same. char: %c", *p);
+ ret = -1;
+ }
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++)
+ cleanup(&alloc_infos[i]);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "verify that DVPP file addresses do not overlap with normal alloc file addresses")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_09_spg_del_exit.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_09_spg_del_exit.c
new file mode 100644
index 000000000000..73b3d07992e9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_09_spg_del_exit.c
@@ -0,0 +1,150 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define GROUP_ID 1
+#define ALLOC_SIZE (4096UL)
+#define TEST_SIZE 10
+#define MAX_RETRY 10
+#define REPEAT 30000
+#define PROT (PROT_READ | PROT_WRITE)
+int semid_add, semid_exit, semid_create;
+int sem1, sem2;
+
+static int fault_verify(void)
+{
+ int ret;
+ int test_pid[TEST_SIZE];
+ int pid;
+
+ ret = wrap_sp_alloc(GROUP_ID, ALLOC_SIZE, 0);
+ if (ret < 0) {
+ printf("fault verify --- alloc failed.\n");
+ }
+
+ for (int i = 0; i < TEST_SIZE; i++) {
+ pid = fork();
+ if (pid == 0) {
+ exit(wrap_add_group(getpid(), PROT, GROUP_ID));
+ }
+ }
+
+ printf("fault is verified!\n");
+ return 0;
+}
+
+static int child_process(int semid)
+{
+ int ret = 0;
+ int spg_id = 0;
+ int retry_time = 0;
+
+ printf("process %d created.\n", getpid());
+
+ while (1) {
+ sem_dec_by_one(semid);
+retry:
+ ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ if (errno == ENODEV || errno == ENOSPC) {
+ printf("process %d add group failed once, retry...\n", getpid());
+ errno = 0;
+ if (retry_time++ < MAX_RETRY)
+ goto retry;
+ }
+
+ retry_time = 0;
+ if (ret < 0) {
+ printf("process %d add group unexpected error, ret is %d\n",
+ getpid(), ret);
+ sem_dec_by_one(semid);
+ return -1;
+ }
+ printf("process %d add group%d success!\n", getpid(), GROUP_ID);
+
+ errno = 0;
+ spg_id = ioctl_find_first_group(dev_fd, getpid());
+ if (spg_id < 0 && errno == ENODEV) {
+ printf("fault is found.\n");
+ ret = fault_verify();
+ sem_dec_by_one(semid);
+ return -1;
+ }
+ if (spg_id != GROUP_ID) {
+ printf("unexpected find group fault %d\n", spg_id);
+ return -1;
+ }
+
+ sem_dec_by_one(semid);
+ ret = wrap_del_from_group(getpid(), GROUP_ID);
+ if (ret < 0)
+ pr_info("del failed!");
+ else
+ pr_info("process %d del from group%d success!", getpid(), GROUP_ID);
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int status;
+ int sem;
+ pid_t first, second;
+
+ semid_add = sem_create(2234, "spg not alive test add group");
+ semid_exit = sem_create(3234, "spg not alive test exit group");
+ semid_create = sem_create(4234, "create");
+
+ sem1 = sem_create(1234, "sem lock for first process");
+ sem2 = sem_create(4321, "sem lock for second process");
+
+ sem_inc_by_one(sem1);
+ first = fork();
+ if (first < 0)
+ goto close_sem;
+ else if (first == 0)
+ exit(child_process(sem1));
+ sem_check_zero(sem1);
+
+ second = fork();
+ if (second < 0)
+ goto close_sem;
+ else if (second == 0)
+ exit(child_process(sem2));
+ sem_check_zero(sem2);
+
+ for (int i = 0; i < REPEAT; i++) {
+		sem_inc_by_one(sem1); // 1 leaves the group
+		sem_inc_by_one(sem2); // 2 joins the group
+
+		sem_check_zero(sem1); // 1 joins the group
+		sem_check_zero(sem2); // 2 leaves the group
+ }
+
+end:
+ kill(first, SIGKILL);
+ kill(second, SIGKILL);
+close_sem:
+ sem_close(sem1);
+ sem_close(sem2);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "concurrent group join and leave")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_10_walk_page_range_AA_lock.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_10_walk_page_range_AA_lock.c
new file mode 100644
index 000000000000..db70e7cf6718
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_10_walk_page_range_AA_lock.c
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2023. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri May 19 16:06:03 2023
+ */
+#include <sys/ioctl.h>
+#include <sys/syscall.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+
+#include "sharepool_lib.h"
+
+#define TEST_SIZE 0x200000
+#define NUM 200
+
+/*
+ * A bug was introduced when sharepool was rebased onto 5.10: if the memory
+ * being walked by mg_sp_walk_page_range is undergoing page migration at the
+ * same time, an AA deadlock is triggered by taking the page-table lock twice.
+ */
+static int case1()
+{
+ int err = 0;
+ int i, count = NUM;
+ unsigned long *addr[NUM] = {0};
+
+ for (i = 0; i < count; i++) {
+ addr[i] = wrap_sp_alloc(SPG_ID_DEFAULT, TEST_SIZE, 0);
+ if (addr[i] == (void *)-1) {
+ printf("ioctl alloc failed, %s.\n", strerror(errno));
+ count = i;
+ err = -1;
+ goto out;
+ }
+ }
+ printf("memory allocation done.\n");
+
+	// Free half of the memory first; this fragments it and makes page migration more likely
+ for (i = 0; i < count; i += 2) {
+ wrap_sp_free(addr[i]);
+ addr[i] = NULL;
+ }
+
+ for (i = 0; i < count; i++) {
+ if (!addr[i])
+ continue;
+
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = addr[i],
+ .size = TEST_SIZE,
+ };
+ err = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (err < 0) {
+			pr_info("ioctl_walk_page_range failed, errno: %d", errno);
+ err = -1;
+ goto out;
+ }
+
+ ioctl_walk_page_free(dev_fd, &wpr_info);
+ }
+	printf("walk_page_range done\n");
+
+out:
+ for (i = 0; i < count; i++) {
+ if (addr[i])
+ wrap_sp_free(addr[i]);
+ }
+ printf("memory free done.\n");
+
+ return err;
+}
+
+static int case1_child(void)
+{
+ int i = 1;
+
+ while (1) {
+ printf("memory compact start: %d\n", i++);
+ system("echo 1 > /proc/sys/vm/compact_memory");
+ sleep(1);
+ }
+
+ return 0;
+}
+
+static int testcase1()
+{
+ int ret = 0;
+ pid_t pid;
+
+ FORK_CHILD_ARGS(pid, case1_child());
+
+ for (int i = 0; i < 100; i++) {
+ printf("loop count: %d\n", i);
+ ret = case1();
+ if (ret < 0)
+ break;
+ }
+
+ KILL_CHILD(pid);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "loop: allocate memory, call mg_sp_walk_page_range, free memory, with memory compaction running in the background")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_dvpp_readonly.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_dvpp_readonly.c
new file mode 100644
index 000000000000..02d3161fb506
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_dvpp_readonly.c
@@ -0,0 +1,71 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ int ret = 0;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_PROT_RO,
+ .size = 40960,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = mprotect(alloc_info.addr, 40960, PROT_WRITE);
+ if (ret) {
+ pr_info("mprotect failed, %d, %d\n", ret, errno);
+ return ret;
+ }
+ memset(alloc_info.addr, 0, alloc_info.size);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE(testcase1, true)
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/Makefile b/tools/testing/sharepool/testcase/function_test/Makefile
new file mode 100644
index 000000000000..0d4d24db842a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/Makefile
@@ -0,0 +1,36 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+dvpp?=true
+
+ifeq ($(dvpp),true)
+ testcases:=test_two_user_process \
+ test_dvpp_pass_through \
+ test_u2k test_k2u \
+ test_alloc_free_two_process \
+ test_mm_mapped_to_multi_groups \
+ test_sp_ro \
+ test_alloc_readonly \
+ test_dvpp_multi_16G_alloc \
+ test_dvpp_readonly \
+ test_dvpp_multi_16G_k2task \
+ test_non_dvpp_group \
+ test_hugetlb_alloc_hugepage
+else
+ testcases:=test_two_user_process \
+ test_dvpp_pass_through \
+ test_u2k test_k2u \
+ test_alloc_free_two_process \
+ test_mm_mapped_to_multi_groups \
+ test_sp_ro
+endif
+
+default: $(testcases)
+
+install:
+ mkdir -p $(TOOL_BIN_DIR)/function_test
+ cp $(testcases) $(TOOL_BIN_DIR)/function_test
+ cp function_test.sh $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf $(testcases)
+
diff --git a/tools/testing/sharepool/testcase/function_test/function_test.sh b/tools/testing/sharepool/testcase/function_test/function_test.sh
new file mode 100755
index 000000000000..dc49a9cb2e0b
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/function_test.sh
@@ -0,0 +1,32 @@
+#!/bin/sh
+
+for line in test_two_user_process \
+ test_alloc_free_two_process \
+ test_mm_mapped_to_multi_groups \
+ test_alloc_readonly \
+ test_dvpp_pass_through \
+ test_u2k \
+ test_k2u \
+ test_dvpp_multi_16G_alloc \
+ test_dvpp_multi_16G_k2task \
+ test_non_dvpp_group \
+ test_dvpp_readonly
+do
+ ./function_test/$line
+	if [ $? -ne 0 ]; then
+ echo "testcase function_test/$line failed"
+ exit 1
+ fi
+ cat /proc/meminfo
+ free -m
+done
+
+#echo 100 > /proc/sys/vm/nr_hugepages
+#line=test_hugetlb_alloc_hugepage
+#./function_test/$line
+#if [ $? -ne 0 ] ;then
+# echo "testcase function_test/$line failed"
+# echo 0 > /proc/sys/vm/nr_hugepages
+# exit 1
+#fi
+#echo 0 > /proc/sys/vm/nr_hugepages
diff --git a/tools/testing/sharepool/testcase/function_test/test_alloc_free_two_process.c b/tools/testing/sharepool/testcase/function_test/test_alloc_free_two_process.c
new file mode 100644
index 000000000000..263821bee137
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_alloc_free_two_process.c
@@ -0,0 +1,303 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Two groups, two processes each, allocating and freeing memory many times concurrently.
+ *
+ * Semaphores are used for synchronization; processes sleep after creation until woken.
+ */
+
+#define NR_GROUP 100
+#define MAX_PROC_PER_GRP 100
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int alloc_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg, sem_t *child_sync, sem_t *grandchild_sync)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+ struct sp_alloc_info *alloc_infos = malloc(sizeof(*alloc_infos) * alloc_num);
+ if (!alloc_infos) {
+ pr_local_info("malloc failed");
+ return -1;
+ }
+
+	/* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync);
+ } while ((ret != 0) && errno == EINTR);
+
+ sleep(1); // it seems sem_wait doesn't work as expected
+ pr_local_info("start!!, ret is %d, errno is %d", ret, errno);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_group_by_pid failed, %d", group_id);
+ goto error_out;
+ }
+
+ for (int i = 0; i < alloc_num; i++) {
+		(alloc_infos + i)->flag = 0;
+		(alloc_infos + i)->spg_id = group_id;
+		(alloc_infos + i)->size = alloc_size;
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+			pr_local_info("ioctl alloc failed");
+ goto error_out;
+ } else {
+ if (IS_ERR_VALUE(alloc_infos[i].addr)) {
+				pr_local_info("sp_alloc return err is %ld", alloc_infos[i].addr);
+ goto error_out;
+ }
+ }
+
+ memset((void *)alloc_infos[i].addr, 'z', alloc_infos[i].size);
+ }
+
+ sem_post(grandchild_sync);
+ do {
+ ret = sem_wait(child_sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl free failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ pr_local_info("exit!!");
+ return 0;
+
+error_out:
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ return -1;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ sem_t *child_sync[MAX_PROC_PER_GRP] = {0};
+ sem_t *grandchild_sync[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+	// create sync semaphores for the grandchildren
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// create sync semaphores for the children
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// fork grandchildren and add them to the group
+ for (int i = 0; i < process_per_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+		} else if (pid == 0) {
+ ret = grandchild_process(num, child_sync[i], grandchild_sync[i]);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto error_out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+			/* tell the child it has joined the group */
+ sem_post(child_sync[i]);
+ }
+ }
+
+ for (int i = 0; i < process_per_group; i++)
+ do {
+ ret = sem_wait(grandchild_sync[i]);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < process_per_group; i++)
+ sem_post(child_sync[i]);
+ pr_local_info("grandchild-processes start to do sp_free");
+
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+ ret = -1;
+ }
+ }
+ pr_local_info("exit!!");
+ return ret;
+
+error_out:
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+	}
+ pr_local_info("exit!!");
+ return -1;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+	printf("Usage: [-p procs_per_group] [-g group_num] [-n alloc_num] [-s alloc_size]\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > 100000 || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "two groups, two processes each, allocating and freeing memory concurrently; basic sanity check")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_alloc_readonly.c b/tools/testing/sharepool/testcase/function_test/test_alloc_readonly.c
new file mode 100644
index 000000000000..3278cbbb2a0e
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_alloc_readonly.c
@@ -0,0 +1,588 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 23 02:17:32 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include <pthread.h>
+#include "sharepool_lib.h"
+
+#define GROUP_ID 1
+
+static bool is_addr_ro_range(unsigned long addr)
+{
+ return addr >= MMAP_SHARE_POOL_RO_START && addr < MMAP_SHARE_POOL_DVPP_START;
+}
+
+static jmp_buf sigsegv_env;
+static void sigsegv_handler(int num)
+{
+ pr_info("SIGSEGV 11 received.");
+ longjmp(sigsegv_env, 1);
+}
+
+static unsigned long alloc_readonly(int spg_id, unsigned long size, unsigned long flag)
+{
+ unsigned long va;
+ unsigned sp_flag = flag | SP_PROT_FOCUS | SP_PROT_RO;
+ void *addr = wrap_sp_alloc(spg_id, size, sp_flag);
+ if (addr == (void *)-1) {
+ pr_info("alloc read only memory failed, size %lx, flag %lx",
+ size, sp_flag);
+ return -1;
+ }
+
+ va = (unsigned long)addr;
+ if (!is_addr_ro_range(va)) {
+ pr_info("address not in read only range. %lx", va);
+ return -1;
+ }
+
+ return va;
+}
+
+int GROUP_TYPE[] = {
+ 1,
+ SPG_ID_AUTO,
+ SPG_ID_DEFAULT,
+};
+
+bool ALLOC_FLAG[] = {
+ false,
+ true,
+};
+
+unsigned long ALLOC_TYPE[] = {
+ SP_PROT_FOCUS | SP_PROT_RO,
+ 0,
+ SP_DVPP,
+};
+
+static int test(int spg_id, bool is_hugepage)
+{
+ int ret = 0;
+ unsigned long sp_flag = 0;
+ unsigned long size = PMD_SIZE;
+ void *addr;
+ char *caddr;
+ unsigned long uva, kva;
+ struct sigaction sa = {0};
+ sa.sa_handler = sigsegv_handler;
+ sa.sa_flags |= SA_NODEFER;
+ sigaction(SIGSEGV, &sa, NULL);
+
+	/* allocate read-only memory */
+ sp_flag |= SP_PROT_FOCUS;
+ sp_flag |= SP_PROT_RO;
+ if (is_hugepage)
+ sp_flag |= SP_HUGEPAGE;
+ pr_info("sp_flag: %lx, group_id:%d", sp_flag, spg_id);
+
+ addr = wrap_sp_alloc(spg_id, size, sp_flag);
+	if ((unsigned long)addr == -1) {
+		pr_info("alloc readonly memory failed, errno %d", errno);
+		return -1;
+ }
+
+	/* check that the address is within the read-only range */
+	pr_info("address is %lx", (unsigned long)addr);
+	if (!is_addr_ro_range((unsigned long)addr)) {
+ pr_info("address not in read only range.");
+ return -1;
+ }
+
+	/* try a direct read; expectation unclear, may receive SIGSEGV */
+ caddr = (char *)addr;
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+ pr_info("value at addr[0] is %d", (int)caddr[0]);
+ pr_info("read success expected.");
+ }
+
+	/* try to write; expect signal 11 (SIGSEGV) */
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+ memset(caddr, 'A', size);
+ pr_info("memset success unexpected.");
+ return -1;
+ }
+ pr_info("memset failed expected.");
+
+	/* u2k, then let the kernel write to it */
+ uva = (unsigned long)addr;
+ kva = wrap_u2k(uva, size);
+ if (!kva) {
+ pr_info("u2k failed, errno %d", errno);
+ return -1;
+ }
+ KAREA_ACCESS_SET('A', kva, size, out);
+ pr_info("kernel write success");
+ KAREA_ACCESS_CHECK('A', kva, size, out);
+ pr_info("kernel read success");
+
+	/* the user process tries to read again */
+	for (int i = 0; i < size; i++) {
+ if (caddr[i] != 'A') {
+ pr_info("caddr[%d] is %c, not %c", i, caddr[i], 'A');
+ return -1;
+ }
+ }
+ pr_info("user read success");
+
+	/* try to write again; expect signal 11 (SIGSEGV) */
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+ memset(caddr, 'A', size);
+ pr_info("memset success unexpected.");
+ return -1;
+ }
+ pr_info("memset failed expected.");
+
+	/* unshare the uva from the kernel */
+ if (wrap_unshare(kva, size) < 0) {
+ pr_info("unshare failed");
+ return -1;
+ }
+
+	/* free the uva */
+ if (wrap_sp_free_by_id(uva, spg_id) < 0) {
+ pr_info("free failed");
+ return -1;
+ }
+ pr_info("free success");
+
+ return 0;
+out:
+ return ret;
+
+}
+
+static int testcase1(void)
+{
+ int i, j;
+
+ for (i = 0; i < sizeof(GROUP_TYPE) / sizeof(GROUP_TYPE[0]); i++) {
+		/* join the group with read/write permission */
+ int group_id = GROUP_TYPE[i];
+ if (group_id != SPG_ID_DEFAULT) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0) {
+ pr_info("add group failed, errno %d", errno);
+ return ret;
+ }
+ group_id = ret;
+ }
+
+ for (j = 0; j < sizeof(ALLOC_FLAG) / sizeof(ALLOC_FLAG[0]); j++)
+ if (test(group_id, ALLOC_FLAG[j]))
+ goto out;
+ }
+
+ return 0;
+out:
+	pr_info("test failed for group id %d type %s",
+		GROUP_TYPE[i], ALLOC_FLAG[j] ? "hugepage" : "normal page");
+ return -1;
+}
+
+#define PROC_NUM 10
+static unsigned long UVA[2];
+static int tc2_child(int idx)
+{
+ int ret = 0;
+ unsigned long size = 10 * PMD_SIZE;
+ struct sigaction sa = {0};
+ sa.sa_handler = sigsegv_handler;
+ sa.sa_flags |= SA_NODEFER;
+ sigaction(SIGSEGV, &sa, NULL);
+
+	/* the child joins the group */
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+	/* try to write the memory; expect SIGSEGV */
+	for (int i = 0; i < sizeof(UVA) / sizeof(UVA[0]); i++) {
+		ret = setjmp(sigsegv_env);
+		if (!ret) {
+			memset((void *)UVA[i], 'A', size);
+			pr_info("child process write success unexpected");
+			return -1;
+		}
+		pr_info("child process %d write %s failed expected.",
+			idx, ALLOC_FLAG[i] ? "huge page" : "normal page");
+ }
+
+ return 0;
+}
+static int testcase2(void)
+{
+ int ret = 0;
+ unsigned long size = 10 * PMD_SIZE;
+ unsigned long uva;
+ pid_t child[PROC_NUM];
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+	/* allocate memory: normal page / huge page / dvpp */
+ for (int i = 0; i < sizeof(ALLOC_FLAG) / sizeof(ALLOC_FLAG[0]); i++) {
+ uva = alloc_readonly(GROUP_ID, size, ALLOC_FLAG[i]);
+ if (uva == -1) {
+ pr_info("alloc uva size %lx, flag %lx failed", size, ALLOC_FLAG[i]);
+ return -1;
+ }
+ UVA[i] = uva;
+ }
+
+	/* fork children and let them join the group */
+ for (int i = 0; i < PROC_NUM; i++) {
+ FORK_CHILD_ARGS(child[i], tc2_child(i));
+ }
+
+	/* reap the children */
+ for (int i = 0; i < PROC_NUM; i++) {
+ WAIT_CHILD_STATUS(child[i], out);
+ }
+
+	/* free the memory */
+	for (int i = 0; i < sizeof(UVA) / sizeof(UVA[0]); i++) {
+		if (wrap_sp_free_by_id(UVA[i], GROUP_ID) < 0) {
+ pr_info("free uva[%d] failed", i);
+ return -1;
+ }
+ }
+
+ return 0;
+out:
+ return ret;
+}
+
+#define REPEAT 20
+
+static void *thread_alloc_rdonly(void *spg_id)
+{
+ unsigned long size = PMD_SIZE;
+	/* allocate read-only memory, a 2M normal page and a 2M huge page per pass, REPEAT times */
+	for (int i = 0; i < REPEAT; i++) {
+		if (alloc_readonly((int)(unsigned long)spg_id, size, 0) == (unsigned long)-1 ||
+		    alloc_readonly((int)(unsigned long)spg_id, size, 1) == (unsigned long)-1)
+			return (void *)-1;
+		if (i % 10 == 0)
+			pr_info("%dMB RDONLY memory allocated in group %d", i * 4, (int)(unsigned long)spg_id);
+ }
+ return (void *)0;
+}
+
+static void *thread_alloc_normal(void *spg_id)
+{
+ unsigned long size = PMD_SIZE;
+	/* allocate normal memory, a 2M normal page and a 2M huge page per pass, REPEAT times */
+	for (int i = 0; i < REPEAT; i++) {
+		if (wrap_sp_alloc((int)(unsigned long)spg_id, size, 0) == (void *)-1 ||
+		    wrap_sp_alloc((int)(unsigned long)spg_id, size, 1) == (void *)-1)
+			return (void *)-1;
+		if (i % 10 == 0)
+			pr_info("%dMB memory allocated in group %d", i * 4, (int)(unsigned long)spg_id);
+ }
+ return (void *)0;
+}
+
+static void *thread_alloc_dvpp(void *spg_id)
+{
+ unsigned long size = PMD_SIZE;
+	/* allocate dvpp memory, a 2M normal page and a 2M huge page per pass, REPEAT times */
+	for (int i = 0; i < REPEAT; i++) {
+		if (wrap_sp_alloc((int)(unsigned long)spg_id, size, SP_DVPP) == (void *)-1 ||
+		    wrap_sp_alloc((int)(unsigned long)spg_id, size, SP_DVPP | SP_HUGEPAGE) == (void *)-1)
+			return (void *)-1;
+		if (i % 10 == 0)
+			pr_info("%dMB dvpp memory allocated in group %d", i * 4, (int)(unsigned long)spg_id);
+ }
+ return (void *)0;
+}
+
+void * (*thread_func[]) (void *) = {
+ thread_alloc_rdonly,
+ thread_alloc_normal,
+ thread_alloc_dvpp,
+};
+static int testcase3(void)
+{
+ int ret = 0;
+ pthread_t threads[3];
+ void *pret;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+ for (int i = 0; i < ARRAY_SIZE(thread_func); i++) {
+		ret = pthread_create(threads + i, NULL, thread_func[i], (void *)GROUP_ID);
+		if (ret) {
+ pr_info("pthread create failed.");
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < ARRAY_SIZE(threads); i++) {
+		ret = pthread_join(threads[i], &pret);
+		if (ret) {
+ pr_info("pthread join failed.");
+ return -1;
+ }
+ if (pret == (void *)-1)
+ pr_info("thread %d failed", i);
+ }
+
+
+ pr_info("threads allocating different memory from group success!");
+
+ for (int i = 0; i < ARRAY_SIZE(thread_func); i++) {
+		ret = pthread_create(threads + i, NULL, thread_func[i], (void *)0);
+		if (ret) {
+ pr_info("pthread create failed.");
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < ARRAY_SIZE(threads); i++) {
+		ret = pthread_join(threads[i], &pret);
+		if (ret) {
+ pr_info("pthread join failed.");
+ return -1;
+ }
+ if (pret == (void *)-1)
+ pr_info("thread %d failed", i);
+ }
+
+ pr_info("threads allocating different memory pass-through success!");
+
+ return 0;
+}
+
+static int testcase4(void)
+{
+ int ret = 0;
+ unsigned long size = 4UL * PMD_SIZE;
+ unsigned long va;
+ int count = 0;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+ while (1) {
+		va = alloc_readonly(GROUP_ID, size, 0);
+		if (va == -1) {
+			pr_info("alloc 8M memory %dth time failed.", count + 1);
+ return -1;
+ }
+ count++;
+ if (count % 100 == 0)
+ pr_info("memory allocated %dMB", 8 * count);
+ }
+
+ return 0;
+}
+
+static int testcase5(void)
+{
+ int ret = 0;
+ unsigned long size = PMD_SIZE;
+ unsigned long sp_flag = SP_DVPP | SP_PROT_RO | SP_PROT_FOCUS;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+ if (wrap_sp_alloc(GROUP_ID, PMD_SIZE, sp_flag) == (void *)-1) {
+ pr_info("alloc for dvpp readonly memory failed as expected");
+ } else {
+ pr_info("alloc for dvpp readonly memory success unexpected");
+ ret = -1;
+ }
+
+ return ret;
+}
+
+static int testcase6(void)
+{
+ int ret = 0;
+ unsigned long size = PMD_SIZE;
+ unsigned long sp_flag = SP_PROT_RO | SP_PROT_FOCUS;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+ if (wrap_sp_alloc(GROUP_ID, PMD_SIZE, sp_flag) == (void *)-1) {
+		pr_info("alloc readonly memory failed unexpectedly");
+ return -1;
+ }
+
+ sleep(1);
+ sharepool_print();
+
+ return ret;
+
+}
+
+#define RD_NUM 10
+#define WR_NUM 10
+static int testcase7(void)
+{
+ int ret = 0;
+ int spg_id = 1;
+ unsigned long sp_flag = 0;
+ unsigned long size = PMD_SIZE;
+ void *addr;
+ void *addr_rd[RD_NUM];
+ void *addr_wr[WR_NUM];
+ char *caddr;
+ int count = 0;
+ unsigned long uva, kva;
+ struct sigaction sa = {0};
+ bool is_hugepage = true;
+ pid_t pid;
+
+ sa.sa_handler = sigsegv_handler;
+ sa.sa_flags |= SA_NODEFER;
+ sigaction(SIGSEGV, &sa, NULL);
+
+	/* process A joins the group with read/write permission */
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add group failed, errno %d", errno);
+ return ret;
+ }
+
+	/* allocate N read-only buffers */
+ sp_flag |= SP_PROT_FOCUS;
+ sp_flag |= SP_PROT_RO;
+ if (is_hugepage)
+ sp_flag |= SP_HUGEPAGE;
+ pr_info("sp_flag: %lx", sp_flag);
+
+ for (int i = 0; i < RD_NUM; i++) {
+ addr = wrap_sp_alloc(spg_id, size, sp_flag);
+		if ((unsigned long)addr == -1) {
+			pr_info("alloc readonly memory failed, errno %d", errno);
+			return -1;
+		}
+		/* check that the address is within the read-only range */
+		pr_info("address is %lx", (unsigned long)addr);
+		if (!is_addr_ro_range((unsigned long)addr)) {
+			pr_info("address not in read only range.");
+			return -1;
+ }
+ addr_rd[i] = addr;
+
+ caddr = (char *)addr;
+		/* try to write; expect signal 11 (SIGSEGV) */
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+ memset(caddr, 'A', size);
+ pr_info("memset success unexpected.");
+ return -1;
+ }
+ pr_info("memset failed expected.");
+ }
+
+	/* allocate N read-write buffers */
+ sp_flag = 0;
+ if (is_hugepage)
+ sp_flag |= SP_HUGEPAGE;
+ pr_info("sp_flag: %lx", sp_flag);
+
+ for (int i = 0; i < WR_NUM; i++) {
+ addr = wrap_sp_alloc(spg_id, size, sp_flag);
+		if ((unsigned long)addr == -1) {
+			pr_info("alloc wr memory failed, errno %d", errno);
+			return -1;
+ }
+ addr_wr[i] = addr;
+
+		/* write; expected to succeed */
+ caddr = (char *)addr;
+ memset(caddr, 'Q', size);
+ }
+
+	/* process B joins the group normally */
+ pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add child to group failed");
+ exit(-1);
+ }
+
+ sigaction(SIGSEGV, &sa, NULL);
+
+ for (int i = 0; i < WR_NUM; i++) {
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+ memset(addr_rd[i], 'B', size);
+ pr_info("memset success unexpected.");
+ exit(-1);
+ }
+ pr_info("memset %d RD area failed expected.", i);
+ }
+
+ sharepool_print();
+
+ for (int i = 0; i < WR_NUM; i++) {
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+				pr_info("gonna memset addr 0x%lx", (unsigned long)addr_wr[i]);
+ memset(addr_wr[i], 'B', size);
+ pr_info("memset %d WR area success expected.", i);
+ } else {
+ pr_info("memset %d WR area failed unexpected.", i);
+ exit(-1);
+ }
+ }
+
+ exit(0);
+ }
+
+ ret = 0;
+ WAIT_CHILD_STATUS(pid, out);
+
+out:
+ return ret;
+
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "process joins a group with read/write permission and allocates read-only memory; the address lies in the reserved range; after u2k, kernel reads and writes succeed; the user process can then read but not write.")
+	TESTCASE_CHILD(testcase2, "after process A joins a group and allocates read-only memory, process B joins with read/write permission and tries to access it; reads succeed, writes fail.")
+	TESTCASE_CHILD(testcase3, "process A repeatedly allocates read-only, normal and dvpp memory; all addresses fall in the expected ranges")
+	TESTCASE_CHILD_MANUAL(testcase4, "process A allocates read-only memory repeatedly until the address space is exhausted")
+	TESTCASE_CHILD(testcase5, "try to allocate dvpp read-only memory; expected to fail")
+	TESTCASE_CHILD(testcase6, "allocate read-only memory and inspect the debug output")
+	TESTCASE_CHILD(testcase7, "process A joins a group and allocates read-only and read-write buffers; process B joins and tries to write them all; writes to the read-only buffers fail, writes to the read-write buffers succeed.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_alloc.c b/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_alloc.c
new file mode 100644
index 000000000000..edc189df398c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_alloc.c
@@ -0,0 +1,690 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define ALLOC_LOOP 10
+
+static int dvpp_alloc_group(int spg_id, unsigned long size, struct sp_alloc_info *array, int array_num)
+{
+ int i, ret;
+ for (i = 0; i < array_num; i++) {
+ array[i].flag = SP_DVPP;
+ array[i].spg_id = spg_id;
+ array[i].size = size;
+
+ ret = ioctl_alloc(dev_fd, &array[i]);
+ if (ret < 0) {
+ pr_info("alloc DVPP failed, errno: %d", errno);
+ return -1;
+ }
+ memset(array[i].addr, 0, array[i].size);
+	}
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. userspace process A joins a group
+ * 2. allocate DVPP shared memory 10 times
+ * 3. allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. all operations succeed;
+ * 2. the returned addresses do not overlap;
+ */
+static int testcase_dvpp_multi_16G_01(void)
+{
+ int ret, i, spg_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+	/* 1. userspace process A joins the group */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+	/* 2. allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* 3. allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ return -1;
+ }
+ }
+ pr_info("Free DVPP memory\n");
+	sharepool_print();
+	return 0;
+}
+
+static int child(sem_t *sync, sem_t *childsync)
+{
+ int i, ret;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+	/* 2. allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(996, 40960, alloc_group_info, ALLOC_LOOP);
+
+ /* 3、申请10次DVPP直调内存 */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+ }
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. userspace processes A/B join the same group, then each performs the two steps below;
+ * 2. allocate DVPP shared memory 10 times
+ * 3. allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. all operations succeed;
+ * 2. the returned addresses do not overlap;
+ */
+static int testcase_dvpp_multi_16G_02(void)
+{
+ int ret, status, i, spg_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+ sem_t *childsync = sem_open(childsync_name, O_CREAT, O_RDWR, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+	/* 1. userspace processes A/B join the same group */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* 2. allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* 3. allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+/*
+ * Test flow:
+ * 1. userspace processes A/B join different groups, then each performs the two steps below;
+ * 2. allocate DVPP shared memory 10 times
+ * 3. allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. all operations succeed;
+ * 2. the returned addresses do not overlap;
+ */
+static int testcase_dvpp_multi_16G_03(void)
+{
+ int ret, status, i, spg_id = 9116;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+ sem_t *childsync = sem_open(childsync_name, O_CREAT, O_RDWR, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+	/* 1. Userspace processes A and B join different groups */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ag_info.pid = pid;
+ ag_info.spg_id = 996;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* 2. Allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* 3. Allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+/*
+ * Test flow:
+ * 1. Allocate DVPP pass-through memory 10 times
+ * 2. Process A joins a share group
+ * 3. Allocate DVPP shared memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed;
+ * 2. The allocated addresses do not overlap;
+ */
+static int testcase_dvpp_multi_16G_04(void)
+{
+ int ret, i, spg_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+	/* 1. Allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+	/* 2. Userspace process A joins the group */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+	/* 3. Allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ return -1;
+ }
+ }
+	pr_info("Free DVPP memory\n");
+	sharepool_print();
+
+	return 0;
+}
+
+static int child05(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+	sem_post(childsync);
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Process A allocates DVPP pass-through memory 10 times
+ * 2. Process B joins the share group
+ * 3. Process A joins the share group
+ * Expected results:
+ * 1. Steps 1 and 2 succeed; joining the group in step 3 fails;
+ */
+static int testcase_dvpp_multi_16G_05(void)
+{
+ int ret, status, i, spg_id = 9116;
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+ sem_t *childsync = sem_open(childsync_name, O_CREAT, O_RDWR, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child05(sync, childsync));
+ }
+
+	/* 1. Allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+	/* 2. Userspace process B joins the group */
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* 3. Userspace process A joins the group */
+ ag_info.pid = getpid();
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+static int child06(sem_t *sync, sem_t *childsync)
+{
+ int i, ret;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+	/* 4. Allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(996, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* 4. Allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+ }
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Process A allocates DVPP pass-through memory 10 times
+ * 2. Process A joins the share group
+ * 3. Process B joins the share group
+ * 4. Process A allocates shared memory; process B allocates shared and
+ *    pass-through memory
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The allocated memory addresses differ
+ */
+static int testcase_dvpp_multi_16G_06(void)
+{
+ int ret, status, i, spg_id = 996;
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+ sem_t *childsync = sem_open(childsync_name, O_CREAT, O_RDWR, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child06(sync, childsync));
+ }
+
+	/* 1. Allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+	/* 2. Userspace process A joins the share group */
+ ag_info.pid = getpid();
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* 3. Userspace process B joins the share group */
+	ag_info.pid = pid;
+	ret = ioctl_add_group(dev_fd, &ag_info);
+	if (ret < 0) {
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* 4. Process A allocates shared memory */
+ dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+static int testcase1(void) { return testcase_dvpp_multi_16G_01(); }
+static int testcase2(void) { return testcase_dvpp_multi_16G_02(); }
+static int testcase3(void) { return testcase_dvpp_multi_16G_03(); }
+static int testcase4(void) { return testcase_dvpp_multi_16G_04(); }
+static int testcase5(void) { return testcase_dvpp_multi_16G_05(); }
+static int testcase6(void) { return testcase_dvpp_multi_16G_06(); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Userspace process A joins a group, allocates DVPP shared memory 10 times, then DVPP pass-through memory 10 times; expected to succeed")
+	TESTCASE_CHILD(testcase2, "Userspace processes A/B join the same group, each allocates DVPP shared memory 10 times, then DVPP pass-through memory 10 times; expected to succeed")
+	TESTCASE_CHILD(testcase3, "Userspace processes A/B join different groups, each allocates DVPP shared memory 10 times, then DVPP pass-through memory 10 times; expected to succeed")
+	TESTCASE_CHILD(testcase4, "Allocate DVPP pass-through memory 10 times, then process A joins a share group and allocates DVPP shared memory 10 times")
+	TESTCASE_CHILD(testcase5, "1. Process A allocates DVPP pass-through memory 10 times 2. Process B joins the share group 3. Process A joins the share group")
+	TESTCASE_CHILD(testcase6, "1. Process A allocates DVPP pass-through memory 10 times 2. Process A joins the share group 3. Process B joins the share group 4. Process A allocates shared memory; process B allocates shared and pass-through memory")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_k2task.c b/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_k2task.c
new file mode 100644
index 000000000000..a938ffa0382f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_k2task.c
@@ -0,0 +1,604 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define ALLOC_LOOP 10
+
+static int dvpp_alloc_group(int spg_id, unsigned long size, struct sp_alloc_info *array, int array_num)
+{
+	int i, ret;
+
+	for (i = 0; i < array_num; i++) {
+		array[i].flag = SP_DVPP;
+		array[i].spg_id = spg_id;
+		array[i].size = size;
+
+		ret = ioctl_alloc(dev_fd, &array[i]);
+		if (ret < 0) {
+			pr_info("alloc DVPP failed, errno: %d", errno);
+			return -1;
+		}
+		memset((void *)array[i].addr, 0, array[i].size);
+	}
+
+	return 0;
+}
+
+static int dvpp_k2u_group(int spg_id, unsigned long size, unsigned long kva,
+			  struct sp_make_share_info *array, int array_num)
+{
+	int i, ret;
+
+	for (i = 0; i < array_num; i++) {
+		array[i].kva = kva;
+		array[i].size = size;
+		array[i].spg_id = spg_id;
+		array[i].sp_flags = SP_DVPP;
+		array[i].pid = getpid();
+
+		ret = ioctl_k2u(dev_fd, &array[i]);
+		if (ret < 0) {
+			pr_info("ioctl_k2u failed, errno: %d", errno);
+			return -1;
+		}
+		memset((void *)array[i].addr, 0, array[i].size);
+	}
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Userspace process A joins a group
+ * 2. Allocate DVPP shared and pass-through memory 10 times each
+ * 3. Allocate K2TASK memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed;
+ * 2. The allocated addresses do not overlap;
+ */
+static int testcase_dvpp_multi_16G_01(void)
+{
+ int ret, i, group_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* 1. Userspace process A joins a group */
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+	/* 2. Allocate DVPP shared and pass-through memory 10 times each */
+ dvpp_alloc_group(group_id, 40960, alloc_group_info, ALLOC_LOOP);
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+	/* 3. Allocate K2TASK memory 10 times */
+ dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+	pr_info("Free DVPP memory\n");
+	sharepool_print();
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Allocate DVPP k2task memory 10 times
+ * 2. Userspace process A joins a group
+ * 3. Allocate pass-through and shared memory 10 times each
+ * Expected results:
+ * 1. All of the above operations succeed;
+ * 2. The allocated addresses do not overlap;
+ */
+static int testcase_dvpp_multi_16G_02(void)
+{
+ int ret, i, group_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* 1. Allocate K2TASK memory 10 times */
+ dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+	/* 2. Userspace process A joins a group */
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+	/* 3. Allocate DVPP shared and pass-through memory 10 times each */
+ dvpp_alloc_group(group_id, 40960, alloc_group_info, ALLOC_LOOP);
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+	pr_info("Free DVPP memory\n");
+	sharepool_print();
+
+	return 0;
+}
+
+static int child(sem_t *sync, sem_t *childsync)
+{
+ int i, ret;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* 2. Allocate K2TASK memory 10 times */
+ dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+	/* 3. Allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(996, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* 4. Allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+ }
+ ioctl_vfree(dev_fd, &ka_info);
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Userspace processes A/B join the same group, then each performs the
+ *    three actions below;
+ * 2. Allocate k2task memory 10 times
+ * 3. Allocate DVPP shared memory 10 times
+ * 4. Allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed;
+ * 2. The allocated addresses do not overlap;
+ */
+static int testcase_dvpp_multi_16G_03(void)
+{
+ int ret, status, i, spg_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+ sem_t *childsync = sem_open(childsync_name, O_CREAT, O_RDWR, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* 1. Userspace processes A and B join the same group */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* 2. Allocate K2TASK memory 10 times */
+ dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+	/* 3. Allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* 4. Allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+/*
+ * Test flow:
+ * 1. Userspace processes A/B join different groups, then each performs the
+ *    three actions below;
+ * 2. Allocate k2task memory 10 times
+ * 3. Allocate DVPP shared memory 10 times
+ * 4. Allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed;
+ * 2. The allocated addresses do not overlap;
+ */
+static int testcase_dvpp_multi_16G_04(void)
+{
+ int ret, status, i, spg_id = 9116;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+ sem_t *childsync = sem_open(childsync_name, O_CREAT, O_RDWR, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* 1. Userspace processes A and B join different groups */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ag_info.pid = pid;
+ ag_info.spg_id = 996;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* 2. Allocate K2TASK memory 10 times */
+ dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+	/* 3. Allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* 4. Allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase_dvpp_multi_16G_01, "1. Userspace process A joins a group 2. Allocate DVPP shared and pass-through memory 10 times each 3. Allocate K2TASK memory 10 times")
+	TESTCASE_CHILD(testcase_dvpp_multi_16G_02, "1. Allocate DVPP k2task memory 10 times 2. Userspace process A joins a group 3. Allocate pass-through and shared memory 10 times each")
+	TESTCASE_CHILD(testcase_dvpp_multi_16G_03, "1. Userspace processes A/B join the same group, each performing the next 3 actions; 2. Allocate k2task memory 10 times 3. Allocate DVPP shared memory 10 times 4. Allocate DVPP pass-through memory 10 times")
+	TESTCASE_CHILD(testcase_dvpp_multi_16G_04, "1. Userspace processes A/B join different groups, each performing the next 3 actions; 2. Allocate k2task memory 10 times 3. Allocate DVPP shared memory 10 times 4. Allocate DVPP pass-through memory 10 times")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_dvpp_pass_through.c b/tools/testing/sharepool/testcase/function_test/test_dvpp_pass_through.c
new file mode 100644
index 000000000000..010bf0e5bdf6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_dvpp_pass_through.c
@@ -0,0 +1,191 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * Test flow:
+ * 1. Userspace process A allocates memory N directly without joining a group.
+ * 2. Process B queries process A's group id and tries to join a group.
+ * 3. Process A shares memory N to the kernel via u2k; the kernel module reads N.
+ * 4. Process A frees memory N.
+ * 5. The kernel reads the memory again.
+ * Expected results:
+ * 1. The DVPP pass-through path is taken.
+ * 2. The query fails; joining the group succeeds.
+ * 3. Sharing succeeds; the kernel read succeeds.
+ * 4. Freeing succeeds.
+ * 5. The kernel read succeeds.
+ */
+static int child(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (!(ret < 0 && errno == ENODEV)) {
+ pr_info("unexpected parent group id. ret %d, errno %d", ret, errno);
+ ret = -1;
+ goto out;
+ } else {
+		pr_info("find first group failed as expected, errno: %d", errno);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getppid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 996,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ ret = -1;
+ goto out;
+ } else {
+ ret = 0;
+ }
+
+out:
+ sem_post(childsync);
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret, status;
+
+ char *sync_name = "/dvpp_pass_through";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+	if (sync == SEM_FAILED) {
+		pr_info("sem_open failed");
+		return -1;
+	}
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+ sem_t *childsync = sem_open(childsync_name, O_CREAT, O_RDWR, 0);
+	if (childsync == SEM_FAILED) {
+		pr_info("sem_open child failed");
+		return -1;
+	}
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP,
+ .spg_id = SPG_ID_DEFAULT,
+ .size = 10000,
+ };
+
+ alloc_info.flag = SP_HUGEPAGE;
+ alloc_info.size = PMD_SIZE;
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed, errno: %d", errno);
+ goto out;
+ }
+
+ memset((void *)alloc_info.addr, 'k', alloc_info.size);
+ struct karea_access_info kaccess_info = {
+ .mod = KAREA_CHECK,
+ .value = 'k',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &kaccess_info);
+ if (ret < 0) {
+ pr_info("karea read failed, errno: %d", errno);
+ goto out;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("unshare u2k failed, errno: %d", errno);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ goto out;
+ }
+/*
+ ret = ioctl_karea_access(dev_fd, &kaccess_info);
+ if (ret < 0) {
+ pr_info("karea read failed, errno: %d", errno);
+ goto out;
+ }
+ */
+
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "1. Userspace process A allocates memory N directly without joining a group. 2. Process B queries process A's group id and tries to join a group. 3. Process A shares memory N to the kernel via u2k; the kernel module reads N. 4. Process A frees memory N. 5. The kernel reads the memory again. Expected: 1. The DVPP pass-through path is taken. 2. The query fails; joining the group succeeds. 3. Sharing succeeds; the kernel read succeeds. 4. Freeing succeeds. 5. The kernel read succeeds.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_dvpp_readonly.c b/tools/testing/sharepool/testcase/function_test/test_dvpp_readonly.c
new file mode 100644
index 000000000000..efc51a9411b2
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_dvpp_readonly.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+static unsigned long kva_size = 0x200000;
+static unsigned long kva_normal;
+static unsigned long kva_huge;
+
+/*
+ * Test points:
+ * sp_alloc and k2u; in-group and pass-through; huge page, normal page,
+ * DVPP and non-DVPP
+ *
+ * Test steps:
+ * 1. Join a group (or use pass-through without joining)
+ * 2. Allocate user memory (sp_alloc or k2u) with the read-only attribute
+ * 3. mprotect() is expected to fail
+ * 4. memset is expected to kill the process
+ */
+static int test_route(bool k2u, bool auto_group, unsigned long sp_flags)
+{
+ int spg_id = 0, ret;
+ unsigned long uva;
+ unsigned long size = 0x1000;
+
+ if (auto_group) {
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0) {
+ pr_info("add group failed, %d, para: %s, %s, %lx",
+ errno, k2u ? "k2u" : "sp_alloc",
+ auto_group ? "add_group" : "passthrough", sp_flags);
+ return -1;
+ }
+ }
+
+ if (k2u) {
+ unsigned long kva = (sp_flags & SP_HUGEPAGE_ONLY) ? kva_huge : kva_normal;
+ uva = wrap_k2u(kva, size, spg_id, sp_flags | SP_PROT_RO);
+ } else
+ uva = (unsigned long)wrap_sp_alloc(spg_id, size, sp_flags | SP_PROT_RO);
+
+ if (!uva) {
+ pr_info("alloc user memory failed, %d, para: %s, %s, %lx",
+ errno, k2u ? "k2u" : "sp_alloc",
+ auto_group ? "add_group" : "passthrough", sp_flags);
+ return -1;
+ }
+
+	ret = mprotect((void *)uva, size, PROT_WRITE);
+	if (!(ret && errno == EACCES)) {
+		pr_info("mprotect did not fail as expected, ret:%d, err:%d, para: %s, %s, %lx",
+			ret, errno, k2u ? "k2u" : "sp_alloc",
+			auto_group ? "add_group" : "passthrough", sp_flags);
+		return -1;
+	}
+ memset((void *)uva, 0, size);
+
+	// should never reach this line
+ return -1;
+}
+
+static void pre_hook()
+{
+ kva_normal = wrap_vmalloc(kva_size, false);
+ if (!kva_normal)
+ exit(1);
+ kva_huge = wrap_vmalloc(kva_size, true);
+ if (!kva_huge) {
+ wrap_vfree(kva_normal);
+ exit(1);
+ }
+}
+#define pre_hook pre_hook
+
+static void post_hook()
+{
+ wrap_vfree(kva_huge);
+ wrap_vfree(kva_normal);
+}
+#define post_hook post_hook
+
+
+// sp_alloc, pass-through
+static int testcase1() { return test_route(false, false, 0); }
+static int testcase2() { return test_route(false, false, SP_DVPP); }
+static int testcase3() { return test_route(false, false, SP_HUGEPAGE_ONLY); }
+static int testcase4() { return test_route(false, false, SP_DVPP | SP_HUGEPAGE_ONLY); }
+// sp_alloc, in-group
+static int testcase5() { return test_route(false, true, 0); }
+static int testcase6() { return test_route(false, true, SP_DVPP); }
+static int testcase7() { return test_route(false, true, SP_HUGEPAGE_ONLY); }
+static int testcase8() { return test_route(false, true, SP_DVPP | SP_HUGEPAGE_ONLY); }
+// k2task
+static int testcase9() { return test_route(true, false, 0); }
+static int testcase10() { return test_route(true, false, SP_DVPP); }
+static int testcase11() { return test_route(true, false, SP_HUGEPAGE_ONLY); }
+static int testcase12() { return test_route(true, false, SP_DVPP | SP_HUGEPAGE_ONLY); }
+// k2spg
+static int testcase13() { return test_route(true, true, 0); }
+static int testcase14() { return test_route(true, true, SP_DVPP); }
+static int testcase15() { return test_route(true, true, SP_HUGEPAGE_ONLY); }
+static int testcase16() { return test_route(true, true, SP_DVPP | SP_HUGEPAGE_ONLY); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD_SIGNAL(testcase1, SIGSEGV, "sp_alloc passthrough, normal pages")
+	TESTCASE_CHILD_SIGNAL(testcase2, SIGSEGV, "sp_alloc passthrough, dvpp")
+	TESTCASE_CHILD_SIGNAL(testcase3, SIGSEGV, "sp_alloc passthrough, hugepages")
+	TESTCASE_CHILD_SIGNAL(testcase4, SIGSEGV, "sp_alloc passthrough, dvpp hugepages")
+	TESTCASE_CHILD_SIGNAL(testcase5, SIGSEGV, "sp_alloc via add_group, normal pages")
+	TESTCASE_CHILD_SIGNAL(testcase6, SIGSEGV, "sp_alloc via add_group, dvpp")
+	TESTCASE_CHILD_SIGNAL(testcase7, SIGSEGV, "sp_alloc via add_group, hugepages")
+	TESTCASE_CHILD_SIGNAL(testcase8, SIGSEGV, "sp_alloc via add_group, dvpp hugepages")
+	TESTCASE_CHILD_SIGNAL(testcase9, SIGSEGV, "k2task")
+	TESTCASE_CHILD_SIGNAL(testcase10, SIGSEGV, "k2task dvpp")
+	TESTCASE_CHILD_SIGNAL(testcase11, SIGSEGV, "k2task hugepages")
+	TESTCASE_CHILD_SIGNAL(testcase12, SIGSEGV, "k2task dvpp hugepages")
+	TESTCASE_CHILD_SIGNAL(testcase13, SIGSEGV, "k2spg")
+	TESTCASE_CHILD_SIGNAL(testcase14, SIGSEGV, "k2spg dvpp")
+	TESTCASE_CHILD_SIGNAL(testcase15, SIGSEGV, "k2spg hugepages")
+	TESTCASE_CHILD_SIGNAL(testcase16, SIGSEGV, "k2spg dvpp hugepages")
+};
+
+static struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_hugetlb_alloc_hugepage.c b/tools/testing/sharepool/testcase/function_test/test_hugetlb_alloc_hugepage.c
new file mode 100644
index 000000000000..12fc8ab52cae
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_hugetlb_alloc_hugepage.c
@@ -0,0 +1,113 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2022. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Jan 04 07:33:23 2022
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * The point under test is u2k on userspace hugepage memory. The DVPP flow
+ * has this scenario: the userspace hugepage memory is allocated through the
+ * low-level driver interface, and the corresponding vma is not marked as a
+ * hugepage vma, so it needs special handling.
+ *
+ * Allocate hugepage memory, access it read/write, do u2k, have the kernel
+ * check and write it, then read from userspace. Expectation: userspace
+ * read/write succeeds, u2k succeeds, the kernel write succeeds, and the
+ * userspace check succeeds.
+ */
+static int testcase_route(int flags, unsigned long len)
+{
+ int i, ret;
+ char *addr;
+
+ addr = ioctl_alloc_huge_memory(0, flags, 0, len);
+ if (!addr) {
+ pr_info("alloc huge memory failed, %d", errno);
+ return -1;
+ } else
+ pr_info("alloc huge memory success, %p, size:%#lx, flags: %d", addr, len, flags);
+
+ memset(addr, 'b', len);
+
+	for (i = 0; i < len; i++) {
+		if (addr[i] != 'b') {
+			pr_info("memset result check failed, i:%d, %c", i, addr[i]);
+			return -1;
+		}
+	}
+	pr_info("check memset success");
+
+	unsigned long kaddr = wrap_u2k(addr, len);
+	if (!kaddr) {
+		pr_info("u2k failed, errno %d", errno);
+		return -1;
+	}
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'b',
+ .addr = kaddr,
+ .size = len,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return -1;
+ }
+
+ karea_info.mod = KAREA_SET;
+ karea_info.value = 'c';
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ return -1;
+ }
+
+	for (i = 0; i < len; i++) {
+		if (addr[i] != 'c') {
+			pr_info("karea set result check failed, i:%d, %c", i, addr[i]);
+			return -1;
+		}
+	}
+
+ return 0;
+}
+
+#define TEST_FUNC(num, flags, len) \
+static int testcase##num() \
+{ \
+ return testcase_route(flags, len); \
+}
+
+TEST_FUNC(1, 0, 0x100)
+TEST_FUNC(2, 0, 0x200000)
+TEST_FUNC(3, 0, 0x2000000)
+TEST_FUNC(4, 1, 0x100)
+TEST_FUNC(5, 1, 0x200000)
+TEST_FUNC(6, 1, 0x2000000)
+TEST_FUNC(7, 2, 0x100)
+TEST_FUNC(8, 2, 0x200000)
+TEST_FUNC(9, 2, 0x2000000)
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "u2k on userspace hugepage memory: in the DVPP flow the hugepage memory is allocated through the low-level driver interface and the corresponding vma is not marked as a hugepage vma, so it needs special handling")
+	TESTCASE_CHILD(testcase2, "u2k on userspace hugepage memory: in the DVPP flow the hugepage memory is allocated through the low-level driver interface and the corresponding vma is not marked as a hugepage vma, so it needs special handling")
+	TESTCASE_CHILD(testcase3, "u2k on userspace hugepage memory: in the DVPP flow the hugepage memory is allocated through the low-level driver interface and the corresponding vma is not marked as a hugepage vma, so it needs special handling")
+	TESTCASE_CHILD(testcase4, "u2k on userspace hugepage memory: in the DVPP flow the hugepage memory is allocated through the low-level driver interface and the corresponding vma is not marked as a hugepage vma, so it needs special handling")
+	TESTCASE_CHILD(testcase5, "u2k on userspace hugepage memory: in the DVPP flow the hugepage memory is allocated through the low-level driver interface and the corresponding vma is not marked as a hugepage vma, so it needs special handling")
+	TESTCASE_CHILD(testcase6, "u2k on userspace hugepage memory: in the DVPP flow the hugepage memory is allocated through the low-level driver interface and the corresponding vma is not marked as a hugepage vma, so it needs special handling")
+	TESTCASE_CHILD(testcase7, "u2k on userspace hugepage memory: in the DVPP flow the hugepage memory is allocated through the low-level driver interface and the corresponding vma is not marked as a hugepage vma, so it needs special handling")
+	TESTCASE_CHILD(testcase8, "u2k on userspace hugepage memory: in the DVPP flow the hugepage memory is allocated through the low-level driver interface and the corresponding vma is not marked as a hugepage vma, so it needs special handling")
+	TESTCASE_CHILD(testcase9, "u2k on userspace hugepage memory: in the DVPP flow the hugepage memory is allocated through the low-level driver interface and the corresponding vma is not marked as a hugepage vma, so it needs special handling")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_k2u.c b/tools/testing/sharepool/testcase/function_test/test_k2u.c
new file mode 100644
index 000000000000..ebae2395ac5d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_k2u.c
@@ -0,0 +1,804 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 23 02:17:32 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define tc1_MSG_KEY 20
+#define tc1_MSG_TYPE 100
+struct msgbuf_alloc_info {
+ long type;
+ union {
+ struct sp_alloc_info alloc_info;
+ struct sp_make_share_info share_info;
+ };
+};
+
+/*
+ * The kernel module allocates and writes memory N, then k2u-shares it to a
+ * process that has not joined any group. The process reads memory N
+ * successfully, and the kernel module frees memory N.
+ */
+static int testcase1(void)
+{
+ int ret;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out_free;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_free;
+ }
+
+ char *buf = (char *)k2u_info.addr;
+ for (int i = 0; i < k2u_info.size; i++) {
+ if (buf[i] != 'b') {
+ pr_info("check k2u context failed");
+ ret = -1;
+ break;
+ }
+ }
+
+ if (ioctl_unshare(dev_fd, &k2u_info)) {
+ pr_info("unshare memory failed, errno: %d", errno);
+ ret = -1;
+ }
+
+out_free:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret < 0 ? -1 : 0;
+}
+
+/*
+ * The kernel module allocates and writes memory N, then k2u-shares it to a
+ * process that has joined a group. The process reads the memory successfully.
+ * A new process B joins the group and reads memory N successfully. The kernel
+ * module frees memory N. After the kernel performs unshare, userspace
+ * processes can no longer access the memory.
+ */
+static jmp_buf testcase2_env;
+static int testcase2_sigsegv_result = -1;
+static void testcase2_sigsegv_handler(int num)
+{
+ pr_info("segment fault occurs");
+ testcase2_sigsegv_result = 0;
+ longjmp(testcase2_env, 1);
+}
+
+static int testcase2(void)
+{
+ int ret, status = 0, group_id = 10;
+ pid_t pid;
+
+ char *sync_name = "/testcase2_k2u";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *sync_name2 = "/testcase2_k2u2";
+	sem_t *sync2 = sem_open(sync_name2, O_CREAT, O_RDWR, 0);
+ if (sync2 == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name2);
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = group_id,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out;
+ }
+
+ char *buf = (char *)k2u_info.addr;
+ for (int i = 0; i < k2u_info.size; i++) {
+ if (buf[i] != 'b') {
+ pr_info("check k2u context failed");
+ ret = -1;
+ goto out;
+ }
+ }
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ ret = -1;
+ goto out;
+ } else if (pid == 0) {
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != group_id) {
+ pr_info("child: unexpected group_id: %d", ret);
+ sem_post(sync2);
+ exit(-1);
+ }
+
+ char *buf = (char *)k2u_info.addr;
+ for (int i = 0; i < k2u_info.size; i++) {
+ if (buf[i] != 'b') {
+				pr_info("child: check k2u context failed, buf:%lx, i:%d, buf[i]:%d",
+					(unsigned long)buf, i, (int)buf[i]);
+ sem_post(sync2);
+ exit(-1);
+ }
+ }
+ sem_post(sync2);
+ exit(0);
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ sem_post(sync);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out_wait;
+ }
+
+ do {
+ ret = sem_wait(sync2);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ goto out_wait;
+ }
+
+ testcase2_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase2_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase2_env);
+ if (!ret) {
+ *(char *)k2u_info.addr = 'a';
+ pr_info("setjmp success, set char as 'a' success.");
+ }
+
+ if (testcase2_sigsegv_result) {
+ pr_info("ioctl unshare has no effect");
+ ret = -1;
+ goto out_wait;
+ } else
+ ret = 0;
+
+out_wait:
+ waitpid(pid, &status, 0);
+ if (ret || !WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+ if (WIFSIGNALED(status))
+ pr_info("child killed by signal: %d", WTERMSIG(status));
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+/*
+ * The kernel module allocates and writes memory N, then k2u-shares it to a
+ * process that has joined a group; every process in the group reads memory N
+ * successfully. The kernel module frees memory N. After the kernel performs
+ * unshare, userspace processes can no longer access the memory.
+ */
+static int childprocess3(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ sem_post(childsync);
+ return -1;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_make_share_info *k2u_info = &msgbuf.share_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*k2u_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ sem_post(childsync);
+ return -1;
+ }
+
+ char *buf = (char *)k2u_info->addr;
+ for (int i = 0; i < k2u_info->size; i++) {
+ if (buf[i] != 'p') {
+			pr_info("child: check k2u context failed, buf:%lx, i:%d, buf[i]:%d",
+				(unsigned long)buf, i, (int)buf[i]);
+ sem_post(childsync);
+ return -1;
+ }
+ }
+ sem_post(childsync);
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret, status = 0, group_id = 18;
+ pid_t pid;
+
+	char *sync_name = "/testcase3_k2u";
+	sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+	if (sync == SEM_FAILED) {
+		pr_info("sem_open failed");
+		return -1;
+	}
+	sem_unlink(sync_name);
+
+	char *sync_name2 = "/testcase3_k2u2";
+	sem_t *childsync = sem_open(sync_name2, O_CREAT, O_RDWR, 0);
+	if (childsync == SEM_FAILED) {
+		pr_info("sem_open failed");
+		return -1;
+	}
+	sem_unlink(sync_name2);
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'p',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out;
+ }
+
+	pid = fork();
+	if (pid < 0) {
+		pr_info("fork failed");
+		ret = -1;
+		goto out;
+	} else if (pid == 0) {
+		exit(childprocess3(sync, childsync));
+	}
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out_fork;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = group_id,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("ioctl_k2u failed, errno: %d", errno);
+		goto out_fork;
+	}
+
+	int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+	if (msgid < 0) {
+		pr_info("msgget failed, errno: %d", errno);
+		ret = -1;
+		goto out_fork;
+	}
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.share_info, &k2u_info, sizeof(k2u_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(k2u_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ goto out_fork;
+ }
+
+ sem_post(sync);
+
+ char *buf = (char *)k2u_info.addr;
+ for (int i = 0; i < k2u_info.size; i++) {
+ if (buf[i] != 'p') {
+ pr_info("check k2u context failed");
+ ret = -1;
+ goto out_fork;
+ }
+ }
+
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ goto out_fork;
+ }
+
+ testcase2_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase2_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase2_env);
+ if (!ret)
+ *(char *)k2u_info.addr = 'a';
+ if (testcase2_sigsegv_result) {
+ pr_info("ioctl unshare has no effect");
+ ret = -1;
+ goto out_fork;
+ }
+
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+		pr_info("childprocess3 exited unexpectedly");
+ ret = -1;
+ } else
+ ret = 0;
+ goto out;
+
+out_fork:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+/*
+ * The kernel module allocates and writes memory N, then k2u-shares it to a
+ * group; every process in the group reads N successfully. Process B joins
+ * the group and reads N successfully. The kernel module stops sharing, and
+ * every process in the group then fails to read N. The kernel module
+ * k2u-shares to the group again (this time using B's pid), and every process
+ * in the group reads N successfully. The kernel module frees memory N.
+ */
+
+static pid_t fork_and_add_group(int idx, int group_id, int (*child)(int, sem_t *, sem_t *),
+ sem_t *sync, sem_t *childsync)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx, sync, childsync));
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+ } else
+ return pid;
+}
+
+static int per_test_init(int i, sem_t **sync, sem_t **childsync)
+{
+ char buf[100];
+ sprintf(buf, "/test_k2u%d", i);
+ *sync = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (*sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+
+ sprintf(buf, "/test_k2u_child%d", i);
+ *childsync = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (*childsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+
+ return 0;
+}
+
+#define TEST4_SHM_KEY 1348
+#define TEST4_PROC_NUM 5
+
+struct shm_data {
+ struct sp_make_share_info k2u_info;
+ int results[TEST4_PROC_NUM];
+};
+
+static int childprocess4(int idx, sem_t *sync, sem_t *childsync)
+{
+ int ret;
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int shmid = shmget(TEST4_SHM_KEY, sizeof(struct shm_data), IPC_CREAT | 0666);
+ if (shmid < 0) {
+ pr_info("shmget failed, errno: %d", errno);
+ goto error;
+ }
+
+ struct shm_data *shmd = shmat(shmid, NULL, 0);
+ if (shmd == (void *)-1) {
+ pr_info("shmat failed, errno: %d", errno);
+ goto error;
+ }
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret < 0) {
+ pr_info("get group id failed");
+ goto error;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, shmd->k2u_info.addr)) {
+ pr_info("unexpected k2u addr: 0x%lx", shmd->k2u_info.addr);
+ goto error;
+ }
+
+ char *buf = (char *)shmd->k2u_info.addr;
+ for (int i = 0; i < shmd->k2u_info.size; i++) {
+ if (buf[i] != 'x') {
+ pr_info("memory check failed");
+ goto error;
+ }
+ }
+ shmd->results[idx] = 0;
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ testcase2_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase2_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase2_env);
+ if (!ret)
+ *(char *)shmd->k2u_info.addr = 'a';
+ if (testcase2_sigsegv_result) {
+ pr_info("ioctl unshare has no effect");
+ goto error;
+ }
+ shmd->results[idx] = 0;
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ if (!ioctl_judge_addr(dev_fd, shmd->k2u_info.addr)) {
+ pr_info("unexpected k2u addr: 0x%lx", shmd->k2u_info.addr);
+ goto error;
+ }
+
+ buf = (char *)shmd->k2u_info.addr;
+ for (int i = 0; i < shmd->k2u_info.size; i++) {
+ if (buf[i] != 'l') {
+ pr_info("memory check failed");
+ goto error;
+ }
+ }
+ shmd->results[idx] = 0;
+ sem_post(childsync);
+
+ return 0;
+
+error:
+ sem_post(childsync);
+ return -1;
+}
+
+static int testcase4(void)
+{
+ int child_num, i, ret;
+ int group_id = 15;
+
+ sem_t *syncs[TEST4_PROC_NUM];
+ sem_t *syncchilds[TEST4_PROC_NUM];
+ pid_t childs[TEST4_PROC_NUM] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ int shmid = shmget(TEST4_SHM_KEY, sizeof(struct shm_data), IPC_CREAT | 0666);
+ if (shmid < 0) {
+ pr_info("shmget failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct shm_data *shmd = shmat(shmid, NULL, 0);
+ if (shmd == (void *)-1) {
+ pr_info("shmat failed, errno: %d", errno);
+ return -1;
+ }
+ memset(shmd, 0, sizeof(*shmd));
+ for (i = 0; i < TEST4_PROC_NUM; i++)
+ shmd->results[i] = -1;
+ for (i = 0; i < TEST4_PROC_NUM - 1; i++) {
+ if (per_test_init(i, syncs + i, syncchilds + i)) {
+ child_num = i;
+ goto unfork;
+ }
+ pid_t pid = fork_and_add_group(i, group_id, childprocess4, syncs[i], syncchilds[i]);
+ if (pid < 0) {
+ child_num = i;
+ goto unfork;
+ }
+ childs[i] = pid;
+ }
+ child_num = i;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ goto unfork;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'x',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto vfree;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = group_id,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto vfree;
+ }
+
+ memcpy(&shmd->k2u_info, &k2u_info, sizeof(k2u_info));
+
+ for (i = 0; i < child_num; i++)
+ sem_post(syncs[i]);
+ for (i = 0; i < child_num; i++) {
+ do {
+ ret = sem_wait(syncchilds[i]);
+ } while (ret < 0 && errno == EINTR);
+ if (shmd->results[i]) {
+			pr_info("test4 child%d read k2u memory failed", i);
+ goto unshare;
+ }
+ shmd->results[i] = -1;
+ }
+
+// TODO: known issue: a process added to the group later cannot share memory
+// already k2u-shared to the group; remove this guard and retest once fixed
+#if 1
+	// fork a new process B, add it to the group, and check that B can read
+	// the shared memory
+ if (per_test_init(child_num, syncs + child_num, syncchilds + child_num))
+ goto unshare;
+ childs[child_num] = fork_and_add_group(child_num, group_id, childprocess4,
+ syncs[child_num], syncchilds[child_num]);
+ if (childs[child_num] < 0)
+ goto unshare;
+ child_num++;
+
+ sem_post(syncs[child_num - 1]);
+ do {
+ ret = sem_wait(syncchilds[child_num - 1]);
+ } while (ret < 0 && errno == EINTR);
+ if (shmd->results[child_num - 1]) {
+		pr_info("test4 child%d read k2u memory failed", child_num - 1);
+ goto unshare;
+ }
+ shmd->results[child_num - 1] = -1;
+#endif
+
+	// after unshare, reads from the other processes must fail
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl unshare failed, errno: %d", errno);
+ goto unshare;
+ }
+
+ for (i = 0; i < child_num; i++)
+ sem_post(syncs[i]);
+ for (i = 0; i < child_num; i++) {
+ do {
+ ret = sem_wait(syncchilds[i]);
+ } while (ret < 0 && errno == EINTR);
+ if (shmd->results[i]) {
+			pr_info("test4 child%d unshare fault check failed", i);
+ goto unshare;
+ }
+ shmd->results[i] = -1;
+ }
+
+	// k2u again; the other processes must be able to read the memory
+ karea_info.value = 'l';
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto vfree;
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto vfree;
+ }
+ memcpy(&shmd->k2u_info, &k2u_info, sizeof(k2u_info));
+
+ for (i = 0; i < child_num; i++)
+ sem_post(syncs[i]);
+ for (i = 0; i < child_num; i++) {
+ do {
+ ret = sem_wait(syncchilds[i]);
+ } while (ret < 0 && errno == EINTR);
+ if (shmd->results[i]) {
+			pr_info("test4 child%d read k2u memory failed", i);
+ goto unshare;
+ }
+ shmd->results[i] = -1;
+ }
+
+ for (i = 0; i < child_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ }
+
+ ret = ioctl_vfree(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vfree failed.");
+ return -1;
+ }
+
+ return 0;
+
+unshare:
+ ioctl_unshare(dev_fd, &k2u_info);
+vfree:
+ ioctl_vfree(dev_fd, &ka_info);
+unfork:
+ for (i = 0; i < child_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ }
+ return -1;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "The kernel module allocates and writes memory N, then k2u-shares it to a process not in any group; the process reads memory N successfully and the kernel module frees memory N.")
+	TESTCASE_CHILD(testcase2, "The kernel module allocates and writes memory N, then k2u-shares it to a process that has joined a group; the process reads the memory successfully. A new process B joins the group and reads memory N successfully. The kernel module frees memory N. After the kernel unshares, userspace processes can no longer access it.")
+	TESTCASE_CHILD(testcase3, "The kernel module allocates and writes memory N, then k2u-shares it to a process that has joined a group; every process in the group reads memory N successfully. The kernel module frees memory N. After the kernel unshares, userspace processes can no longer access it.")
+	TESTCASE_CHILD(testcase4, "The kernel module allocates and writes memory N, then k2u-shares it to a group; every process in the group reads N successfully. Process B joins the group and reads N successfully. The kernel module stops sharing and every process in the group fails to read N. The kernel module k2u-shares to the group again (using B's pid) and every process in the group reads N successfully. The kernel module frees memory N.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_mm_mapped_to_multi_groups.c b/tools/testing/sharepool/testcase/function_test/test_mm_mapped_to_multi_groups.c
new file mode 100644
index 000000000000..1144655a97b9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_mm_mapped_to_multi_groups.c
@@ -0,0 +1,435 @@
+#include <stdlib.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include "sharepool_lib.h"
+
+#define PROCESS_NUM 20
+#define THREAD_NUM 20
+#define GROUP_NUM 50
+#define ALLOC_TYPES 4
+
+static pthread_mutex_t mutex;
+static int group_ids[GROUP_NUM];
+static int add_success, add_fail;
+
+int query_func(int *group_num, int *ids)
+{
+ int ret = 0;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ if (!ids)
+ ids = spg_ids;
+ *group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = group_num,
+ .spg_ids = ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else {
+ return 0;
+ }
+}
+
+int work_func(int group_id)
+{
+ int ret = 0, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TYPES] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TYPES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+	// hugepages
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// hugepages, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal pages, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal pages
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TYPES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ //pr_info("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		//pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		//pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TYPES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ //close_device(dev_fd);
+ return 0;
+
+error:
+ //close_device(dev_fd);
+ return -1;
+}
+
+void *thread_query_and_work(void *arg)
+{
+ int ret = -1;
+ int group_num = 0;
+ int ids[GROUP_NUM];
+
+ for (int i = 0; i < 10 && ret; i++)
+ ret = query_func(&group_num, ids);
+
+ if (ret) {
+ pr_info("query_func failed: %d", ret);
+ pthread_exit((void *)0);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ ret = work_func(0);
+ if (ret != 0) {
+			pr_info("\nthread %lu finished with error, spg_id: %d\n",
+				(unsigned long)pthread_self(), 0);
+ pthread_exit((void *)1);
+ }
+ }
+
+ pthread_exit((void *)0);
+}
+
+void *thread_add_group(void *arg)
+{
+ int ret = 0;
+	for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (pthread_mutex_lock(&mutex) != 0) {
+ pr_info("get pthread mutex failed.");
+ }
+ if (ret < 0)
+ add_fail++;
+ else
+ add_success++;
+ pthread_mutex_unlock(&mutex);
+ }
+ pthread_exit((void *)0);
+}
+
+static int process_routine(void)
+{
+ int ret = 0;
+
+ // threads for alloc and u2k k2u
+ pthread_t tid1[THREAD_NUM];
+ for (int i = 0; i < THREAD_NUM; i++) {
+ ret = pthread_create(tid1 + i, NULL, thread_query_and_work, NULL);
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+
+	// N threads each try to add M groups: N*M attempts, only M should succeed
+ pthread_t tid2[THREAD_NUM];
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid2 + j, NULL, thread_add_group, NULL);
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+
+ // wait for add_group threads to return
+ for (int j = 0; j < THREAD_NUM; j++) {
+ void *tret;
+ ret = pthread_join(tid2[j], &tret);
+ if (ret < 0) {
+ pr_info("thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("thread %d failed.", j);
+ ret = -1;
+ } else {
+			pr_info("add group thread %d returned success", j);
+ }
+ }
+
+ // wait for work threads to return
+ for (int i = 0; i < THREAD_NUM; i++) {
+ void *tret;
+ ret = pthread_join(tid1[i], &tret);
+ if (ret < 0) {
+ pr_info("thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("thread %d failed.", i);
+ ret = -1;
+ } else {
+ pr_info("work thread %d return success!!", i);
+ }
+ }
+
+ return ret;
+}
+
+/* testcase1: 10 threads run the query + alloc routine on every joined group while another 10 threads keep joining new groups */
+static int testcase1(void)
+{
+ int ret = 0;
+
+ ret = process_routine();
+
+ int group_query_final;
+ query_func(&group_query_final, NULL);
+ pr_info("group query final is %d", group_query_final);
+ if (group_query_final != GROUP_NUM)
+ ret = -1;
+
+ pr_info("add_success: %d, add_fail: %d", add_success, add_fail);
+ if (add_success != GROUP_NUM || (add_fail + add_success) != THREAD_NUM * GROUP_NUM)
+ ret = -1;
+
+ add_success = add_fail = 0;
+ return ret;
+}
+
+/* testcase2: fork child processes that call the alloc/k2u/u2k APIs while spawning new threads that join new groups */
+static int testcase2(void)
+{
+ int ret = 0;
+
+ ret = process_routine();
+
+ // fork child processes, they should not copy parent's group
+ pid_t childs[PROCESS_NUM];
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ exit(process_routine());
+ }
+ childs[k] = pid;
+ }
+
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ int status;
+ if (waitpid(childs[k], &status, 0) < 0) {
+ pr_info("waitpid failed");
+ ret = -1;
+ }
+ if (status != 0) {
+ pr_info("child process %d pid %d exit unexpected, return value = %d", k, childs[k], status);
+ ret = -1;
+ } else {
+ pr_info("process %d exit success", k);
+ }
+ childs[k] = 0;
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "10 threads run the query + alloc routine on every joined group while another 10 threads join new groups")
+	TESTCASE_CHILD(testcase2, "processes call the alloc/k2u/u2k APIs while spawning new threads that join new groups")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_non_dvpp_group.c b/tools/testing/sharepool/testcase/function_test/test_non_dvpp_group.c
new file mode 100644
index 000000000000..008b0f803cb9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_non_dvpp_group.c
@@ -0,0 +1,167 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2022. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Apr 26 09:20:13 2022
+ */
+
+/*
+ * Tests for non_dvpp groups:
+ * 1. a process joins a group with the non_dvpp flag and allocates normal memory: succeeds
+ * 2. a process joins a group with the non_dvpp flag and allocates DVPP memory: fails
+ * 3. a process joins a group with the non_dvpp flag and does k2u on normal memory: succeeds
+ * 4. a process joins a group with the non_dvpp flag and does k2u on DVPP memory: fails
+ * 5. a process joins two groups, one normal and one non_dvpp, and allocates normal and
+ *    DVPP memory from each; repeat with the alloc order and the join order swapped
+ * 6. a process joins two groups, one normal and one non_dvpp, and allocates normal and
+ *    DVPP memory from each
+ * Repeat everything with huge pages.
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+// alloc through an explicitly joined non_dvpp group
+static int case1(unsigned long flag)
+{
+ int ret = wrap_add_group_non_dvpp(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (ret < 0) {
+ pr_info("add group failed: %d", ret);
+ abort();
+ }
+
+ char *buf = wrap_sp_alloc(ret, 1024, flag);
+ if (buf == (void *)-1)
+ return -1;
+
+ *buf = 'a';
+
+ return 0;
+}
+
+static int testcase1(void) { return case1(0); } // small pages, normal memory: succeeds
+static int testcase2(void) { return case1(SP_HUGEPAGE_ONLY); } // huge pages, normal memory: succeeds
+static int testcase3(void) { return !case1(SP_DVPP); } // small pages, DVPP memory: fails
+static int testcase4(void) { return !case1(SP_DVPP|SP_HUGEPAGE_ONLY); } // huge pages, DVPP memory: fails
+
+// alloc through the default group (direct call)
+static int case2(unsigned long flag)
+{
+ int ret = wrap_add_group_non_dvpp(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (ret < 0) {
+ pr_info("add group failed: %d", ret);
+ abort();
+ }
+
+ char *buf = wrap_sp_alloc(SPG_ID_DEFAULT, 1024, flag);
+ if (buf == (void *)-1)
+ return -1;
+
+ *buf = 'a';
+
+ return 0;
+}
+
+static int testcase5(void) { return case2(0); } // small pages, normal memory: succeeds
+static int testcase6(void) { return case2(SP_HUGEPAGE_ONLY); } // huge pages, normal memory: succeeds
+static int testcase7(void) { return case2(SP_DVPP); } // small pages, DVPP memory: succeeds
+static int testcase8(void) { return case2(SP_DVPP|SP_HUGEPAGE_ONLY); } // huge pages, DVPP memory: succeeds
+
+// k2group
+static int case3(unsigned long flag)
+{
+ int ret = wrap_add_group_non_dvpp(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (ret < 0) {
+ pr_info("add group failed: %d", ret);
+ abort();
+ }
+
+ unsigned long kva = wrap_vmalloc(1024, flag & SP_HUGEPAGE_ONLY);
+ if (!kva) {
+ pr_info("alloc kva failed: %#lx", flag);
+ abort();
+ }
+
+ char *buf = (char *)wrap_k2u(kva, 1024, ret, flag & ~SP_HUGEPAGE_ONLY);
+ if (!buf)
+ return -1;
+
+ *buf = 'a';
+
+ return 0;
+}
+
+static int testcase9(void) { return case3(0); } // small pages, normal memory: succeeds
+static int testcase10(void) { return case3(SP_HUGEPAGE_ONLY); } // huge pages, normal memory: succeeds
+static int testcase11(void) { return !case3(SP_DVPP); } // small pages, DVPP memory: fails
+static int testcase12(void) { return !case3(SP_DVPP|SP_HUGEPAGE_ONLY); } // huge pages, DVPP memory: fails
+
+// k2task
+static int case4(unsigned long flag)
+{
+ int ret = wrap_add_group_non_dvpp(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (ret < 0) {
+ pr_info("add group failed: %d", ret);
+ abort();
+ }
+
+ unsigned long kva = wrap_vmalloc(1024, flag & SP_HUGEPAGE_ONLY);
+ if (!kva) {
+ pr_info("alloc kva failed: %#lx", flag);
+ abort();
+ }
+
+ char *buf = (char *)wrap_k2u(kva, 1024, SPG_ID_DEFAULT, flag & ~SP_HUGEPAGE_ONLY);
+	if (!buf)
+ return -1;
+
+ *buf = 'a';
+
+ return 0;
+}
+static int testcase13(void) { return case4(0); } // small pages, normal memory: succeeds
+static int testcase14(void) { return case4(SP_HUGEPAGE_ONLY); } // huge pages, normal memory: succeeds
+static int testcase15(void) { return case4(SP_DVPP); } // small pages, DVPP memory: succeeds
+static int testcase16(void) { return case4(SP_DVPP|SP_HUGEPAGE_ONLY); } // huge pages, DVPP memory: succeeds
+
+static struct testcase_s testcases[] = {
+// TESTCASE_CHILD(testcase1, true)
+// TESTCASE_CHILD(testcase2, true)
+// TESTCASE_CHILD(testcase3, false)
+// TESTCASE_CHILD(testcase4, false)
+// TESTCASE_CHILD(testcase5, true)
+// TESTCASE_CHILD(testcase6, true)
+// TESTCASE_CHILD(testcase7, true)
+// TESTCASE_CHILD(testcase8, true)
+// TESTCASE_CHILD(testcase9, true)
+// TESTCASE_CHILD(testcase10, true)
+// TESTCASE_CHILD(testcase11, false)
+// TESTCASE_CHILD(testcase12, false)
+// TESTCASE_CHILD(testcase13, true)
+// TESTCASE_CHILD(testcase14, true)
+// TESTCASE_CHILD(testcase15, true)
+// TESTCASE_CHILD(testcase16, true)
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_sp_ro.c b/tools/testing/sharepool/testcase/function_test/test_sp_ro.c
new file mode 100644
index 000000000000..769feb7afa94
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_sp_ro.c
@@ -0,0 +1,719 @@
+#include "sharepool_lib.h"
+#include "sem_use.h"
+#include <stdlib.h>
+#include <errno.h>
+#include <assert.h>
+#include <pthread.h>
+#include <sys/types.h>
+
+#define PROC_NUM 8
+#define THREAD_NUM 5
+#define GROUP_NUM 16
+#define ALLOC_TYPE 4
+#define REPEAT_TIMES 2
+#define ALLOC_SIZE PAGE_SIZE
+#define PROT (PROT_READ | PROT_WRITE)
+
+static int group_ids[GROUP_NUM];
+static int default_id = 1;
+static int semid;
+
+static int add_multi_group();
+static int check_multi_group();
+static int delete_multi_group();
+static int process();
+void *thread_and_process_helper(int group_id);
+void *del_group_thread(void *arg);
+void *del_proc_from_group(void *arg);
+
+// SP_PROT_FOCUS allocation test
+static int testcase1(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+	// alloc memory with SP_PROT_FOCUS | SP_PROT_RO
+	pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+	if (pret == (void *)-1) {
+		pr_info("process %d alloc failed.", getpid());
+		return -1;
+	}
+	pr_info("process %d alloc success.", getpid());
+
+	// mprotect() must fail with EACCES on an SP_PROT_RO mapping
+	ret = mprotect(pret, page_size, PROT_WRITE);
+	if (!(ret && errno == EACCES)) {
+		pr_info("mprotect should fail, %d, %d\n", ret, errno);
+		return -1;
+	}
+
+ // memset should fail and generate a SIGSEGV
+ memset((void *)pret, 0, page_size);
+
+ return -1;
+}
+
+// SP_PROT_FOCUS used alone, expected to fail
+static int testcase2(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc memory with SP_PROT_FOCUS, should fail
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_PROT_FOCUS | SP_HUGEPAGE invalid combination test
+static int testcase3(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc memory with SP_PROT_FOCUS | SP_HUGEPAGE, should fail
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_HUGEPAGE);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_PROT_FOCUS | SP_DVPP invalid combination test
+static int testcase4(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc memory with SP_PROT_FOCUS | SP_DVPP, should fail
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_DVPP);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_PROT_FOCUS | SP_PROT_RO | SP_HUGEPAGE combined use, expected to succeed
+static int testcase5(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+	// alloc memory with SP_PROT_FOCUS | SP_PROT_RO | SP_HUGEPAGE, should succeed
+	pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO | SP_HUGEPAGE);
+	pr_info("pret is %p", pret);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = -1;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = 0;
+ }
+
+ return ret;
+}
+
+// SP_PROT_FOCUS | SP_PROT_RO | SP_DVPP invalid combination test
+static int testcase6(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc memory with SP_PROT_FOCUS | SP_PROT_RO | SP_DVPP, should fail
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO | SP_DVPP);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_RO area memory size limit test
+static int testcase7(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+ unsigned long sp_ro_1GB = 1073741824;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+	// alloc 64GB memory in SP_RO area
+	for (int i = 0; i < 64; i++) {
+		pret = wrap_sp_alloc(default_id, sp_ro_1GB, SP_PROT_FOCUS | SP_PROT_RO);
+		if (pret == (void *)-1) {
+			pr_info("process %d alloc 1GB %d time failed.", getpid(), i + 1);
+			return -1;
+		}
+		pr_info("process %d alloc 1GB %d time success.", getpid(), i + 1);
+	}
+
+ // alloc another 4k memory in SP_RO area
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc another 4k failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc another 4k success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_RO area minimum allocation granularity test
+static int testcase8(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+ int times = 32768;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+	// the minimum block in the SP_RO area is 2M, so 32768 allocations exhaust the whole 64GB area
+	for (int i = 0; i < times; i++) {
+		pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+		pr_info("memory address is %lx", (unsigned long)pret);
+		if (pret == (void *)-1) {
+			pr_info("process %d alloc memory %d time failed.", getpid(), i + 1);
+			return -1;
+		}
+		pr_info("process %d alloc memory %d time success.", getpid(), i + 1);
+	}
+
+ // alloc another 4k memory in SP_RO area
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc another 4k failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc another 4k success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// multi-process SP_RO alloc/free concurrency stress test
+static int testcase9(void)
+{
+ int proc_num = 1000;
+ int prints_num = 3;
+ int ret = 0;
+ unsigned long page_size = PAGE_SIZE;
+ int childs[proc_num];
+ int prints[prints_num];
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("parent %d add into group failed. errno: %d", getpid(), ret);
+ return -1;
+ }
+
+ // create process alloc and free SP_RO memory
+ for (int i = 0; i < proc_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ while (1) {
+ void * pret;
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ ret = wrap_sp_free(pret);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+ }
+ }
+ else {
+ childs[i] = pid;
+ }
+ }
+
+ // print sharepool maintenance interface
+ for(int i = 0; i < prints_num; i++){
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ }
+ else {
+ prints[i] = pid;
+ }
+ }
+
+ sleep(1);
+
+	// kill and reap the child processes
+	for (int i = 0; i < proc_num; i++) {
+		kill(childs[i], SIGKILL);
+		int status;
+		waitpid(childs[i], &status, 0);
+	}
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ int status;
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD_SIGNAL(testcase1, SIGSEGV, "SP_PROT_FOCUS allocation test")
+	TESTCASE_CHILD(testcase2, "SP_PROT_FOCUS used alone, expected to fail")
+	TESTCASE_CHILD(testcase3, "SP_PROT_FOCUS | SP_HUGEPAGE invalid combination test")
+	TESTCASE_CHILD(testcase4, "SP_PROT_FOCUS | SP_DVPP invalid combination test")
+	TESTCASE_CHILD(testcase5, "SP_PROT_FOCUS | SP_PROT_RO | SP_HUGEPAGE combined use, expected to succeed")
+	TESTCASE_CHILD(testcase6, "SP_PROT_FOCUS | SP_PROT_RO | SP_DVPP invalid combination test")
+	TESTCASE_CHILD(testcase7, "SP_RO area memory size limit test")
+	TESTCASE_CHILD(testcase8, "SP_RO area minimum allocation granularity test")
+	//TESTCASE_CHILD(testcase9, "multi-process SP_RO alloc/free concurrency stress test")
+};
+
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else
+ for (int i = 0; i < GROUP_NUM; i++)
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ return ret;
+}
+
+static int delete_multi_group()
+{
+ int ret = 0;
+ int fail = 0, suc = 0;
+ // delete from all groups
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_del_from_group(getpid(), group_ids[i]);
+ if (ret < 0) {
+ //pr_info("process %d delete from group %d failed, errno: %d", getpid(), group_ids[i], errno);
+ fail++;
+ }
+ else {
+ pr_info("process %d delete from group %d success", getpid(), group_ids[i]);
+ suc++;
+ }
+ }
+
+ return fail;
+}
+
+static int process()
+{
+ int ret = 0;
+ for (int j = 0; j < REPEAT_TIMES; j++) {
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = thread_and_process_helper(group_ids[i]);
+ if (ret < 0) {
+ pr_info("thread_and_process_helper failed");
+ return -1;
+ }
+ }
+ }
+
+ return ret;
+}
+
+static int try_del_from_group(int group_id)
+{
+	int ret = wrap_del_from_group(getpid(), group_id);
+
+	return ret < 0 ? -errno : 0;
+}
+
+void *thread_and_process_helper(int group_id)
+{
+ int ret = 0, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TYPE] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TYPE] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+	// huge pages
+	alloc_info[0].flag = SP_HUGEPAGE;
+	alloc_info[0].spg_id = group_id;
+	alloc_info[0].size = 2 * PMD_SIZE;
+
+	// huge pages, DVPP
+	alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+	alloc_info[1].spg_id = group_id;
+	alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal pages, DVPP
+	alloc_info[2].flag = SP_DVPP;
+	alloc_info[2].spg_id = group_id;
+	alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal pages
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		//pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		//pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ return 0;
+
+error:
+ return -1;
+}
+
+void *del_group_thread(void *arg)
+{
+ int ret = 0;
+	int i = (int)(long)arg;
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ pr_info("thread %d now tries to exit from group %d", getpid() + i + 1, default_id);
+ ret = wrap_del_from_group(getpid() + i + 1, default_id);
+ if (ret < 0)
+ pthread_exit((void *)-1);
+ pthread_exit((void *)0);
+}
+
+void *del_proc_from_group(void *arg)
+{
+	sem_dec_by_one(semid);
+	pthread_exit((void *)(long)wrap_del_from_group((int)(long)arg, default_id));
+}
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_two_user_process.c b/tools/testing/sharepool/testcase/function_test/test_two_user_process.c
new file mode 100644
index 000000000000..62d44eaf154f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_two_user_process.c
@@ -0,0 +1,626 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 18 06:46:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h> // exit
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+
+static jmp_buf testcase1_env;
+static int testcase1_sigsegv_result = -1;
+static void testcase1_sigsegv_handler(int num)
+{
+ pr_info("segment fault occurs");
+ testcase1_sigsegv_result = 0;
+ longjmp(testcase1_env, 1);
+}
+
+/*
+ * Test point 1: user process A joins a group and allocates memory; A forks B
+ * and adds B to the group; after joining, B writes the memory successfully and
+ * A reads it successfully; A then frees the memory and B's next write fails.
+ * (allocator reads and frees; the other process writes)
+ */
+static int testcase1_grandchild1(sem_t *sync, sem_t *grandsync, int group_id,
+ unsigned long addr, unsigned long size)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != group_id) {
+ pr_info("unexpected group_id: %d", ret);
+ sem_post(grandsync);
+ return -1;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, addr)) {
+ pr_info("invalid address");
+ sem_post(grandsync);
+ return -1;
+ }
+
+ memset((void *)addr, 'm', size);
+
+ sem_post(grandsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ testcase1_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase1_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase1_env);
+ if (!ret)
+ *(char *)addr = 'a';
+ if (testcase1_sigsegv_result) {
+ pr_info("sp_free has no effect");
+ ret = -1;
+ } else
+ ret = 0;
+
+ return ret;
+}
+
+static int testcase1_child1(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ char *sync_name = "/testcase1_sync1";
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0666, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+	char *grandsync_name = "/testcase1_grandsync1";
+	sem_t *grandsync = sem_open(grandsync_name, O_CREAT, 0666, 0);
+	if (grandsync == SEM_FAILED) {
+		pr_info("sem_open failed");
+		return -1;
+	}
+ sem_unlink(grandsync_name);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_grandchild1(sync, grandsync, group_id, alloc_info->addr, alloc_info->size));
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ sem_post(sync);
+ ret = sem_wait(grandsync);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+ for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'm') {
+ pr_info("data check failed");
+ goto error_out;
+ }
+ }
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(sync);
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ return -1;
+
+	ret = ioctl_find_first_group(dev_fd, pid);
+	if (ret >= 0) {
+		pr_info("child should have left the group, ioctl_find_first_group ret: %d, errno: %d", ret, errno);
+		return -1;
+	}
+	return 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+
+ return ret;
+}
+
+/*
+ * Test point 2: user process A joins a group; A forks B and adds B to the
+ * group; A allocates memory and writes to it; B reads the memory successfully
+ * and then frees it; A's next write fails.
+ * (allocator writes; the other process reads and frees)
+ */
+#define tc1_MSG_KEY 20
+#define tc1_MSG_TYPE 100
+struct msgbuf_alloc_info {
+ long type;
+ struct sp_alloc_info alloc_info;
+};
+
+static int testcase1_grandchild2(sem_t *sync, sem_t *grandsync, int group_id)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != group_id) {
+ pr_info("unexpected group_id: %d", ret);
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+ for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'a') {
+ pr_info("data check failed");
+ goto error_out;
+ }
+ }
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(grandsync);
+
+ return 0;
+
+error_out:
+ sem_post(grandsync);
+ return -1;
+}
+
+static int testcase1_child2(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = ++(alloc_info->spg_id);
+
+ char *sync_name = "/testcase1_sync2";
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0666, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+	char *grandsync_name = "/testcase1_grandsync2";
+	sem_t *grandsync = sem_open(grandsync_name, O_CREAT, 0666, 0);
+	if (grandsync == SEM_FAILED) {
+		pr_info("sem_open failed");
+		return -1;
+	}
+ sem_unlink(grandsync_name);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_grandchild2(sync, grandsync, group_id));
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, alloc_info, sizeof(*alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(*alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ memset((void *)alloc_info->addr, 'a', alloc_info->size);
+
+ sem_post(sync);
+ ret = sem_wait(grandsync);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ testcase1_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase1_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase1_env);
+ if (!ret)
+ *(char *)alloc_info->addr = 'a';
+ if (testcase1_sigsegv_result) {
+ pr_info("sp_free has no effect");
+ ret = -1;
+ goto error_out;
+ }
+
+ waitpid(pid, NULL, 0);
+ ret = ioctl_find_first_group(dev_fd, pid);
+	if (ret >= 0) {
+		pr_info("ioctl_find_first_group unexpectedly succeeded, ret: %d, errno: %d", ret, errno);
+		return -1;
+	}
+
+	return 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+
+ return ret;
+}
+
+/*
+ * Test point 3: userspace process A forks B and adds both B and A to the
+ * group. B allocates memory and writes to it, A reads it successfully and
+ * then frees it, after which a write by process B fails.
+ * (another process allocates and writes, the owner reads, then frees)
+ */
+static int testcase1_grandchild3(sem_t *sync, sem_t *grandsync, struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != group_id) {
+ pr_info("unexpected group_id: %d", ret);
+ goto error_out;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, alloc_info, sizeof(*alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(*alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ memset((void *)alloc_info->addr, 'x', alloc_info->size);
+
+ sem_post(grandsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(grandsync);
+
+ return 0;
+
+error_out:
+ sem_post(grandsync);
+ return -1;
+}
+
+static int testcase1_child3(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ char *sync_name = "/testcase1_sync3";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *grandsync_name = "/testcase1_grandsync3";
+ sem_t *grandsync = sem_open(grandsync_name, O_CREAT, O_RDWR, 0);
+	if (grandsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(grandsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_grandchild3(sync, grandsync, alloc_info));
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ sem_post(sync);
+ ret = sem_wait(grandsync);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+ struct msgbuf_alloc_info msgbuf = {0};
+ alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+ for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'x') {
+ pr_info("data check failed");
+ goto error_out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(grandsync);
+ } while (ret < 0 && errno == EINTR);
+
+ testcase1_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase1_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase1_env);
+ if (!ret)
+ *(char *)alloc_info->addr = 'a';
+ if (testcase1_sigsegv_result) {
+ pr_info("sp_free has no effect");
+ ret = -1;
+ goto error_out;
+ }
+
+ waitpid(pid, NULL, 0);
+ ret = ioctl_find_first_group(dev_fd, pid);
+	if (ret >= 0) {
+		pr_info("ioctl_find_first_group unexpectedly succeeded, ret: %d, errno: %d", ret, errno);
+		return -1;
+	}
+
+ return 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+
+ return ret;
+}
+
+static int testcase1(void)
+{
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .spg_id = 10,
+ .size = 100 * PAGE_SIZE,
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .spg_id = 12,
+ .size = 10 * PMD_SIZE,
+ },
+ {
+ .flag = SP_DVPP,
+ .spg_id = 19,
+ .size = 100000,
+ },
+ {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = 19,
+ .size = 10000000,
+ },
+ };
+
+ int (*child_funcs[])(struct sp_alloc_info *) = {
+ testcase1_child1,
+ testcase1_child2,
+ testcase1_child3,
+ };
+
+ for (int j = 0; j < sizeof(child_funcs) / sizeof(child_funcs[0]); j++) {
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(child_funcs[j](alloc_infos + i));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase1 failed!!, i: %d, j: %d", i, j);
+ return -1;
+ }
+ }
+ }
+
+ pr_info("testcase1 success!!");
+ return 0;
+}
+
+/*
+int main()
+{
+ int ret = 0;
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ ret += testcase1();
+
+ close_device(dev_fd);
+ return ret;
+}
+*/
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "Test point 1: userspace process A joins a group and allocates memory; A forks B and adds B to the group; after joining, B writes the memory successfully and A reads it successfully; the memory is then freed and a subsequent write by B fails.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_u2k.c b/tools/testing/sharepool/testcase/function_test/test_u2k.c
new file mode 100644
index 000000000000..c8d525db8929
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_u2k.c
@@ -0,0 +1,490 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Nov 21 02:21:35 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define tc1_MSG_KEY 20
+#define tc1_MSG_TYPE 100
+struct msgbuf_alloc_info {
+ long type;
+ union {
+ struct sp_alloc_info alloc_info;
+ struct sp_make_share_info share_info;
+ };
+};
+
+/*
+ * Userspace process A joins a group, allocates and writes memory N, and shares
+ * it to the kernel via u2k; the kernel module reads N successfully, and
+ * process B reads N successfully after joining the group. After A stops
+ * sharing N, the kernel module can no longer read N while B still can.
+ * A then frees N.
+ */
+static int grandchild1(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != alloc_info->spg_id) {
+ pr_info("unexpected group_id: %d", ret);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+	for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'z') {
+ pr_info("memory check failed");
+ goto error_out;
+ }
+ }
+
+ sem_post(childsync);
+ return 0;
+
+error_out:
+ sem_post(childsync);
+ return -1;
+}
+
+static int child1(pid_t pid, sem_t *sync, sem_t *childsync)
+{
+ int group_id = 10, ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = group_id,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ goto error_out;
+ }
+ memset((void *)alloc_info.addr, 'z', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'z',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("unshare u2k");
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ goto error_out;
+ }
+
+	/*
+	 * Accessing the memory again after unshare crashes the kernel; only
+	 * run this during manual testing.
+	 */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("recheck karea");
+ karea_info.mod = KAREA_SET;
+ karea_info.value = 'a';
+ karea_info.size = 1;
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+ pr_info("after karea set, the value is '%c%c'",
+ ((char *)alloc_info.addr)[0], ((char *)alloc_info.addr)[3]);
+#endif
+
+ int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, &alloc_info, sizeof(alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+
+	return (!WIFEXITED(status) || WEXITSTATUS(status)) ? -1 : 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+}
+
+static int grandchild2(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf1 = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf1.alloc_info;
+ ret = msgrcv(msgid, &msgbuf1, sizeof(*alloc_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf2 = {0};
+ struct sp_make_share_info *u2k_info = &msgbuf2.share_info;
+ ret = msgrcv(msgid, &msgbuf2, sizeof(*u2k_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != alloc_info->spg_id) {
+ pr_info("unexpected group_id: %d", ret);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+	for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'k') {
+ pr_info("memory check failed");
+ goto error_out;
+ }
+ }
+
+ pr_info("unshare u2k");
+ ret = ioctl_unshare(dev_fd, u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ goto error_out;
+ }
+
+	/*
+	 * Accessing the memory again after unshare crashes the kernel; only
+	 * run this during manual testing.
+	 */
+#if 0
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'k',
+ .addr = u2k_info->addr,
+ .size = u2k_info->size,
+ };
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("recheck karea");
+ karea_info.mod = KAREA_SET;
+ karea_info.value = 'a';
+ karea_info.size = 1;
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+ pr_info("after karea set, the value is '%c%c'",
+ ((char *)alloc_info->addr)[0], ((char *)alloc_info->addr)[3]);
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(childsync);
+ return 0;
+
+error_out:
+ sem_post(childsync);
+ return -1;
+}
+
+static int child2(pid_t pid, sem_t *sync, sem_t *childsync)
+{
+ int group_id = 10, ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = group_id,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ goto error_out;
+ }
+ memset((void *)alloc_info.addr, 'k', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'k',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, &alloc_info, sizeof(alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ memcpy(&msgbuf.share_info, &u2k_info, sizeof(u2k_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(u2k_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+
+	return (!WIFEXITED(status) || WEXITSTATUS(status)) ? -1 : 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+}
+
+static int per_test_init(int i, sem_t **sync, sem_t **childsync)
+{
+ char buf[100];
+ sprintf(buf, "/test_u2k%d", i);
+ *sync = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (*sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+
+ sprintf(buf, "/test_u2k_child%d", i);
+ *childsync = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (*childsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+
+ return 0;
+}
+
+static int testcase(int i, int (*child)(pid_t, sem_t *, sem_t *), int (*grandchild)(sem_t *, sem_t *))
+{
+ sem_t *sync, *childsync;
+ if (per_test_init(i, &sync, &childsync))
+ return -1;
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed i=%d", i);
+ return -1;
+ } else if (pid == 0) {
+ exit(grandchild(sync, childsync));
+ }
+
+ return child(pid, sync, childsync);
+}
+
+static struct {
+ int (*child)(pid_t, sem_t *, sem_t *);
+ int (*grandchild)(sem_t *, sem_t *);
+} functions[] = {
+ {
+ .child = child1,
+ .grandchild = grandchild1,
+ },
+ {
+ .child = child2,
+ .grandchild = grandchild2,
+ },
+};
+
+static int testcase1(void) { return testcase(0, functions[0].child, functions[0].grandchild); }
+static int testcase2(void) { return testcase(1, functions[1].child, functions[1].grandchild); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Userspace process A joins a group, allocates and writes memory N, and shares it to the kernel via u2k; the kernel module reads N successfully, and process B reads N successfully after joining the group. After A stops sharing N, the kernel module fails to read N while B still reads it successfully. A then frees N.")
+	TESTCASE_CHILD(testcase2, "Userspace process A joins a group, allocates and writes memory N, and shares it to the kernel via u2k; the kernel module reads N successfully, and process B reads N successfully after joining the group. After A stops sharing N, the kernel module fails to read N while B still reads it successfully. A then frees N.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/generate_list.sh b/tools/testing/sharepool/testcase/generate_list.sh
new file mode 100755
index 000000000000..caba42837a54
--- /dev/null
+++ b/tools/testing/sharepool/testcase/generate_list.sh
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+# In the top-level directory:
+	# collect the list files from the subdirectories
+	# collect the TESTCASE comments from the .c files in each directory
+	# and combine them into a single list file
+name=tc_list
+collect_comments()
+{
+ curdir=$1
+ echo $curdir
+
+ cd $curdir
+ rm -rf $name
+
+ subdirs=`ls -d */`
+
+# echo "" >> $name
+ echo "===============================" >> $name
+ echo $curdir >> $name
+ echo "===============================" >> $name
+
+ for dir in $subdirs
+ do
+ dir=${dir%*/}
+ local tmp_dir=$dir
+ collect_comments $dir
+ cat $tmp_dir/$name >> $name
+ echo "" >> $name
+ done
+
+	cfiles=`ls | grep '\.c$'`
+ echo $cfiles
+
+ for cfile in $cfiles
+ do
+ echo $cfile >> $name
+ grep "TESTCASE" $cfile -r >> $name
+ echo "" >> $name
+ done
+
+ cd ..
+ echo "back to `pwd`"
+}
+
+collect_comments `pwd`
diff --git a/tools/testing/sharepool/testcase/performance_test/Makefile b/tools/testing/sharepool/testcase/performance_test/Makefile
new file mode 100644
index 000000000000..258bd6582414
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/Makefile
@@ -0,0 +1,17 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+testcases:=test_perf_sp_alloc \
+ test_perf_sp_k2u \
+ test_perf_sp_add_group \
+ test_perf_process_kill
+
+default: $(testcases)
+
+install: $(testcases) performance_test.sh
+ mkdir -p $(TOOL_BIN_DIR)/performance_test
+ cp $(testcases) $(TOOL_BIN_DIR)/performance_test
+ cp performance_test.sh $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/performance_test/performance_test.sh b/tools/testing/sharepool/testcase/performance_test/performance_test.sh
new file mode 100755
index 000000000000..d3944d7431e7
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/performance_test.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+./performance_test/test_perf_sp_alloc | grep result | awk -F ']' '{print $2}'
+./performance_test/test_perf_sp_k2u | grep result | awk -F ']' '{print $2}'
+./performance_test/test_perf_sp_add_group | grep result | awk -F ']' '{print $2}'
diff --git a/tools/testing/sharepool/testcase/performance_test/test_perf_process_kill.c b/tools/testing/sharepool/testcase/performance_test/test_perf_process_kill.c
new file mode 100644
index 000000000000..e6abbf8c2d1d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/test_perf_process_kill.c
@@ -0,0 +1,174 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 18 08:58:29 2021
+ */
+
+#include <stdlib.h>
+#include <time.h>
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define ALLOC_SIZE (6UL * 1024UL * 1024UL * 1024UL)
+#define CHILD_PROCESS_NUM 12
+
+static int semid;
+
+struct sp_alloc_memory_type {
+ unsigned long normal;
+ unsigned long huge;
+};
+
+struct sp_alloc_memory_type test_memory_types[] = {
+ {
+ .normal = 0,
+ .huge = ALLOC_SIZE,
+ },
+ {
+ .normal = ALLOC_SIZE * (1.0 / 6),
+ .huge = ALLOC_SIZE * (5.0 / 6),
+ },
+ {
+ .normal = ALLOC_SIZE * (3.0 / 6),
+ .huge = ALLOC_SIZE * (3.0 / 6),
+ },
+ {
+ .normal = ALLOC_SIZE * (5.0 / 6),
+ .huge = ALLOC_SIZE * (1.0 / 6),
+ },
+ {
+ .normal = ALLOC_SIZE,
+ .huge = 0,
+ },
+};
+
+static int testcase1_child(int test_index)
+{
+ int ret = 0;
+ int spg_id = test_index + 1;
+ unsigned long normal = test_memory_types[test_index].normal;
+ unsigned long huge = test_memory_types[test_index].huge;
+ time_t kill_start, now;
+
+ sem_set_value(semid, 0);
+
+	// create a group and allocate 6G of memory; the variable is the small-page ratio
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add group failed.");
+ return ret;
+ }
+
+	// allocate the memory described by this test-memory-type
+	pr_info("test-memory-type %d test started, allocating memory...", test_index);
+ unsigned long addr = 0;
+ if (normal != 0) {
+ addr = wrap_sp_alloc(spg_id, normal, 0);
+ if (addr == 0) {
+ pr_info("alloc normal memory failed.");
+ return -1;
+ }
+ }
+ if (huge != 0) {
+ addr = wrap_sp_alloc(spg_id, huge, SP_HUGEPAGE_ONLY);
+ if (addr == 0) {
+ pr_info("alloc huge memory failed.");
+ return -1;
+ }
+ }
+ pr_info("child %d alloc memory %lx normal, %lx huge success.", test_index, normal, huge);
+
+	// add the 12 child processes to the group
+ pid_t child[CHILD_PROCESS_NUM];
+ sem_check_zero(semid);
+ for (int i = 0; i < CHILD_PROCESS_NUM; i++) {
+ int pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ sem_inc_by_one(semid);
+			while (1)
+				;	/* spin until killed by the parent */
+ } else {
+ child[i] = pid;
+ }
+ }
+ pr_info("fork all child processes success.");
+
+ // ----- DO -----
+
+ // ----- END -----
+
+	// kill all child processes concurrently
+ sem_dec_by_val(semid, CHILD_PROCESS_NUM);
+ pr_info("all child processes add group success.");
+	time(&now);
+	pr_info("time before kill signal sent is: %ld", (long)now);
+	for (int i = 0; i < CHILD_PROCESS_NUM; i++)
+		kill(child[i], SIGKILL);
+	time(&kill_start);
+	pr_info("time after kill signal sent is: %ld", (long)kill_start);
+
+	// record the time taken for each waitpid to complete
+	for (int i = 0; i < CHILD_PROCESS_NUM; i++) {
+		int status;
+		if (waitpid(child[i], &status, 0) < 0) {
+			pr_info("waitpid failed.");
+			ret = -1;
+		}
+
+		time(&now);
+		pr_info("time when child %d exits is %ld, time taken is %ld seconds.", i, (long)now, (long)(now - kill_start));
+
+		/* the children are SIGKILLed, so they must terminate by that signal */
+		if (!WIFSIGNALED(status) || WTERMSIG(status) != SIGKILL) {
+			pr_info("child%d test failed, %d", i, status);
+			ret = -1;
+		}
+	}
+ sem_check_zero(semid);
+	time(&now);
+	pr_info("time when all child processes exit is %ld, time taken is %ld seconds.", (long)now, (long)(now - kill_start));
+
+	return ret < 0 ? -1 : 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ semid = sem_create(1234, "process_sync");
+
+ for (int i = 0; i < sizeof(test_memory_types) / sizeof(test_memory_types[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork testcase1 child failed.");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_child(i));
+ }
+		int status;
+		waitpid(pid, &status, 0);
+		if (!WIFEXITED(status) || WEXITSTATUS(status))
+			ret = -1;
+		pr_info("test-memory-type %d test finished.", i);
+ }
+
+ sem_close(semid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, true)
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/performance_test/test_perf_sp_add_group.c b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_add_group.c
new file mode 100644
index 000000000000..c2882be7b0e1
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_add_group.c
@@ -0,0 +1,375 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Description:
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Mar 13 15:17:32 2021
+ */
+#define _GNU_SOURCE
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/sem.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <sys/sysinfo.h>
+#include <sched.h> /* sched_setaffinity */
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+#define NSEC2SEC 1000000000
+
+static int cpu_num;
+static int thread_sync;
+
+struct test_perf {
+ int (*perf_start)(void *);
+ int (*perf_point)(void *);
+ int (*perf_end)(void *);
+
+ int count;
+ void *arg;
+ char *name;
+};
+
+struct test_params {
+ int spg_id;
+ unsigned int process_num;
+	/* allocating tiny sizes is not representative of production, where usage starts at GBs */
+ unsigned long mem_size_normal; /* unit: MB */
+ unsigned long mem_size_huge; /* unit: MB */
+};
+
+static int test_perf_child(struct test_perf *test_perf)
+{
+ int ret = 0;
+ long dur;
+ struct timespec ts_start, ts_end;
+
+ if (!test_perf->perf_point) {
+ pr_info("you must supply a perf_point routine");
+ return -1;
+ }
+
+ if (test_perf->perf_start) {
+ if (test_perf->perf_start(test_perf->arg)) {
+ pr_info("testcase init failed");
+ ret = -1;
+ goto end;
+ }
+ }
+
+ pr_info(">> testcase %s start <<", test_perf->name);
+ clock_gettime(CLOCK_MONOTONIC, &ts_start);
+ if (test_perf->perf_point(test_perf->arg)) {
+ pr_info("testcase point failed");
+ ret = -1;
+ goto end;
+ }
+ clock_gettime(CLOCK_MONOTONIC, &ts_end);
+
+ dur = (ts_end.tv_sec - ts_start.tv_sec) * NSEC2SEC + (ts_end.tv_nsec - ts_start.tv_nsec);
+
+end:
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+
+ if (!ret) {
+ pr_info("%50s result: %10ld", test_perf->name, dur);
+ return 0;
+ } else {
+ return ret;
+ }
+}
+
+static int test_perf_routing(struct test_perf *test_perf)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(test_perf_child(test_perf));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase %s failed", test_perf->name);
+ return -1;
+ }
+
+ return 0;
+}
+
+#define MAX_CHILD_NR 100
+static pid_t childs[MAX_CHILD_NR];
+
+static int sp_add_group_start(void *arg)
+{
+ cpu_set_t mask;
+ struct test_params *params = arg;
+ struct sp_alloc_info alloc_info;
+ unsigned long i, times;
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("parent process sched_setaffinity failed, errno: %d", errno);
+ return -1;
+ }
+ cpu_num = get_nprocs();
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = params->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ /* allocation begins */
+ alloc_info.spg_id = params->spg_id;
+ alloc_info.flag = 0;
+ alloc_info.size = 16 * PAGE_SIZE;
+ times = params->mem_size_normal * 16; /* from MB to 16 pages */
+ for (i = 0; i < times; i++) {
+ if (ioctl_alloc(dev_fd, &alloc_info)) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ alloc_info.flag = SP_HUGEPAGE;
+ alloc_info.size = 2 * PMD_SIZE;
+ times = params->mem_size_huge / 4; /* from MB to 2 hugepages */
+ for (i = 0; i < times; i++) {
+ if (ioctl_alloc(dev_fd, &alloc_info)) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+ /* end of allocation */
+
+ int semid = semget(0xabcd9116, 1, IPC_CREAT | 0644);
+ if (semid < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ return -1;
+ }
+ int ret = semctl(semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto sem_remove;
+ }
+
+ for (i = 0; i < params->process_num; i++) {
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = 1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+ CPU_ZERO(&mask);
+ CPU_SET(i % cpu_num, &mask);
+			if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+				pr_info("child process %lu sched_setaffinity failed, errno: %d", i, errno);
+				_exit(1);
+			}
+
+			sleep(3600);
+			_exit(0);	/* never fall through into the parent's fork loop */
+		}
+
+ childs[i] = pid;
+ }
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -params->process_num,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+sem_remove:
+	if (semctl(semid, 0, IPC_RMID) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return 0;
+}
+
+static int sp_add_group_point(void *arg)
+{
+ struct test_params *params = arg;
+ struct sp_add_group_info ag_info = {
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = params->spg_id,
+ };
+
+ for (int i = 0; i < params->process_num; i++) {
+ ag_info.pid = childs[i];
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+void *thread_add_group(void *arg)
+{
+ struct sp_add_group_info *ag_info = arg;
+ ag_info->prot = PROT_READ | PROT_WRITE;
+
+ __sync_fetch_and_add(&thread_sync, 1);
+ __sync_synchronize();
+ while (1) {
+ if (thread_sync == 0) {
+ break;
+ }
+ }
+
+ if (ioctl_add_group(dev_fd, ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ pthread_exit((void *)1);
+ }
+ pthread_exit((void *)0);
+}
+
+static int sp_add_group_concurrent_point(void *arg)
+{
+ struct test_params *params = arg;
+ struct sp_add_group_info ag_info[MAX_CHILD_NR];
+ pthread_t tid[MAX_CHILD_NR];
+ cpu_set_t mask;
+ int i, ret;
+ void *tret;
+
+ thread_sync = -params->process_num;
+
+ for (i = 0; i < params->process_num; i++) {
+ ag_info[i].spg_id = params->spg_id;
+ ag_info[i].pid = childs[i];
+ ret = pthread_create(tid + i, NULL, thread_add_group, ag_info + i);
+ if (ret != 0) {
+ pr_info("create thread %d error\n", i);
+ return -1;
+ }
+ CPU_ZERO(&mask);
+ CPU_SET(i % cpu_num, &mask);
+ if (pthread_setaffinity_np(tid[i], sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("set thread %d affinity failed, errno: %d", i, errno);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < params->process_num; i++) {
+ ret = pthread_join(tid[i], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", i);
+ return -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", i);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int sp_add_group_end(void *arg)
+{
+ for (int i = 0; i < MAX_CHILD_NR && childs[i]; i++) {
+ kill(childs[i], SIGINT);
+ }
+ return 0;
+}
+
+static struct test_params params[] = {
+ {
+ .spg_id = 1,
+ .process_num = 30,
+ .mem_size_normal = 32,
+ .mem_size_huge = 32,
+ },
+ {
+ .spg_id = 1,
+ .process_num = 30,
+ .mem_size_normal = 1024 * 3.5,
+ .mem_size_huge = 1024 * 5,
+ },
+};
+
+static struct test_perf testcases[] = {
+ {
+ .perf_start = sp_add_group_start,
+ .perf_point = sp_add_group_point,
+ .perf_end = sp_add_group_end,
+ .name = "sp_add_group_P30_N0G_H0G",
+ .arg = ¶ms[0],
+ },
+ {
+ .perf_start = sp_add_group_start,
+ .perf_point = sp_add_group_concurrent_point,
+ .perf_end = sp_add_group_end,
+ .name = "sp_add_group_C_P4_N1G_H1G",
+ .arg = ¶ms[0],
+ },
+ {
+ .perf_start = sp_add_group_start,
+ .perf_point = sp_add_group_point,
+ .perf_end = sp_add_group_end,
+ .name = "sp_add_group_P30_N3.5G_H5G",
+ .arg = ¶ms[1],
+ },
+ {
+ .perf_start = sp_add_group_start,
+ .perf_point = sp_add_group_concurrent_point,
+ .perf_end = sp_add_group_end,
+ .name = "sp_add_group_C_P30_N3.5G_H5G",
+ .arg = ¶ms[1],
+ },
+};
+
+#define STRLENGTH 500
+static char filename[STRLENGTH];
+
+int main()
+{
+ int ret = 0;
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ int passed = 0, failed = 0;
+
+ for (int i = 0; i < sizeof(testcases) / sizeof(testcases[0]); i++) {
+ ret = test_perf_routing(&testcases[i]);
+		ret == 0 ? passed++ : failed++;
+ }
+
+ close_device(dev_fd);
+
+ pr_info("----------------------------");
+	printf("%s All %d testcases finished, passing: %d, failing: %d ", extract_filename(filename, __FILE__), passed + failed, passed, failed);
+	printf("-------------------------\n");
+
+	return failed == 0 ? 0 : -1;
+}
diff --git a/tools/testing/sharepool/testcase/performance_test/test_perf_sp_alloc.c b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_alloc.c
new file mode 100644
index 000000000000..1f61dd47719b
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_alloc.c
@@ -0,0 +1,618 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 18 08:58:29 2021
+ */
+
+#define _GNU_SOURCE
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/sem.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <sys/sysinfo.h>
+#include <sched.h> /* sched_setaffinity */
+
+#include "sharepool_lib.h"
+
+#define NSEC2SEC 1000000000
+
+
+static int nr_child_process = 8;
+
+struct test_perf {
+ int (*perf_start)(void *);
+ int (*perf_point)(void *);
+ int (*perf_end)(void *);
+
+ int count;
+ void *arg;
+ char *name;
+};
+
+static int test_perf_child(struct test_perf *test_perf)
+{
+ long sum, max, min, dur;
+ struct timespec ts_start, ts_end;
+
+ sum = max = 0;
+ min = ((unsigned long)-1) >> 1;
+
+ if (!test_perf->perf_point) {
+ pr_info("you must supply a perf_point routine");
+ return -1;
+ }
+
+ if (test_perf->perf_start) {
+ if (test_perf->perf_start(test_perf->arg)) {
+ pr_info("testcase init failed");
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+ return -1;
+ }
+ }
+
+ pr_info(">> testcase %s start <<", test_perf->name);
+ for (int i = 0; i < test_perf->count; i++) {
+ clock_gettime(CLOCK_MONOTONIC, &ts_start);
+ if (test_perf->perf_point(test_perf->arg)) {
+ pr_info("testcase point failed, i: %d", i);
+ return -1;
+ }
+ clock_gettime(CLOCK_MONOTONIC, &ts_end);
+
+ dur = (ts_end.tv_sec - ts_start.tv_sec) * NSEC2SEC + (ts_end.tv_nsec - ts_start.tv_nsec);
+ sum += dur;
+ max = max > dur ? max : dur;
+ min = min < dur ? min : dur;
+ }
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+
+ pr_info("%50s result: avg: %10ld, max: %10ld, min: %10ld", test_perf->name, sum / test_perf->count, max, min);
+
+ return 0;
+}
+
+static int test_perf_routing(struct test_perf *test_perf)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(test_perf_child(test_perf));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase %s failed", test_perf->name);
+ return -1;
+ } else {
+ pr_info("testcase %s success", test_perf->name);
+ }
+
+ return 0;
+}
+
+static int sp_alloc_start(void *arg)
+{
+ struct sp_alloc_info *alloc_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = alloc_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int sp_alloc_point(void *arg)
+{
+ struct sp_alloc_info *alloc_info = arg;
+
+ if (ioctl_alloc(dev_fd, alloc_info)) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int sp_alloc_and_free_point(void *arg)
+{
+ struct sp_alloc_info *alloc_info = arg;
+
+ if (ioctl_alloc(dev_fd, alloc_info)) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ if (ioctl_free(dev_fd, alloc_info)) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+#define MAX_CHILD_NR 100
+static pid_t childs[MAX_CHILD_NR];
+
+/*
+ * Create N processes and add them to the group concurrently; afterwards only
+ * the parent process allocates memory. The child processes are expected to be
+ * slower due to the overhead of building their page tables
+ */
+static int sp_alloc_mult_process_start(void *arg)
+{
+ cpu_set_t mask;
+ struct sp_alloc_info *alloc_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = alloc_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("parent process sched_setaffinity failed, errno: %d", errno);
+ return -1;
+ }
+
+ for (int i = 0; i < nr_child_process; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ CPU_ZERO(&mask);
+ int cpu_count = get_nprocs();
+ CPU_SET((i + 1) % cpu_count, &mask);
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("child process %d sched_setaffinity failed, errno: %d", i, errno);
+ exit(-1);
+ }
+
+ while (1) {};
+ }
+
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+/* create 7 child processes concurrently */
+static int sp_alloc_mult_process_start_7(void *arg)
+{
+	nr_child_process = 7;
+	return sp_alloc_mult_process_start(arg);
+}
+
+/* create 15 child processes concurrently */
+static int sp_alloc_mult_process_start_15(void *arg)
+{
+	nr_child_process = 15;
+	return sp_alloc_mult_process_start(arg);
+}
+
+static int sp_alloc_end(void *arg)
+{
+ for (int i = 0; i < MAX_CHILD_NR && childs[i]; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ childs[i] = 0;
+ }
+
+ return 0;
+}
+
+// Create N processes and add them to the group concurrently; the child processes also allocate memory
+static int sp_alloc_mult_alloc_start(void *arg)
+{
+ cpu_set_t mask;
+ struct sp_alloc_info *alloc_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = alloc_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+		pr_info("parent process sched_setaffinity failed, errno: %d", errno);
+ return -1;
+ }
+
+ int semid = semget(0xabcd996, 1, IPC_CREAT | 0644);
+ if (semid < 0) {
+		pr_info("open System V semaphore failed: %s", strerror(errno));
+ return -1;
+ }
+ int ret = semctl(semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto sem_remove;
+ }
+
+ for (int i = 0; i < nr_child_process; i++) {
+ struct timespec delay;
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+ CPU_ZERO(&mask);
+ int cpu_count = get_nprocs();
+ CPU_SET((i + 1) % cpu_count, &mask);
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+				pr_info("child process %d sched_setaffinity failed, errno: %d", i, errno);
+				exit(-1);
+ }
+
+ delay.tv_sec = 0;
+ delay.tv_nsec = 3000000; /* 3ms */
+
+ while (1) {
+ sp_alloc_and_free_point(alloc_info);
+ nanosleep(&delay, NULL);
+ }
+ }
+
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = nr_child_process,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+sem_remove:
+ if (semctl(semid, IPC_RMID, NULL) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return 0;
+}
+
+/* create 7 child processes concurrently */
+static int sp_alloc_mult_alloc_start_7(void *arg)
+{
+	nr_child_process = 7;
+	return sp_alloc_mult_alloc_start(arg);
+}
+
+/* create 15 child processes concurrently */
+static int sp_alloc_mult_alloc_start_15(void *arg)
+{
+	nr_child_process = 15;
+	return sp_alloc_mult_alloc_start(arg);
+}
+
+static struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .size = 2 * PAGE_SIZE, // 8K
+ .spg_id = 1,
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .size = 2 * PMD_SIZE, // 4M
+ .spg_id = 1,
+ },
+ {
+ .flag = 0,
+ .size = 1024 * PAGE_SIZE, // 4M
+ .spg_id = 1,
+ },
+ {
+ .flag = 0,
+ .size = 512 * PAGE_SIZE, // 2M
+ .spg_id = 1,
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .size = PMD_SIZE, // 2M
+ .spg_id = 1,
+ },
+};
+
+static struct test_perf testcases[] = {
+	/* single process */
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+		/*
+		 * If only sp_alloc is done without sp_free, hugepage allocation
+		 * becomes slower and slower as memory consumption grows
+		 */
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+		.count = 1000, // 4G of hugepages, expected to be slow
+ .name = "sp_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_1024_pages",
+ .arg = &alloc_infos[2],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_1024_pages",
+ .arg = &alloc_infos[2],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_512_pages",
+ .arg = &alloc_infos[3],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_512_pages",
+ .arg = &alloc_infos[3],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+		.count = 1000, // 2G of hugepages, expected to be slow
+ .name = "sp_alloc_1_hugepage",
+ .arg = &alloc_infos[4],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_1_hugepage",
+ .arg = &alloc_infos[4],
+ },
+	/* parent allocates; children only build page tables */
+	/* 8 processes in total */
+ {
+ .perf_start = sp_alloc_mult_process_start_7,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_7_proc_populate_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_7,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_7_proc_populate_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_7,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_7_proc_populate_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_7,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_7_proc_populate_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+	/* 16 processes */
+ {
+ .perf_start = sp_alloc_mult_process_start_15,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_15_proc_populate_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_15,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_15_proc_populate_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_15,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_15_proc_populate_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_15,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_15_proc_populate_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+	/* both parent and children join the group and allocate memory */
+	/* 8 processes */
+ {
+ .perf_start = sp_alloc_mult_alloc_start_7,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_7_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_7,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_7_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_7,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_7_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_7,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_7_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+	/* 16 processes */
+ {
+ .perf_start = sp_alloc_mult_alloc_start_15,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_15_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_15,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_15_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_15,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_15_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_15,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_15_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+};
+
+
+#define STRLENGTH 500
+static char filename[STRLENGTH];
+
+int main()
+{
+ int ret = 0;
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ int passed = 0, failed = 0;
+
+ for (int i = 0; i < sizeof(testcases) / sizeof(testcases[0]); i++) {
+ ret = test_perf_routing(&testcases[i]);
+		ret == 0 ? passed++ : failed++;
+ }
+
+ close_device(dev_fd);
+
+ pr_info("----------------------------");
+	printf("%s All %d testcases finished, passing: %d, failing: %d ", extract_filename(filename, __FILE__), passed + failed, passed, failed);
+	printf("-------------------------\n");
+
+	return failed == 0 ? 0 : -1;
+}
diff --git a/tools/testing/sharepool/testcase/performance_test/test_perf_sp_k2u.c b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_k2u.c
new file mode 100644
index 000000000000..21b7ae4d97e0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_k2u.c
@@ -0,0 +1,860 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 18 08:58:29 2021
+ */
+
+#define _GNU_SOURCE
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/sem.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <sched.h> /* sched_setaffinity */
+#include <sys/sysinfo.h>
+
+#include "sharepool_lib.h"
+
+#define NSEC2SEC 1000000000
+
+
+static int nr_child_process = 8;
+static unsigned long vm_addr;
+
+struct test_perf {
+ int (*perf_start)(void *);
+ int (*perf_point)(void *);
+ int (*perf_end)(void *);
+
+ int count;
+ void *arg;
+ char *name;
+};
+
+static int test_perf_child(struct test_perf *test_perf)
+{
+ long sum, max, min, dur;
+ struct timespec ts_start, ts_end;
+
+ sum = max = 0;
+ min = ((unsigned long)-1) >> 1;
+
+ if (!test_perf->perf_point) {
+ pr_info("you must supply a perf_point routine");
+ return -1;
+ }
+
+ if (test_perf->perf_start) {
+ if (test_perf->perf_start(test_perf->arg)) {
+ pr_info("testcase init failed");
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+ return -1;
+ }
+ }
+
+ //pr_info(">> testcase %s start <<", test_perf->name);
+ for (int i = 0; i < test_perf->count; i++) {
+ pr_info(">> testcase %s %dth time begins, %d times left. <<",
+ test_perf->name, i + 1, test_perf->count - (i + 1));
+ clock_gettime(CLOCK_MONOTONIC, &ts_start);
+ if (test_perf->perf_point(test_perf->arg)) {
+ pr_info("testcase %s %dth point failed.", test_perf->name, i + 1);
+ return -1;
+ }
+ clock_gettime(CLOCK_MONOTONIC, &ts_end);
+
+ dur = (ts_end.tv_sec - ts_start.tv_sec) * NSEC2SEC + (ts_end.tv_nsec - ts_start.tv_nsec);
+ sum += dur;
+ max = max > dur ? max : dur;
+ min = min < dur ? min : dur;
+ }
+
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+
+ pr_info("%50s result: avg: %10ld, max: %10ld, min: %10ld", test_perf->name, sum / test_perf->count, max, min);
+
+ return 0;
+}
+
+static int test_perf_routing(struct test_perf *test_perf)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(test_perf_child(test_perf));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ return -1;
+ }
+
+ return 0;
+}
+
+static int vmalloc_and_store(unsigned long size)
+{
+ struct vmalloc_info vmalloc_info = {
+ .size = size,
+ };
+
+ if (ioctl_vmalloc(dev_fd, &vmalloc_info)) {
+ pr_info("vmalloc small page failed, errno: %d", errno);
+ return -1;
+ } else {
+ pr_info("vmalloc success: %lx", vmalloc_info.addr);
+ pr_info("vm_addr before: %lx", vm_addr);
+ vm_addr = vmalloc_info.addr;
+ pr_info("vm_addr after: %lx", vm_addr);
+ }
+
+ return 0;
+}
+
+static int vmalloc_hugepage_and_store(unsigned long size)
+{
+ struct vmalloc_info vmalloc_info = {
+ .size = size,
+ };
+
+ if (ioctl_vmalloc_hugepage(dev_fd, &vmalloc_info)) {
+	if (ioctl_vmalloc_hugepage(dev_fd, &vmalloc_info)) {
+		pr_info("vmalloc hugepage failed, errno: %d", errno);
+ } else {
+ pr_info("vmalloc success: %lx", vmalloc_info.addr);
+ pr_info("vm_addr before: %lx", vm_addr);
+ vm_addr = vmalloc_info.addr;
+ pr_info("vm_addr after: %lx", vm_addr);
+ }
+
+ return 0;
+}
+
+static int vfree(unsigned long size)
+{
+ struct vmalloc_info vmalloc_info = {
+ .addr = vm_addr,
+ .size = size,
+ };
+ pr_info("gonna vfree address: %lx", vm_addr);
+ if (ioctl_vfree(dev_fd, &vmalloc_info)) {
+ pr_info("vfree failed, errno: %d", errno);
+ return -1;
+ } else {
+ pr_info("vfree success.");
+ return 0;
+ }
+}
+
+static int sp_k2u_start(void *arg)
+{
+ pid_t pid = getpid();
+
+ struct sp_make_share_info *k2u_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = k2u_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ vmalloc_and_store(k2u_info->size);
+
+ k2u_info->pid = pid;
+ k2u_info->kva = vm_addr;
+
+ return 0;
+}
+
+static int sp_k2u_huge_start(void *arg)
+{
+ pid_t pid = getpid();
+
+ struct sp_make_share_info *k2u_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = k2u_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ vmalloc_hugepage_and_store(k2u_info->size);
+
+ k2u_info->pid = pid;
+ k2u_info->kva = vm_addr;
+ return 0;
+}
+
+/*
+ * This cannot fully reclaim the uva shared by k2u, so the subsequent vfree
+ * reports an error. Therefore, if this function is not called, the testcase
+ * is tentatively expected to leak memory
+ */
+static int sp_k2u_end(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ if (vfree(k2u_info->size) < 0)
+ return -1;
+
+ return 0;
+}
+
+static int sp_k2u_point(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ if (ioctl_k2u(dev_fd, k2u_info)) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int sp_k2u_and_unshare_point(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ if (ioctl_k2u(dev_fd, k2u_info)) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ return -1;
+ }
+
+ if (ioctl_unshare(dev_fd, k2u_info)) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+#define MAX_CHILD_NR 100
+static pid_t childs[MAX_CHILD_NR];
+
+static int sp_k2u_mult_end(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+ for (int i = 0; i < MAX_CHILD_NR && childs[i]; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ childs[i] = 0;
+ }
+
+ if (vfree(k2u_info->size) < 0)
+ return -1;
+
+ return 0;
+}
+
+/* Create N processes and add them to the group concurrently; the children also spin in a busy loop */
+static int sp_k2u_1_vs_mult_start(void *arg)
+{
+ cpu_set_t mask;
+ pid_t pid = getpid();
+ struct sp_make_share_info *k2u_info = arg;
+ k2u_info->pid = pid;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = k2u_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ } else {
+ pr_info("test parent process %d add group %d success.", getpid(), ag_info.spg_id);
+ }
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("parent process sched_setaffinity failed, errno: %d", errno);
+ return -1;
+ }
+
+ int cpu_count = get_nprocs();
+ pr_info("cpu count is %d", cpu_count);
+ int i;
+ for (i = 0; i < nr_child_process; i++) {
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ CPU_ZERO(&mask);
+ CPU_SET((i + 1) % cpu_count, &mask);
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("set %dth child process %d sched_setaffinity failed, errno: %d", i, getpid(), errno);
+ exit(-1);
+ } else {
+ pr_info("set %dth child process %d sched_setaffinity success", i, getpid());
+ }
+ while (1) {};
+ exit(0);
+ }
+
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("add %dth child process %d to group failed, errno: %d", i, pid, errno);
+ return -1;
+		} else {
+			pr_info("add %dth child process %d to group success", i, pid);
+		}
+ }
+
+ return 0;
+/*
+error:
+	//reap the child processes; if not reaped here, they must be reaped in the end routine
+ while (--i >= 0) {
+ kill(childs[i], SIGKILL);
+ int status;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child process %d ended unexpected.", childs[i]);
+ }
+ childs[i] = 0;
+ }
+ return ret;
+*/
+}
+
+/* create 7 sleeping child processes concurrently, normal pages */
+static int sp_k2u_1_vs_mult_start_7(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 7;
+ return sp_k2u_1_vs_mult_start(arg);
+}
+
+/* create 7 sleeping child processes concurrently, hugepages */
+static int sp_k2u_huge_1_vs_mult_start_7(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_hugepage_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 7;
+ return sp_k2u_1_vs_mult_start(arg);
+}
+
+/* create 15 sleeping child processes concurrently, normal pages */
+static int sp_k2u_1_vs_mult_start_15(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 15;
+ return sp_k2u_1_vs_mult_start(arg);
+}
+
+/* create 15 sleeping child processes concurrently, hugepages */
+static int sp_k2u_huge_1_vs_mult_start_15(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_hugepage_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 15;
+ return sp_k2u_1_vs_mult_start(arg);
+}
+/*
+ * Create N processes and add them to the group concurrently; afterwards the
+ * parent and child processes perform sp_k2u and unshare (if any) at the same time
+ */
+static int sp_k2u_mult_start(void *arg)
+{
+ cpu_set_t mask;
+ pid_t pid = getpid();
+ struct sp_make_share_info *k2u_info = arg;
+ k2u_info->pid = pid;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = k2u_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ int cpu_count = get_nprocs();
+
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("parent process sched_setaffinity failed, errno: %d", errno);
+ return -1;
+ }
+
+ int semid = semget(0xabcd996, 1, IPC_CREAT | 0644);
+ if (semid < 0) {
+		pr_info("open System V semaphore failed: %s", strerror(errno));
+ return -1;
+ }
+ int ret = semctl(semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto sem_remove;
+ }
+
+ for (int i = 0; i < nr_child_process; i++) {
+ struct timespec delay;
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+ CPU_ZERO(&mask);
+ CPU_SET((i + 1) % cpu_count, &mask);
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("child process %d sched_setaffinity failed, errno: %d", i, errno);
+				exit(-1);
+ }
+
+ delay.tv_sec = 0;
+ delay.tv_nsec = 300000; /* 300us */
+
+ while (1) {
+ sp_k2u_and_unshare_point(k2u_info);
+ nanosleep(&delay, NULL);
+ }
+ }
+
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = nr_child_process,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+sem_remove:
+ if (semctl(semid, IPC_RMID, NULL) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return 0;
+}
+
+/* create 7 child processes concurrently, normal pages */
+static int sp_k2u_mult_start_7(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 7;
+ return sp_k2u_mult_start(arg);
+}
+
+/* create 7 child processes concurrently, hugepages */
+static int sp_k2u_huge_mult_start_7(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_hugepage_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 7;
+ return sp_k2u_mult_start(arg);
+}
+
+/* create 15 child processes concurrently, normal pages */
+static int sp_k2u_mult_start_15(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 15;
+ return sp_k2u_mult_start(arg);
+}
+
+/* create 15 child processes concurrently, hugepages */
+static int sp_k2u_huge_mult_start_15(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_hugepage_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 15;
+ return sp_k2u_mult_start(arg);
+}
+
+static struct sp_make_share_info k2u_infos[] = {
+	/* one array element per testcase, to avoid potential interference from testcases modifying kva and pid */
+	/* single process */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE, /* =vmalloc size */
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+	/* one process does k2u while the other group members sleep */
+	/* 8 processes */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+	/* 16 processes */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+	/* one process does k2u and the other group members also do k2u */
+	/* 8 processes */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+	/* 16 processes */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+};
+
+static struct test_perf testcases[] = {
+	/* single process */
+ {
+ .perf_start = sp_k2u_start,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_end,
+ .count = 1000,
+ .name = "sp_k2u_2_pages",
+ .arg = &k2u_infos[0],
+ },
+ {
+ .perf_start = sp_k2u_huge_start,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_end,
+ .count = 1000,
+ .name = "sp_k2u_2_hugepage",
+ .arg = &k2u_infos[1],
+ },
+ {
+ .perf_start = sp_k2u_start,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_2_pages",
+ .arg = &k2u_infos[2],
+ },
+ {
+ .perf_start = sp_k2u_huge_start,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_2_hugepage",
+ .arg = &k2u_infos[3],
+ },
+	/* parent allocates; children only build page tables */
+	/* 8 processes in total */
+ {
+ .perf_start = sp_k2u_1_vs_mult_start_7,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_1_vs_mult7_proc2_pages",
+ .arg = &k2u_infos[4],
+ },
+ {
+ .perf_start = sp_k2u_huge_1_vs_mult_start_7,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_1_vs_mult7_proc2_hugepages",
+ .arg = &k2u_infos[5],
+ },
+ {
+ .perf_start = sp_k2u_1_vs_mult_start_7,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_1_vs_mult7_proc2_pages",
+ .arg = &k2u_infos[6],
+ },
+ {
+ .perf_start = sp_k2u_huge_1_vs_mult_start_7,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_1_vs_mult7_proc2_hugepages",
+ .arg = &k2u_infos[7],
+ },
+	/* 16 processes in total */
+ {
+ .perf_start = sp_k2u_1_vs_mult_start_15,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_1_vs_mult15_proc2_pages",
+ .arg = &k2u_infos[8],
+ },
+ {
+ .perf_start = sp_k2u_huge_1_vs_mult_start_15,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_1_vs_mult15_proc2_hugepages",
+ .arg = &k2u_infos[9],
+ },
+ {
+ .perf_start = sp_k2u_1_vs_mult_start_15,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_1_vs_mult15_proc2_pages", // failed.
+ .arg = &k2u_infos[10],
+ },
+ {
+ .perf_start = sp_k2u_huge_1_vs_mult_start_15,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_1_vs_mult15_proc2_hugepages",
+ .arg = &k2u_infos[11],
+ },
+	/* both parent and child processes join the group and allocate memory */
+	/* 8 processes */
+ {
+ .perf_start = sp_k2u_mult_start_7,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_mult7_2_pages",
+ .arg = &k2u_infos[12],
+ },
+ {
+ .perf_start = sp_k2u_huge_mult_start_7,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_mult7_2_hugepage",
+ .arg = &k2u_infos[13],
+ },
+ {
+ .perf_start = sp_k2u_mult_start_7,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_mult7_2_pages",
+ .arg = &k2u_infos[14],
+ },
+ {
+ .perf_start = sp_k2u_huge_mult_start_7,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_mult7_2_hugepage",
+ .arg = &k2u_infos[15],
+ },
+	/* 16 processes */
+ {
+ .perf_start = sp_k2u_mult_start_15,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_mult15_2_pages",
+ .arg = &k2u_infos[16],
+ },
+ {
+ .perf_start = sp_k2u_huge_mult_start_15,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_mult15_2_hugepage",
+ .arg = &k2u_infos[17],
+ },
+ {
+ .perf_start = sp_k2u_mult_start_15,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_mult15_2_pages",
+ .arg = &k2u_infos[18],
+ },
+ {
+ .perf_start = sp_k2u_huge_mult_start_15,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_mult15_2_hugepage",
+ .arg = &k2u_infos[19],
+ },
+};
+
+#define STRLENGTH 500
+static char filename[STRLENGTH];
+
+int main(int argc, char *argv[])
+{
+ int ret = 0;
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ int passed = 0, failed = 0;
+
+ if (argc == 1) {
+		for (int i = 0; i < (int)(sizeof(testcases) / sizeof(testcases[0])); i++) {
+			printf(">>>> start testcase%d: %s\n", i + 1, testcases[i].name);
+			ret = test_perf_routing(&testcases[i]);
+			if (ret == 0)
+				passed++;
+			else
+				failed++;
+			printf("<<<< end testcase%d: %s, result: %s\n", i + 1, testcases[i].name, ret != 0 ? "failed" : "passed");
+ }
+ pr_info("----------------------------");
+		printf("%s All %d testcases finished, passed: %d, failed: %d", extract_filename(filename, __FILE__), passed + failed, passed, failed);
+ printf("-------------------------\n");
+ } else {
+		int testnum = atoi(argv[1]);
+		int total = sizeof(testcases) / sizeof(testcases[0]);
+		if (testnum < 1 || testnum > total) {
+			printf("invalid testcase number: %s\n", argv[1]);
+			close_device(dev_fd);
+			return -1;
+		}
+		printf(">>>> start testcase%d: %s\n", testnum, testcases[testnum - 1].name);
+ ret = test_perf_routing(&testcases[testnum - 1]);
+ printf("<<<< end testcase%d: %s, result: %s\n", testnum, testcases[testnum - 1].name, ret != 0 ? "failed" : "passed");
+ pr_info("----------------------------");
+ printf("%s testcase%d finished, %s", extract_filename(filename, __FILE__), testnum, ret == 0 ? "passed." : "failed.");
+ printf("-------------------------\n");
+ }
+
+	close_device(dev_fd);
+
+	return failed == 0 ? 0 : -1;
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/Makefile b/tools/testing/sharepool/testcase/reliability_test/Makefile
new file mode 100644
index 000000000000..aaee60d8872b
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/Makefile
@@ -0,0 +1,11 @@
+MODULEDIR:=coredump fragment k2u_u2k sp_add_group sp_unshare kthread others
+
+all:tooldir
+
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n; done
+install:
+ mkdir -p $(TOOL_BIN_DIR)/reliability_test && cp reliability_test.sh $(TOOL_BIN_DIR)
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
diff --git a/tools/testing/sharepool/testcase/reliability_test/coredump/Makefile b/tools/testing/sharepool/testcase/reliability_test/coredump/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/coredump/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump.c b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump.c
new file mode 100644
index 000000000000..bc488692a08a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump.c
@@ -0,0 +1,581 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 06:59:45 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdlib.h> /* rand() and srand() */
+#include <time.h> /* time() */
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define ALLOC_TEST_TYPES 4
+#define GROUP_ID 1
+#define THREAD_NUM 3
+#define KILL_TIME 1000
+static int semid;
+
+void *alloc_thread(void *arg)
+{
+ int ret;
+ bool judge_ret = true;
+	struct sp_alloc_info alloc_info[ALLOC_TEST_TYPES] = {
+		{
+			// hugepage
+			.flag = SP_HUGEPAGE,
+			.spg_id = GROUP_ID,
+			.size = 2 * PMD_SIZE,
+		},
+		{
+			// hugepage, DVPP
+			.flag = SP_DVPP | SP_HUGEPAGE,
+			.spg_id = GROUP_ID,
+			.size = 2 * PMD_SIZE,
+		},
+		{
+			// normal page, DVPP
+			.flag = SP_DVPP,
+			.spg_id = GROUP_ID,
+			.size = 4 * PAGE_SIZE,
+		},
+		{
+			// normal page
+			.flag = 0,
+			.spg_id = GROUP_ID,
+			.size = 4 * PAGE_SIZE,
+		},
+	};
+ while (1) {
+ pr_info("%s run time %d", __FUNCTION__, sem_get_value(semid));
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+			if (ret < 0) {
+				pr_info("ioctl alloc failed at %dth alloc.\n", i);
+				return (void *)-1;
+			}
+			if (IS_ERR_VALUE(alloc_info[i].addr)) {
+				pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+				return (void *)-1;
+			}
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+			if (!judge_ret) {
+				pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+				return (void *)-1;
+			}
+ }
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+			if (ret < 0) {
+				pr_info("sp_free return error: %d\n", ret);
+				return (void *)-1;
+			}
+ }
+ sem_inc_by_one(semid);
+ if (ret)
+ break;
+ }
+	return (void *)(long)ret;
+}
+
+struct vmalloc_info vmalloc_infos[THREAD_NUM] = {0}, vmalloc_huge_infos[THREAD_NUM] = {0};
+sig_atomic_t thread_index = 0;
+void *k2u_thread(void *arg)
+{
+ int ret;
+
+ struct sp_make_share_info k2u_info = {0}, k2u_huge_info = {0};
+ int vm_index = thread_index++;
+ while (1) {
+ pr_info("k2u_thread run time %d", sem_get_value(semid));
+ pr_info("atomic index is %d, thread index is %d", vm_index, thread_index);
+
+		int group_id = (int)(long)arg;
+ int pid = getpid();
+ k2u_info.kva = vmalloc_infos[vm_index].addr;
+ k2u_info.size = vmalloc_infos[vm_index].size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_infos[vm_index].addr;
+ k2u_huge_info.size = vmalloc_huge_infos[vm_index].size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ }
+ }
+
+ /* check k2u memory content */
+ char *addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ sem_inc_by_one(semid);
+ }
+
+	return (void *)(long)ret;
+error:
+	return (void *)-1;
+}
+
+void *k2task_thread(void *arg)
+{
+	return k2u_thread((void *)(long)SPG_ID_DEFAULT);
+}
+
+void *addgroup_thread(void *arg)
+{
+ int ret = 0;
+ while (1) {
+ pr_info("add_group_thread run time %d", sem_get_value(semid));
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ int num = KILL_TIME;
+ int spg_id[KILL_TIME];
+ ret = wrap_sp_group_id_by_pid(getpid(), spg_id, &num);
+ pr_info("add to %d groups", num);
+ sem_inc_by_one(semid);
+ }
+	return (void *)(long)ret;
+}
+
+void *u2k_thread(void *arg)
+{
+ int ret = 0;
+ bool judge_ret = true;
+ char *addr;
+	int group_id = (int)(long)arg;
+ int pid = getpid();
+	struct sp_alloc_info alloc_info[ALLOC_TEST_TYPES] = {
+		{
+			// hugepage
+			.flag = SP_HUGEPAGE,
+			.spg_id = GROUP_ID,
+			.size = 2 * PMD_SIZE,
+		},
+		{
+			// hugepage, DVPP
+			.flag = SP_DVPP | SP_HUGEPAGE,
+			.spg_id = GROUP_ID,
+			.size = 2 * PMD_SIZE,
+		},
+		{
+			// normal page, DVPP
+			.flag = SP_DVPP,
+			.spg_id = GROUP_ID,
+			.size = 4 * PAGE_SIZE,
+		},
+		{
+			// normal page
+			.flag = 0,
+			.spg_id = GROUP_ID,
+			.size = 4 * PAGE_SIZE,
+		},
+	};
+ struct sp_make_share_info u2k_info[ALLOC_TEST_TYPES] = {0};
+ while (1) {
+ pr_info("u2k_thread run time %d", sem_get_value(semid));
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+			if (!judge_ret) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ }
+ }
+ }
+
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ sem_inc_by_one(semid);
+ }
+error:
+	return (void *)-1;
+}
+
+void *walkpagerange_thread(void *arg)
+{
+ int ret = 0;
+ // alloc
+	struct sp_alloc_info alloc_info[ALLOC_TEST_TYPES] = {
+		{
+			// hugepage
+			.flag = SP_HUGEPAGE,
+			.spg_id = GROUP_ID,
+			.size = 2 * PMD_SIZE,
+		},
+		{
+			// hugepage, DVPP
+			.flag = SP_DVPP | SP_HUGEPAGE,
+			.spg_id = GROUP_ID,
+			.size = 2 * PMD_SIZE,
+		},
+		{
+			// normal page, DVPP
+			.flag = SP_DVPP,
+			.spg_id = GROUP_ID,
+			.size = 4 * PAGE_SIZE,
+		},
+		{
+			// normal page
+			.flag = 0,
+			.spg_id = GROUP_ID,
+			.size = 4 * PAGE_SIZE,
+		},
+	};
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+		if (ret < 0) {
+			pr_info("ioctl alloc failed at %dth alloc.\n", i);
+			return (void *)-1;
+		}
+		if (IS_ERR_VALUE(alloc_info[i].addr)) {
+			pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+			return (void *)-1;
+		}
+ }
+
+ struct sp_walk_page_range_info wpr_info[ALLOC_TEST_TYPES] = {0};
+ while (1) {
+ pr_info("%s run time %d", __FUNCTION__, sem_get_value(semid));
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ wpr_info[i].uva = alloc_info[i].addr;
+ wpr_info[i].size = alloc_info[i].size;
+ ret = ioctl_walk_page_range(dev_fd, wpr_info + i);
+			if (ret < 0) {
+				pr_info("ioctl_walk_page_range failed, errno: %d", errno);
+				return (void *)-1;
+			}
+ }
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ ret = ioctl_walk_page_free(dev_fd, wpr_info + i);
+			if (ret < 0) {
+				pr_info("ioctl_walk_page_range_free failed, errno: %d", errno);
+				return (void *)-1;
+			}
+ }
+ sem_inc_by_one(semid);
+ }
+
+ // free
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+		if (ret < 0) {
+			pr_info("sp_free return error: %d\n", ret);
+			return (void *)-1;
+		}
+ }
+
+	return NULL;
+}
+
+int vmallocAll()
+{
+ int ret;
+ for (int i = 0; i < THREAD_NUM; i++) {
+ vmalloc_infos[i].size = 3 * PAGE_SIZE;
+ ret = ioctl_vmalloc(dev_fd, vmalloc_infos + i);
+		if (ret < 0) {
+ pr_info("vmalloc failed");
+ } else {
+ pr_info("vmalloc success");
+ }
+ vmalloc_huge_infos[i].size = 3 * PMD_SIZE;
+ ret = ioctl_vmalloc_hugepage(dev_fd, vmalloc_huge_infos + i);
+		if (ret < 0) {
+ pr_info("vmalloc hugepage failed");
+ } else {
+ pr_info("vmalloc hugepage success");
+ }
+ }
+ return ret;
+}
+
+int vfreeAll()
+{
+ pr_info("now inside %s, thread index is %d", __FUNCTION__, thread_index);
+
+ int ret;
+
+ for (int i = 0; i < THREAD_NUM; i++) {
+ ret = ioctl_vfree(dev_fd, vmalloc_infos + i);
+ if (ret != 0) {
+ pr_info("vfree failed, errno is %d", errno);
+ } else {
+ pr_info("vfree success");
+ }
+ ret = ioctl_vfree(dev_fd, vmalloc_huge_infos + i);
+ if (ret != 0) {
+ pr_info("vfree failed, errno is %d", errno);
+ } else {
+ pr_info("vfree hugepage success");
+ }
+ }
+ thread_index = 0;
+ return 0;
+}
+
+int startThreads(void *(thread)(void *))
+{
+ int ret = 0;
+ semid = sem_create(1234, "core_dump_after_xxx_time_count");
+ setCore();
+
+ // add group
+ int group_id = GROUP_ID;
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0) {
+ printf("add task(pid%d) to group(%d) failed, err: %s\n", getpid(), group_id, strerror(errno));
+ return -1;
+ } else
+ printf("add task(pid%d) to group(%d) success\n", getpid(), group_id);
+
+	// worker threads run their operations repeatedly
+ pthread_t threads[THREAD_NUM];
+ thread_index = 0;
+ for (int i = 0; i < THREAD_NUM; i++) {
+		pthread_create(threads + i, NULL, thread, (void *)(long)group_id);
+ }
+
+ sem_dec_by_val(semid, KILL_TIME);
+
+ sem_close(semid);
+ if (thread_index > THREAD_NUM) {
+ pr_info("failure, thread index: %d not correct!!", thread_index);
+ return -1;
+ }
+ ret = generateCoredump();
+
+ sleep(3);
+
+ return ret;
+}
+
+/* testcase1: coredump after alloc */
+static int testcase1(void)
+{
+ int status;
+ int pid = fork();
+ if (pid == 0) {
+ exit(startThreads(alloc_thread));
+ } else if (pid > 0){
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ // expected status is 139 = 128 + 11 (SIGSEGV)
+ pr_info("coredump as expected, return value is %d", status);
+ }
+ }
+ return 0;
+}
+
+/* testcase2: coredump during k2spg */
+static int testcase2(void)
+{
+	int pid;
+	int status;
+
+	vmallocAll();
+	FORK_CHILD_ARGS(pid, startThreads(k2u_thread));
+	waitpid(pid, &status, 0);
+	vfreeAll();
+	return 0;
+}
+
+/* testcase3: coredump during u2k */
+static int testcase3(void)
+{
+ int status;
+ int pid = fork();
+ if (pid == 0) {
+ exit(startThreads(u2k_thread));
+ } else if (pid > 0){
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ // expected status is 139 = 128 + 11 (SIGSEGV)
+ pr_info("coredump as expected, return value is %d", status);
+ }
+ }
+ return 0;
+}
+
+/* testcase4: coredump during add group and query */
+static int testcase4(void)
+{
+ int status;
+ int pid = fork();
+ if (pid == 0) {
+ exit(startThreads(addgroup_thread));
+ } else if (pid > 0){
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ // expected status is 139 = 128 + 11 (SIGSEGV)
+ pr_info("coredump as expected, return value is %d", status);
+ }
+ }
+ return 0;
+}
+
+/* testcase5: coredump during k2task */
+static int testcase5(void)
+{
+	int pid;
+	int status;
+
+	vmallocAll();
+	FORK_CHILD_ARGS(pid, startThreads(k2task_thread));
+	waitpid(pid, &status, 0);
+	vfreeAll();
+	return 0;
+}
+
+/* testcase6: coredump during walkpagerange - causes a kernel memory leak */
+static int testcase6(void)
+{
+ int status;
+ int pid = fork();
+ if (pid == 0) {
+ exit(startThreads(walkpagerange_thread));
+ } else if (pid > 0){
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ // expected status is 139 = 128 + 11 (SIGSEGV)
+ pr_info("coredump as expected, return value is %d", status);
+ }
+ }
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "coredump after alloc")
+	TESTCASE_CHILD(testcase2, "coredump during k2spg")
+	TESTCASE_CHILD(testcase3, "coredump during u2k")
+	TESTCASE_CHILD(testcase4, "coredump during add group and query")
+	TESTCASE_CHILD(testcase5, "coredump during k2task")
+	TESTCASE_CHILD(testcase6, "coredump during walkpagerange - causes a kernel memory leak")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump2.c b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump2.c
new file mode 100644
index 000000000000..9ff36b1e68e4
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump2.c
@@ -0,0 +1,202 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri May 21 07:23:31 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+#include <time.h> /* time() */
+#include <sys/resource.h>
+
+#include "sharepool_lib.h"
+
+
+static int init_env(void)
+{
+ struct rlimit core_lim;
+
+ if (getrlimit(RLIMIT_CORE, &core_lim)) {
+ printf("getrlimit failed, err: %s\n", strerror(errno));
+ return -1;
+ } else
+ printf("current rlimit for RLIMIT_CORE is: %lx, %lx\n", core_lim.rlim_cur, core_lim.rlim_max);
+
+ core_lim.rlim_cur = RLIM_INFINITY;
+ if (setrlimit(RLIMIT_CORE, &core_lim)) {
+ printf("setrlimit failed, err: %s\n", strerror(errno));
+ return -1;
+ } else
+ printf("setrlimit for RLIMIT_CORE to unlimited\n");
+
+ return 0;
+}
+
+/* do nothing */
+static int child_do_nothing(sem_t *sync)
+{
+ int ret;
+
+ SEM_WAIT(sync);
+
+ pr_info("child pid: %d", getpid());
+
+ while (1);
+
+ return 0;
+}
+
+static int child_sp_alloc(sem_t *sync)
+{
+ int ret;
+
+ SEM_WAIT(sync);
+
+ pr_info("child pid: %d wake up.", getpid());
+out:
+ while (1) {
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = SPG_ID_DEFAULT,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ TEST_CHECK(ioctl_free(dev_fd, &alloc_info), out);
+ }
+
+ return 0;
+}
+
+static int child_sp_alloc_and_free(sem_t *sync)
+{
+ int ret;
+
+ SEM_WAIT(sync);
+
+ pr_info("child pid: %d", getpid());
+out:
+ while (1) {
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = SPG_ID_DEFAULT,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ TEST_CHECK(ioctl_free(dev_fd, &alloc_info), out);
+ }
+
+ return 0;
+}
+
+#define test_child_num 20
+#define test_group_num 10
+
+static int fork_or_coredump(int (*child)(sem_t *))
+{
+ int ret, i, j;
+ pid_t pid[test_child_num];
+ int groups[test_group_num];
+ sem_t *sync[test_child_num];
+ int repeat = 100;
+
+ for (i = 0; i < test_child_num; i++)
+ SEM_INIT(sync[i], i);
+
+ for (i = 0; i < test_child_num; i++)
+ FORK_CHILD_ARGS(pid[i], child(sync[i]));
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid[0],
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ for (i = 0; i < test_group_num; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ groups[i] = ag_info.spg_id;
+ }
+
+ for (i = 1; i < test_child_num; i++) {
+ ag_info.pid = pid[i];
+ for (j = 0; j < test_group_num; j++) {
+ ag_info.spg_id = groups[j];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ }
+ }
+
+ for (i = 0; i < test_child_num; i++)
+ sem_post(sync[i]);
+
+ int alive_process = test_child_num;
+ srand((unsigned)time(NULL));
+
+ for (i = 0; i < repeat; i++) {
+ pr_info("kill time %dth, %d times left.", i + 1, repeat - (i + 1));
+ int idx = rand() % test_child_num;
+		/*
+		 * Do not kill every process: if all of them exit, the group is
+		 * being destroyed and later add-group calls will fail.
+		 */
+ if (pid[idx] && alive_process > 1) {
+ kill(pid[idx], SIGSEGV);
+			waitpid(pid[idx], NULL, 0);
+ pid[idx] = 0;
+ alive_process--;
+ } else {
+ FORK_CHILD_ARGS(pid[idx], child(sync[idx]));
+ ag_info.pid = pid[idx];
+ for (j = 0; j < test_group_num; j++) {
+ ag_info.spg_id = groups[j];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ }
+ sem_post(sync[idx]);
+ alive_process++;
+ }
+ }
+
+ return 0;
+
+out_kill:
+ for (i = 0; i < test_child_num; i++)
+ kill(pid[i], SIGKILL);
+out:
+ return ret;
+}
+
+static int testcase1(void)
+{
+ setCore();
+ return fork_or_coredump(child_do_nothing);
+}
+
+static int testcase2(void)
+{
+ setCore();
+ return fork_or_coredump(child_sp_alloc);
+}
+
+static int testcase3(void)
+{
+ init_env();
+ return fork_or_coredump(child_sp_alloc_and_free);
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "N processes join groups, do nothing, then coredump")
+	TESTCASE(testcase2, "N processes join groups, alloc, then coredump")
+	TESTCASE(testcase3, "N processes join groups, alloc-free, then coredump")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump_k2u_alloc.c b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump_k2u_alloc.c
new file mode 100644
index 000000000000..782615493a85
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump_k2u_alloc.c
@@ -0,0 +1,562 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 06:59:45 2020
+ */
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdlib.h> /* rand() and srand() */
+#include <time.h> /* time() */
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define PROC_NUM 64
+#define GROUP_ID 1
+#define K2U_UNSHARE_TIME 2
+#define ALLOC_FREE_TIME 2
+#define VMALLOC_SIZE 4096
+#define PROT (PROT_READ | PROT_WRITE)
+#define GROUP_NUM 4
+#define K2U_CONTINUOUS_TIME 200
+#define min(a,b) ((a)<(b)?(a):(b))
+
+/* testcase1:
+ * Each group starts one process responsible for k2u; the other N processes join
+ * multiple groups and then coredump one by one. Every k2u call in every group
+ * should succeed. Print the debug info: k2u works normally.
+ * After all processes have coredumped the test exits; the debug info shows the
+ * group and spa counts are both 0, i.e. no leaks.
+ */
+
+static int semid[PROC_NUM];
+static int sem_task;
+static int group_ids[GROUP_NUM];
+
+struct k2u_args {
+ int with_print;
+ int k2u_whole_times; // repeat times
+ int (*k2u_tsk)(struct k2u_args);
+};
+
+struct task_param {
+ bool with_print;
+};
+
+struct test_setting {
+ int (*task)(struct task_param*);
+ struct task_param *task_param;
+};
+
+static int init_sem();
+static int close_sem();
+static int k2u_unshare_task(struct task_param *task_param);
+static int k2u_continuous_task(struct task_param *task_param);
+static int child_process(int index);
+static int alloc_free_task(struct task_param *task_param);
+static int alloc_continuous_task(struct task_param *task_param);
+static int testcase_combine(int (*task1)(struct task_param*),
+ int (*task2)(struct task_param*), struct task_param *param);
+
+static int testcase_base(struct test_setting test_setting)
+{
+ int status;
+ int pid;
+ int child[PROC_NUM];
+ int ret;
+ int pid_k2u;
+
+ setCore();
+	// initialize the semaphores
+ ret = init_sem();
+ if (ret < 0) {
+ pr_info("init sem failed");
+ return -1;
+ }
+
+	// create the share pool groups
+ //ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ ret = create_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ return -1;
+ }
+
+	// start the functional process responsible for k2u or alloc
+ pid_k2u = fork();
+ if (pid_k2u < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_k2u == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+			pr_info("functional process add group success.");
+ }
+ exit(test_setting.task(test_setting.task_param));
+ }
+
+	// start the child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, deleting procs...");
+ goto delete_procs;
+ } else if (pid == 0) {
+			// child process hangs until told to coredump
+ exit(child_process(i));
+ } else {
+ child[i] = pid;
+ //ret = wrap_add_group(pid, PROT, GROUP_ID);
+ ret = add_multi_groups(pid, GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", pid);
+ goto delete_procs;
+ }
+ }
+ }
+
+	// make the child processes coredump one by one
+ for (int i = 0; i < PROC_NUM; i++) {
+ pr_info("coredump process %d", child[i]);
+ sem_inc_by_one(semid[i]);
+ waitpid(child[i], &status, 0);
+ usleep(200000);
+ }
+
+	// let the functional process exit
+ sem_inc_by_one(sem_task);
+ waitpid(pid_k2u, &status, 0);
+
+ close_sem();
+ return 0;
+
+delete_procs:
+ return -1;
+}
+
+static int testcase_combine(int (*task1)(struct task_param*),
+ int (*task2)(struct task_param*), struct task_param *param)
+{
+ int status;
+ int pid;
+ int child[PROC_NUM];
+ int ret;
+ int pid_k2u, pid_alloc;
+
+ setCore();
+	// initialize the semaphores
+ ret = init_sem();
+ if (ret < 0) {
+ pr_info("init sem failed");
+ return -1;
+ }
+
+	// create the share pool groups
+ //ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ ret = create_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ return -1;
+ }
+
+	// start the functional processes responsible for k2u or alloc
+ pid_k2u = fork();
+ if (pid_k2u < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_k2u == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+			pr_info("functional process add group success.");
+ }
+ exit(task1(param));
+ }
+
+ pid_alloc = fork();
+ if (pid_alloc < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_alloc == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+			pr_info("functional process add group success.");
+ }
+ exit(task2(param));
+ }
+
+	// start the child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, deleting procs...");
+ goto delete_procs;
+ } else if (pid == 0) {
+			// child process hangs until told to coredump
+ exit(child_process(i));
+ } else {
+ child[i] = pid;
+ //ret = wrap_add_group(pid, PROT, GROUP_ID);
+ ret = add_multi_groups(pid, GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", pid);
+ goto delete_procs;
+ }
+ }
+ }
+
+	// make the child processes coredump one by one
+ for (int i = 0; i < PROC_NUM; i++) {
+ pr_info("coredump process %d", child[i]);
+ sem_inc_by_one(semid[i]);
+ waitpid(child[i], &status, 0);
+ usleep(200000);
+ }
+
+	// let the functional processes exit
+ sem_inc_by_val(sem_task, 2);
+ waitpid(pid_k2u, &status, 0);
+ waitpid(pid_alloc, &status, 0);
+
+ close_sem();
+ return 0;
+
+delete_procs:
+ return -1;
+}
+
+static struct task_param task_param_table[] = {
+	{
+		.with_print = false, // do not print debug info
+	},
+	{
+		.with_print = true, // print debug info
+	},
+};
+
+static struct test_setting test_setting_table[] = {
+	{
+		.task_param = &task_param_table[0],
+		.task = k2u_unshare_task, // k2u->unshare, repeated N times
+	},
+	{
+		.task_param = &task_param_table[1],
+		.task = k2u_unshare_task, // k2u->unshare, repeated N times
+	},
+	{
+		.task_param = &task_param_table[0],
+		.task = k2u_continuous_task, // k2u N times, then unshare N times
+	},
+	{
+		.task_param = &task_param_table[0],
+		.task = alloc_free_task, // alloc->free, repeated N times
+	},
+	{
+		.task_param = &task_param_table[0],
+		.task = alloc_continuous_task, // alloc N blocks, free them all, repeated M times
+	},
+};
+
+static int testcase1(void)
+{
+ return testcase_base(test_setting_table[0]);
+}
+
+static int testcase2(void)
+{
+ return testcase_base(test_setting_table[1]);
+}
+
+static int testcase3(void)
+{
+ return testcase_base(test_setting_table[2]);
+}
+
+static int testcase4(void)
+{
+ return testcase_base(test_setting_table[3]);
+}
+
+static int testcase5(void)
+{
+ return testcase_base(test_setting_table[4]);
+}
+
+static int testcase6(void)
+{
+ return testcase_combine(k2u_continuous_task, alloc_continuous_task, &task_param_table[0]);
+}
+
+/* testcase4: the k2u functional process coredumps */
+static int close_sem()
+{
+ int ret;
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ ret = sem_close(semid[i]);
+ if (ret < 0) {
+ pr_info("sem close failed");
+ return ret;
+ }
+ }
+ sem_close(sem_task);
+ pr_info("all sems deleted.");
+ return 0;
+}
+
+static int init_sem()
+{
+ int i = 0;
+
+ sem_task = sem_create(PROC_NUM, "sem_task");
+
+ for (i = 0; i < PROC_NUM; i++) {
+ key_t key = i;
+ semid[i] = sem_create(key, "sem_child");
+ if (semid[i] < 0) {
+ pr_info("semid %d init failed. errno: %d", i, errno);
+ goto delete_sems;
+ }
+ }
+ pr_info("all sems initialized.");
+ return 0;
+
+delete_sems:
+ for (int j = 0; j < i; j++) {
+ sem_close(semid[j]);
+ }
+ return -1;
+}
+
+static int child_process(int index)
+{
+ pr_info("child process %d created", getpid());
+	// coredump after receiving the signal
+ sem_dec_by_one(semid[index]);
+ pr_info("child process %d coredump", getpid());
+ generateCoredump();
+ return 0;
+}
+
+static int k2u_unshare_task(struct task_param *task_param)
+{
+ int ret;
+ int i;
+ struct vmalloc_info vmalloc_info;
+ struct sp_make_share_info k2u_info;
+ unsigned long uva[K2U_UNSHARE_TIME];
+
+ vmalloc_info.size = VMALLOC_SIZE;
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed.");
+ return -1;
+ } else {
+ pr_info("vmalloc %ld success.", vmalloc_info.size);
+ }
+
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = getpid();
+ k2u_info.spg_id = GROUP_ID;
+
+repeat:
+ memset(uva, 0, sizeof(unsigned long) * K2U_UNSHARE_TIME);
+ for (i = 0; i < K2U_UNSHARE_TIME; i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed at %d time.", i);
+ goto unshare;
+ } else {
+ pr_info("k2u success %d time, addr = %lx", i, k2u_info.addr);
+ uva[i] = k2u_info.addr;
+ }
+ }
+
+ if (task_param->with_print)
+ sharepool_print();
+
+unshare:
+ for (int j = 0; j < i; j++) {
+		pr_info("uva[%d] is %lx", j, uva[j]);
+ k2u_info.addr = uva[j];
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+		if (ret < 0) {
+			pr_info("unshare failed at %d", j);
+			ioctl_vfree(dev_fd, &vmalloc_info);
+			return -1;
+		}
+ }
+
+ if (sem_get_value(sem_task) == 0)
+ goto repeat;
+
+ ioctl_vfree(dev_fd, &vmalloc_info);
+
+ return 0;
+}
+
+static int k2u_continuous_task(struct task_param *task_param)
+{
+ int ret;
+ int i, h;
+ struct vmalloc_info vmalloc_info;
+ struct sp_make_share_info k2u_info;
+ unsigned long *uva;
+
+ vmalloc_info.size = VMALLOC_SIZE;
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed.");
+ return -1;
+ } else {
+ pr_info("vmalloc %ld success.", vmalloc_info.size);
+ }
+
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = getpid();
+	/* spg_id is set per group in the loop below */
+
+	uva = malloc(sizeof(unsigned long) * K2U_CONTINUOUS_TIME * GROUP_NUM);
+	if (uva == NULL) {
+		pr_info("malloc for uva failed.");
+		ioctl_vfree(dev_fd, &vmalloc_info);
+		return -1;
+	}
+
+	memset(uva, 0, sizeof(unsigned long) * K2U_CONTINUOUS_TIME * GROUP_NUM);
+ for (i = 0; i < K2U_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ k2u_info.spg_id = group_ids[h];
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed at %d time in group %d.", i, group_ids[h]);
+ goto unshare;
+ } else {
+ pr_info("k2u success %d time, addr = %lx", i, k2u_info.addr);
+ uva[i * GROUP_NUM + h] = k2u_info.addr;
+ }
+ }
+ }
+
+unshare:
+ for (int j = 0; j < min((i * GROUP_NUM + h), K2U_CONTINUOUS_TIME * GROUP_NUM); j++) {
+		pr_info("uva[%d] is %lx", j, uva[j]);
+ k2u_info.addr = uva[j];
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed at %d", j);
+			pr_info("unshare failed at %d", j);
+			free(uva);
+			return -1;
+ }
+
+	ioctl_vfree(dev_fd, &vmalloc_info);
+	free(uva);
+
+	return 0;
+}
+
+/* Already added to groups; alloc and free repeatedly */
+#define ALLOC_SIZE 4096
+#define ALLOC_FLAG 0
+static int alloc_free_task(struct task_param *task_param)
+{
+ int ret = 0;
+ int i, h;
+ unsigned long ret_addr = -1;
+ unsigned long addr[ALLOC_FREE_TIME][GROUP_NUM];
+
+repeat:
+ for (i = 0; i < ALLOC_FREE_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret_addr = wrap_sp_alloc(group_ids[h], ALLOC_SIZE, ALLOC_FLAG);
+ if (!ret_addr) {
+				pr_info("alloc failed %d time %d group", i, h + 1);
+				return -1;
+			} else {
+				addr[i][h] = ret_addr;
+				pr_info("alloc success addr %lx", ret_addr);
+ }
+ }
+ }
+
+ if (task_param->with_print)
+ sharepool_print();
+
+	/* free all the allocations made above */
+ for (i = 0; i < ALLOC_FREE_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret = wrap_sp_free(addr[i][h]);
+ if (ret < 0) {
+ pr_info("free failed %d time group %d", i, h + 1);
+ return ret;
+ }
+ }
+ }
+
+ if (sem_get_value(sem_task) == 0)
+ goto repeat;
+
+ return ret;
+}
+
+#define ALLOC_CONTINUOUS_TIME 200
+static int alloc_continuous_task(struct task_param *task_param)
+{
+ int ret = 0;
+ int i, h;
+ unsigned long ret_addr = -1;
+ unsigned long addr[ALLOC_CONTINUOUS_TIME][GROUP_NUM];
+ for (i = 0; i < ALLOC_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret_addr = wrap_sp_alloc(group_ids[h], ALLOC_SIZE, ALLOC_FLAG);
+ if (!ret_addr) {
+				pr_info("alloc failed %d time %d group", i, h + 1);
+				return -1;
+			} else {
+				addr[i][h] = ret_addr;
+				pr_info("alloc success addr %lx", ret_addr);
+ }
+ }
+ }
+ for (i = 0; i < ALLOC_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret = wrap_sp_free(addr[i][h]);
+ if (ret < 0) {
+ pr_info("free failed %d time group %d", i, h + 1);
+ return ret;
+ }
+ }
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Each group starts one process for k2u; the other N processes join multiple groups and coredump one by one; k2u should succeed every time in every group.")
+	TESTCASE_CHILD(testcase2, "Same as testcase1, but with debug info printed")
+	TESTCASE_CHILD(testcase3, "Continuously k2u and coredump")
+	TESTCASE_CHILD(testcase4, "Coredump during alloc-free loops")
+	TESTCASE_CHILD(testcase5, "alloc-coredump, free-coredump")
+	TESTCASE_CHILD(testcase6, "Continuous k2u and alloc, without debug info")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/fragment/Makefile b/tools/testing/sharepool/testcase/reliability_test/fragment/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/fragment/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation.c b/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation.c
new file mode 100644
index 000000000000..1e1d88705a00
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation.c
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Description: test external fragmentation
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Apr 20 22:23:51 2021
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+
+/*
+ * I recommend running this test twice in parallel and then killing one of
+ * them, so that external fragmentation is created.
+ */
+
+int main(int argc, char *argv[]) {
+ char *p;
+ int i, times;
+ pid_t pid = getpid();
+
+ times = atoi(argv[1]);
+ printf("Fragmentation test pid %d will allocate %d 4K pages\n", pid, times);
+
+ p = sbrk(0);
+
+ for (i = 0; i < times; i++) {
+ sbrk(4096);
+ memset(p + i * 4096, 'a', 4096);
+ }
+
+ printf("Test %d allocation finished. begin sleep.\n", pid);
+ sleep(1200);
+ return 0;
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation_trigger.c b/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation_trigger.c
new file mode 100644
index 000000000000..310c7c240177
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation_trigger.c
@@ -0,0 +1,58 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Description: trigger
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Apr 20 23:17:22 2021
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <sys/types.h>
+
+#include "sharepool_lib.h"
+
+#define GROUP_ID 1
+
+int main(void) {
+
+ int i, fd, ret;
+ fd = open_device();
+ if (fd < 0) {
+ printf("open fd error\n");
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+ ret = ioctl_add_group(fd, &ag_info);
+ if (ret < 0) {
+ printf("add group failed, ret is %d, error is %d\n", ret, errno);
+ goto error;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .spg_id = GROUP_ID,
+ .size = PMD_SIZE,
+ };
+
+	// alloc 400MB to try to trigger memory compaction
+ for (i = 0; i < 200; i++) {
+ ret = ioctl_alloc(fd, &alloc_info);
+ if (ret) {
+ printf("alloc failed, ret is %d, error is %d\n", ret, errno);
+ goto error;
+ }
+ }
+
+ close_device(fd);
+ return 0;
+error:
+ close_device(fd);
+ return -1;
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/Makefile b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_and_kill.c b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_and_kill.c
new file mode 100644
index 000000000000..e0b7287faba0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_and_kill.c
@@ -0,0 +1,276 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Thu Dec 17 03:09:02 2020
+ */
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+#define CHILD_NUM 50
+#define THREAD_PER_PROCESS 20
+#define KILL_NUM 50
+#define VMALLOC_SIZE (3 * PAGE_SIZE)
+
+static int vmalloc_count;
+static int vfree_count;
+static struct vmalloc_info vm_data[CHILD_NUM];
+
+struct msgbuf {
+ long mtype;
+ struct vmalloc_info vmalloc_info;
+};
+
+static void send_msg(int msgid, int msgtype, struct vmalloc_info ka_info)
+{
+ struct msgbuf msg = {
+ .mtype = msgtype,
+ .vmalloc_info = ka_info,
+ };
+
+	if (msgsnd(msgid, (void *) &msg, sizeof(msg.vmalloc_info),
+ IPC_NOWAIT) == -1) {
+ perror("msgsnd error");
+ exit(EXIT_FAILURE);
+ } else {
+		pr_info("child %d message sent successfully: size: %lx, addr: %lx",
+			msgtype - 1, ka_info.size, ka_info.addr);
+ }
+}
+
+static void get_msg(int msgid, int msgtype)
+{
+ struct msgbuf msg;
+ if (msgrcv(msgid, (void *) &msg, sizeof(msg.vmalloc_info), msgtype,
+ MSG_NOERROR) == -1) {
+ if (errno != ENOMSG) {
+ perror("msgrcv");
+ exit(EXIT_FAILURE);
+ }
+ pr_info("No message available for msgrcv()");
+ } else {
+		pr_info("child %d message received successfully: size: %lx, addr: %lx",
+			msgtype - 1, msg.vmalloc_info.size, msg.vmalloc_info.addr);
+ vm_data[msgtype - 1] = msg.vmalloc_info;
+ vmalloc_count++;
+ }
+}
+
+static void *child_thread(void *arg)
+{
+ int ret;
+ struct sp_make_share_info k2u_info = *(struct sp_make_share_info *)arg;
+ while (1) {
+		ret = ioctl_k2u(dev_fd, &k2u_info);
+		if (ret < 0) {
+			pr_info("ioctl_k2u failed, errno: %d", errno);
+			return (void *)(long)ret;
+ } else {
+ //pr_info("ioctl_k2u success");
+ }
+
+ memset((void *)k2u_info.addr, 'a', k2u_info.size);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+			return (void *)(long)ret;
+ }
+ }
+
+ return NULL;
+}
+
+static int child_process(int idx, int msgid)
+{
+ int ret;
+
+ struct vmalloc_info ka_info = {
+ .size = VMALLOC_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+
+ if (ret < 0) {
+ pr_info("child%d: ioctl_vmalloc failed", idx);
+ return -1;
+ } else {
+ send_msg(msgid, idx + 1, ka_info);
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .pid = getpid(),
+ };
+
+ pthread_t threads[THREAD_PER_PROCESS] = {0};
+ for (int i = 0; i < THREAD_PER_PROCESS; i++) {
+ ret = pthread_create(threads + i, NULL, child_thread, &k2u_info);
+ if (ret < 0) {
+ pr_info("child%d: pthread_create failed, err:%d", idx, ret);
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < THREAD_PER_PROCESS; i++)
+ if (threads[i])
+ pthread_join(threads[i], NULL);
+
+ return 0;
+}
+
+static pid_t fork_and_add_group(int idx, int group_id, int (*child)(int, int), int msgid, char ch)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx, msgid));
+ }
+
+ if (group_id == SPG_ID_DEFAULT)
+ return pid;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+ } else {
+ return pid;
+ }
+}
+
+/*
+ * group_id == SPG_ID_DEFAULT: processes join no group
+ * group_id == SPG_ID_DEFAULT: each process joins a different group
+ * otherwise: all processes join the specified group
+ */
+static int testcase_routing(int group_id, char ch)
+{
+ int ret = 0;
+ pid_t child[CHILD_NUM] = {0};
+
+ int msgid;
+ int msgkey = 1234;
+ msgid = msgget(msgkey, IPC_CREAT | 0666);
+ pr_info("msg id is %d", msgid);
+
+ for (int i = 0; i < CHILD_NUM; i++) {
+ pid_t pid = fork_and_add_group(i, group_id == SPG_ID_DEFAULT ? i + 1 : group_id, child_process, msgid, ch);
+ if (pid < 0) {
+ ret = -1;
+ goto out;
+ }
+ child[i] = pid;
+ get_msg(msgid, i + 1);
+ }
+
+	unsigned int seed = time(NULL);
+	srand(seed);
+	pr_info("rand seed: %u", seed);
+
+ int count = 0;
+ for (int i = 0; i < KILL_NUM; i++) {
+ int idx = rand() % CHILD_NUM;
+ if (child[idx] > 0) {
+ kill(child[idx], SIGKILL);
+ waitpid(child[idx], NULL, 0);
+ //pr_info("vfree address is %lx", vm_data[idx].addr);
+ //vm_data[idx].size = VMALLOC_SIZE;
+ if (ioctl_vfree(dev_fd, &vm_data[idx]) < 0) {
+ pr_info("vfree %d failed", idx);
+ } else {
+ vfree_count++;
+ pr_info("vfree %d finished.", idx);
+ }
+ pr_info("count: %d, kill child: %d, pid: %d", ++count, idx, child[idx]);
+ child[idx] = 0;
+ } else {
+ pid_t pid = fork_and_add_group(idx, group_id == SPG_ID_DEFAULT ? idx + 1 : group_id,
+ child_process, msgid, ch);
+ if (pid < 0) {
+ ret = -1;
+ goto out;
+ }
+ child[idx] = pid;
+ pr_info("fork child: %d, pid: %d", idx, child[idx]);
+ get_msg(msgid, idx + 1);
+ }
+// sleep(1);
+ }
+
+out:
+ for (int i = 0; i < CHILD_NUM; i++)
+ if (child[i] > 0) {
+ kill(child[i], SIGKILL);
+ //pr_info("vfree2 address is %lx", vm_data[i].addr);
+ //vm_data[i].size = VMALLOC_SIZE;
+ if (ioctl_vfree(dev_fd, &vm_data[i]) < 0) {
+ pr_info("vfree2 %d failed, errno is %d", i, errno);
+ } else {
+ vfree_count++;
+ pr_info("vfree2 %d finished.", i);
+ }
+ }
+
+ pr_info("vmalloc %d times, vfree %d times.", vmalloc_count, vfree_count);
+ vmalloc_count = 0;
+ vfree_count = 0;
+
+ return ret;
+}
+
+// no process joins a group
+static int testcase1(void)
+{
+ return testcase_routing(SPG_ID_DEFAULT, 'a');
+}
+
+// each process joins a different group
+static int testcase2(void)
+{
+ return testcase_routing(SPG_ID_DEFAULT, 'b');
+}
+
+// all processes join the same group
+static int testcase3(void)
+{
+ return testcase_routing(100, 'c');
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "k2u and kill, no process joins a group")
+	TESTCASE_CHILD(testcase2, "k2u and kill, each process joins a different group")
+	TESTCASE_CHILD(testcase3, "k2u and kill, all processes join the same group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_unshare.c b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_unshare.c
new file mode 100644
index 000000000000..85ad9ce5af01
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_unshare.c
@@ -0,0 +1,188 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 18:27:26 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define pr_info(fmt, args...) \
+ printf("[file:%s, func:%s, line:%d] " fmt "\n", __FILE__, __func__, __LINE__, ##args)
+
+static int dev_fd;
+
+/*
+ * After sharing memory, free the memory before stopping the sharing.
+ * testcase1: after vmalloc and k2task, vfree directly without unshare, expected to fail; then kill the process and vfree again, expected to succeed.
+ */
+
+static int testcase1_child(struct vmalloc_info ka_info)
+{
+ int ret = 0;
+
+ while (1);
+
+ return ret;
+}
+
+/* testcase1: test sending signal 9 (SIGKILL) to the process */
+static int testcase1(void)
+{
+ int ret;
+ int pid;
+ int status;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ pid = fork();
+ if (pid < 0)
+ printf("fork failed");
+ else if (pid == 0)
+ exit(testcase1_child(ka_info));
+ else {
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = pid,
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+		// test point 1: vfree directly without unshare(); expected to fail with a warning printed
+ printf("test point 1: vfree no unshare. ----------\n");
+ ioctl_vfree(dev_fd, &ka_info);
+
+		// test point 2: kill the process with SIGKILL, then vfree; expected to succeed with no warning
+ printf("test point 2: vfree after kill process. ----------\n");
+ kill(pid, SIGKILL);
+ waitpid(pid, &status, 0);
+
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+/* testcase2: test sending signal 2 (SIGINT) to the process */
+static int testcase2(void)
+{
+ int ret;
+ int pid;
+ int status;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ pid = fork();
+ if (pid < 0)
+ printf("fork failed");
+ else if (pid == 0)
+ exit(testcase1_child(ka_info));
+ else {
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = pid,
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+		// test point 1: vfree directly without unshare(); expected to fail with a warning printed
+ printf("test point 1: vfree no unshare. ----------\n");
+ ioctl_vfree(dev_fd, &ka_info);
+
+		// test point 2: kill the process with SIGINT, then vfree; expected to succeed with no warning
+ printf("test point 2: vfree after kill process. ----------\n");
+ kill(pid, SIGINT);
+ waitpid(pid, &status, 0);
+
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "After k2task, test point 1: vfree directly without unshare, expected to fail with a warning; test point 2: kill the process with SIGKILL, then vfree, expected to succeed with no warning")
+	TESTCASE_CHILD(testcase2, "After k2task, test point 1: vfree directly without unshare, expected to fail with a warning; test point 2: kill the process with SIGINT, then vfree, expected to succeed with no warning")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_malloc_u2k.c b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_malloc_u2k.c
new file mode 100644
index 000000000000..168ad1139648
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_malloc_u2k.c
@@ -0,0 +1,187 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Dec 04 17:20:10 2020
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+#include <fcntl.h> /* For O_* constants */
+
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: userspace calls malloc, then u2k; expected to succeed.
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ int psize = getpagesize();
+ char *user_addr = malloc(1000 * psize);
+ if (user_addr == NULL) {
+ pr_info("malloc failed, errno: %d", errno);
+ return -1;
+ }
+ memset((void *)user_addr, 'q', 1000 * psize);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = (unsigned long)user_addr,
+ .size = 1000 * psize,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'q',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ free(user_addr);
+ return ret;
+}
+
+#define TEST2_MEM_SIZE (64 * 1024 * 1024) // 64MB
+char *test2_p;
+
+static void *testcase2_thread(void *arg)
+{
+ int ret = 0;
+ sem_t *sync = (sem_t *)arg;
+ struct sp_make_share_info u2k_info = {
+		// since the address malloc returns is usually not page-aligned, deliberately map one page less here
+ .size = 3 * PAGE_SIZE,
+ .pid = getpid(),
+ };
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (unsigned long i = 0; i < TEST2_MEM_SIZE / PAGE_SIZE / 4; i++) {
+ // we expect page migration may happen here
+ test2_p[i * 4 * PAGE_SIZE] = 'b';
+
+ u2k_info.uva = (unsigned long)(&test2_p[i * 4 * PAGE_SIZE]);
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ pthread_exit((void *)1);
+ }
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ pthread_exit((void *)1);
+ }
+ }
+
+ pthread_exit((void *)0);
+}
+
+/*
+ * Try to trigger page migration; CPU binding must match the NUMA topology, so running in QEMU is recommended
+ */
+static int testcase2(void)
+{
+ int ret = 0;
+ pthread_t tid, self;
+ char *sync_name = "/testcase2_sync";
+
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0666, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ self = pthread_self();
+ cpu_set_t cpuset;
+ CPU_ZERO(&cpuset);
+	CPU_SET(0, &cpuset); // NOTE: check which NUMA node CPU 0 is on
+ ret = pthread_setaffinity_np(self, sizeof(cpu_set_t), &cpuset);
+ if (ret < 0) {
+ pr_info("set cpu affinity for main thread failed %d", ret);
+ return -1;
+ }
+
+ ret = pthread_create(&tid, NULL, testcase2_thread, sync);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ return -1;
+ }
+
+ CPU_ZERO(&cpuset);
+	CPU_SET(5, &cpuset); // NOTE: check which NUMA node CPU 5 is on
+ ret = pthread_setaffinity_np(tid, sizeof(cpu_set_t), &cpuset);
+ if (ret < 0) {
+ pr_info("set cpu affinity for test thread failed %d", ret);
+ return -1;
+ }
+
+ test2_p = malloc(TEST2_MEM_SIZE);
+	// touch every page once
+ for (unsigned int i = 0; i < TEST2_MEM_SIZE / PAGE_SIZE; i++) {
+ test2_p[i * PAGE_SIZE] = 'a';
+ }
+
+ sem_post(sync);
+
+ void *thread_ret;
+ ret = pthread_join(tid, &thread_ret);
+ if (ret != 0) {
+ pr_info("can't join thread %d", ret);
+ return -1;
+ }
+	if ((long)thread_ret != 0) {
+		pr_info("join thread failed %ld", (long)thread_ret);
+ return -1;
+ }
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Userspace calls malloc, then u2k; expected to succeed.")
+	TESTCASE_CHILD(testcase2, "Try to trigger page migration; CPU binding must match the NUMA topology, so running in QEMU is recommended")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_u2k_and_kill.c b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_u2k_and_kill.c
new file mode 100644
index 000000000000..f6022aa59bd0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_u2k_and_kill.c
@@ -0,0 +1,155 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 08 01:48:57 2020
+ */
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+
+
+/*
+ * Abnormal case guarding the u2k interface; it can OOM, run it alone; it passes as long as the kernel does not crash.
+ *
+ * Multiple processes with multiple threads run u2k concurrently, then processes are killed at random.
+ * Memory leaks are possible.
+ */
+#define CHILD_NUM 5
+#define THREAD_PER_PROCESS 10
+#define KILL_NUM 1000
+
+static void *child_thread(void *arg)
+{
+ int ret;
+ struct sp_make_share_info u2k_info = *(struct sp_make_share_info *)arg;
+ while (1) {
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+			return (void *)(long)ret;
+ } else {
+ //pr_info("ioctl_u2k success");
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+			return (void *)(long)ret;
+ }
+ }
+
+ return NULL;
+}
+
+static int child_process(void)
+{
+ int ret;
+ int psize = getpagesize();
+ char *user_addr = malloc(psize);
+ if (user_addr == NULL) {
+ pr_info("malloc failed, errno: %d", errno);
+ return -1;
+ }
+ memset((void *)user_addr, 'q', psize);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = (unsigned long)user_addr,
+ .size = psize,
+ .pid = getpid(),
+ };
+
+ pthread_t threads[THREAD_PER_PROCESS] = {0};
+ for (int i = 0; i < THREAD_PER_PROCESS; i++) {
+ ret = pthread_create(threads + i, NULL, child_thread, &u2k_info);
+ if (ret < 0) {
+ pr_info("pthread_create failed, err:%d", ret);
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < THREAD_PER_PROCESS; i++)
+ if (threads[i])
+ pthread_join(threads[i], NULL);
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ pid_t child[CHILD_NUM] = {0};
+
+ for (int i = 0; i < CHILD_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ ret = -1;
+ goto out;
+ } else if (pid == 0) {
+ exit(child_process());
+ }
+ child[i] = pid;
+ }
+
+	unsigned int seed = time(NULL);
+	srand(seed);
+	pr_info("rand seed: %u", seed);
+
+ int count = 0;
+ for (int i = 0; i < KILL_NUM; i++) {
+ int idx = rand() % CHILD_NUM;
+ if (child[idx] > 0) {
+ kill(child[idx], SIGKILL);
+ waitpid(child[idx], NULL, 0);
+
+ pr_info("count: %d, kill child: %d, pid: %d", ++count, idx, child[idx]);
+
+ child[idx] = 0;
+ } else {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ ret = -1;
+ goto out;
+ } else if (pid == 0) {
+ exit(child_process());
+ }
+ child[idx] = pid;
+ pr_info("fork child: %d, pid: %d", idx, child[idx]);
+ }
+// sleep(1);
+ }
+
+out:
+ for (int i = 0; i < CHILD_NUM; i++)
+ if (child[i] > 0)
+ kill(child[i], SIGKILL);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Multiple processes with multiple threads run u2k concurrently, then processes are killed at random")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/kthread/Makefile b/tools/testing/sharepool/testcase/reliability_test/kthread/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/kthread/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/kthread/test_add_strange_task.c b/tools/testing/sharepool/testcase/reliability_test/kthread/test_add_strange_task.c
new file mode 100644
index 000000000000..e97ae46a2e81
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/kthread/test_add_strange_task.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 02:28:23 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Add a given process to a given group
+ */
+
+int main(int argc, char *argv[])
+{
+ if (argc != 3) {
+ printf("Usage:\n"
+			"\t%s <pid> <group_id>\n", argv[0]);
+ return -1;
+ }
+
+ int dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ pid_t pid = atoi(argv[1]);
+ int group_id = atoi(argv[2]);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ printf("add task(pid%d) to group(%d) failed, err: %s\n", pid, group_id, strerror(errno));
+ return -1;
+ } else {
+ printf("add task(pid%d) to group(%d) success\n", pid, group_id);
+ return 0;
+ }
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/kthread/test_del_kthread.c b/tools/testing/sharepool/testcase/reliability_test/kthread/test_del_kthread.c
new file mode 100644
index 000000000000..649e716ed769
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/kthread/test_del_kthread.c
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 02:28:23 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Join a group ourselves, then try to remove a given process from that group
+ */
+
+int main(int argc, char *argv[])
+{
+ int ret = 0;
+ if (argc != 3) {
+ printf("Usage:\n"
+			"\t%s <pid> <group_id>\n", argv[0]);
+ return -1;
+ }
+
+ int dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ pid_t pid = atoi(argv[1]);
+ int group_id = atoi(argv[2]);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ printf("add task(pid%d) to group(%d) failed, err: %s\n", ag_info.pid, group_id, strerror(errno));
+ goto error_out;
+ }
+
+	// try to remove the given process from the group
+ struct sp_del_from_group_info del_info = {
+ .pid = pid,
+ .spg_id = group_id,
+ };
+ ret = ioctl_del_from_group(dev_fd, &del_info);
+ if (ret < 0) {
+ printf("try delete task(pid%d) from group(%d) failed, err: %s\n", pid, group_id, strerror(errno));
+ goto error_out;
+ }
+
+ close_device(dev_fd);
+ return 0;
+
+error_out:
+ close_device(dev_fd);
+ return -1;
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/Makefile b/tools/testing/sharepool/testcase/reliability_test/others/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_judge_addr.c b/tools/testing/sharepool/testcase/reliability_test/others/test_judge_addr.c
new file mode 100644
index 000000000000..cdb168a0cddb
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_judge_addr.c
@@ -0,0 +1,104 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 08 20:38:39 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: query the kva obtained from u2k with is_sharepool_addr; expected false.
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, u2k_info.addr)) {
+ pr_info("invalid address as expected, errno: %d", errno);
+ } else {
+ pr_info("valid address unexpected");
+ return -1;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "query the kva a process gets from u2k with is_sharepool_addr; expect false")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_kill_sp_process.c b/tools/testing/sharepool/testcase/reliability_test/others/test_kill_sp_process.c
new file mode 100644
index 000000000000..37093689a59a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_kill_sp_process.c
@@ -0,0 +1,430 @@
+#include "sharepool_lib.h"
+#include "sem_use.h"
+#include <pthread.h>
+#include <stdlib.h>
+
+#define ALLOC_TEST_TYPES 4
+
+#define PROCESS_NUM 5
+#define GROUP_NUM 2
+#define KILL_TIME 1000
+
+static int group_ids[GROUP_NUM];
+static int semid;
+static struct vmalloc_info vmalloc_infos[PROCESS_NUM][GROUP_NUM] = {0};
+static struct vmalloc_info vmalloc_huge_infos[PROCESS_NUM][GROUP_NUM] = {0};
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else {
+ for (int i = 0; i < GROUP_NUM; i++)
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ }
+
+ return ret;
+}
+
+/*
+ * alloc - u2k - vmalloc - k2u - unshare - vfree - unshare - free
+ * Cover hugepage, hugepage DVPP, normal page and normal-page DVPP.
+ * Multi-group: run against every group (query the groups the process belongs to).
+ */
+static int thread_and_process_helper(int arg)
+{
+ pr_info("thread_and_process_helper start.");
+ int ret, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TEST_TYPES] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TEST_TYPES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ char *addr;
+
+ int process_index = arg / 100;
+ int group_index = arg % 100;
+ int group_id = group_ids[group_index];
+
+ /* check sp group */
+ pid = getpid();
+
+	// hugepage
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// hugepage DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal page DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal page
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ //pr_info("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_infos[process_index][group_index].size = 3 * PAGE_SIZE;
+ vmalloc_huge_infos[process_index][group_index].size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &(vmalloc_infos[process_index][group_index]));
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &(vmalloc_huge_infos[process_index][group_index]));
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_infos[process_index][group_index].addr;
+ k2u_info.size = vmalloc_infos[process_index][group_index].size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_infos[process_index][group_index].addr;
+ k2u_huge_info.size = vmalloc_huge_infos[process_index][group_index].size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		//pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		//pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &(vmalloc_infos[process_index][group_index]));
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &(vmalloc_huge_infos[process_index][group_index]));
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ //close_device(dev_fd);
+ return 0;
+
+error:
+ //close_device(dev_fd);
+ return -1;
+}
+
+void *thread(void *arg)
+{
+	int ret = 0;
+	int count = 0;
+	while (1) {
+		ret = thread_and_process_helper((int)(long)arg);
+		pr_info("thread finished, count: %d", ++count);
+		if (ret < 0) {
+			pr_info("thread_and_process_helper failed, spg %d.", (int)(long)arg);
+			return (void *)-1;
+		}
+		sem_inc_by_one(semid);
+		int sem_val = sem_get_value(semid);
+		pr_info("thread run %d times, %d left.", sem_val, KILL_TIME - sem_val);
+	}
+
+	return (void *)(long)ret;
+}
+
+static int process(int index)
+{
+ int ret = 0;
+
+	// create one thread per group, each looping on the helper
+ pthread_t threads[GROUP_NUM];
+ for (int i = 0; i < GROUP_NUM; i++) {
+		ret = pthread_create(threads + i, NULL, thread, (void *)(long)(index * 100 + i));
+ if (ret < 0) {
+ pr_info("pthread %d create failed.", i);
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < GROUP_NUM; i++) {
+ void *tret;
+ ret = pthread_join(threads[i], &tret);
+ if (ret < 0) {
+ pr_info("pthread %d join failed.", i);
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", i);
+ ret = -1;
+ }
+ }
+
+ return ret;
+}
+
+static int vfreeAll()
+{
+ int ret = 0;
+ for (int i = 0; i < PROCESS_NUM; i++) {
+ for (int j = 0; j < GROUP_NUM; j++) {
+ ioctl_vfree(dev_fd, &(vmalloc_infos[i][j]));
+ ioctl_vfree(dev_fd, &(vmalloc_huge_infos[i][j]));
+ }
+ }
+ return ret;
+}
+
+/*
+ * testcase1: fork N processes; each joins all groups and spawns M threads running
+ * sharepool tasks (u2k/k2u...); kill all processes concurrently and expect a clean finish.
+ */
+static int testcase1(void)
+{
+	int ret = 0;
+
+ semid = sem_create(1234, "kill all after N times api calls");
+ sem_set_value(semid, 0);
+
+	// fork N processes; each joins all groups
+ pid_t childs[PROCESS_NUM];
+ for (int i = 0; i < PROCESS_NUM; i++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+ if (add_multi_group())
+ return -1;
+ if (check_multi_group())
+ return -1;
+ pr_info("%s add %dth child to all groups success, %d left",
+ __FUNCTION__, i + 1, PROCESS_NUM - i - 1);
+ exit(process(i));
+ } else {
+ childs[i] = pid_child;
+ pr_info("fork child%d, pid: %d", i, pid_child);
+ }
+ }
+
+	// once the counter reaches KILL_TIME, kill all processes concurrently
+ sem_dec_by_val(semid, KILL_TIME);
+ pr_info("start to kill all process...");
+ for (int i = 0; i < PROCESS_NUM; i++) {
+ kill(childs[i], SIGKILL);
+ }
+
+	// waitpid: the children are SIGKILLed, so a signalled exit is the expected outcome
+	for (int i = 0; i < PROCESS_NUM; i++) {
+		int status;
+		waitpid(childs[i], &status, 0);
+		if (WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL) {
+			pr_info("child%d killed as expected, %d processes left", i, PROCESS_NUM - i - 1);
+		} else {
+			pr_info("child%d exited unexpectedly, status: %d", i, status);
+			ret = -1;
+		}
+		childs[i] = 0;
+	}
+
+ // vfree the vmalloc memories
+ if (vfreeAll() < 0) {
+ pr_info("not all are vfreed.");
+ }
+
+ sem_close(semid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "fork N processes; each joins all groups and spawns M threads running sharepool tasks (u2k/k2u...); kill all processes concurrently and expect a clean finish.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_kthread.c b/tools/testing/sharepool/testcase/reliability_test/others/test_kthread.c
new file mode 100644
index 000000000000..a7d0b5dc0d73
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_kthread.c
@@ -0,0 +1,195 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 08 20:38:39 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ int ret;
+ struct sp_kthread_info info;
+ ret = ioctl_kthread_start(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread start failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_kthread_end(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread end failed, errno: %d", errno);
+ return ret;
+ }
+
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret;
+ int spg_id = 1;
+ void *addr;
+ unsigned long va;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add group failed");
+ return -1;
+ }
+
+ addr = wrap_sp_alloc(spg_id, 4096, 0);
+ if (addr == (void *)-1) {
+ pr_info("alloc failed");
+ return -1;
+ }
+ va = (unsigned long)addr;
+
+ struct sp_kthread_info info = {
+ .type = 1,
+ .addr = va,
+ .size = 4096,
+ .spg_id = spg_id,
+ };
+
+ ret = ioctl_kthread_start(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread start failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_kthread_end(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread end failed, errno: %d", errno);
+ return ret;
+ }
+
+ return ret;
+}
+
+static int prepare(struct vmalloc_info *ka_info, bool ishuge)
+{
+ int ret;
+ if (ishuge) {
+ ret = ioctl_vmalloc_hugepage(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc_hugepage failed, errno: %d", errno);
+ return -1;
+ }
+ } else {
+ ret = ioctl_vmalloc(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = ka_info->addr,
+ .size = ka_info->size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, ka_info);
+ }
+ return ret;
+}
+
+static int testcase3(void)
+{
+ int ret;
+ int spg_id = 1;
+ unsigned long flag = 0;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = spg_id,
+ .sp_flags = flag,
+ .pid = getpid(),
+ };
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0)
+ return -1;
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+		pr_info("testcase3 ioctl_k2u failed unexpectedly, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return -1;
+ } else {
+		pr_info("testcase3 ioctl_k2u succeeded as expected");
+ }
+
+ struct sp_kthread_info info = {
+ .type = 2,
+ .addr = k2u_info.addr,
+ .size = k2u_info.size,
+ .spg_id = spg_id,
+ };
+
+ ret = ioctl_kthread_start(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread start failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ sleep(2);
+
+ ret = ioctl_kthread_end(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread end failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "kernel thread invocation")
+	TESTCASE_CHILD(testcase2, "kernel thread calls sp free")
+	TESTCASE_CHILD(testcase3, "kernel thread calls sp unshare")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_mmap_sp_address.c b/tools/testing/sharepool/testcase/reliability_test/others/test_mmap_sp_address.c
new file mode 100644
index 000000000000..01416b647cbd
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_mmap_sp_address.c
@@ -0,0 +1,223 @@
+#include <stdio.h>
+#include <unistd.h>
+#include <errno.h>
+#include <stdbool.h>
+#include <stdlib.h>
+#include "sharepool_lib.h"
+
+#define start_addr 0xe80000000000UL
+#define end_addr 0xf80000000000UL
+
+static int try_mmap(void *addr, unsigned long size)
+{
+ int *result;
+ int ret = 0;
+ result = mmap(addr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
+ if (result == MAP_FAILED) {
+ printf("mmap failed as expected, errno is %d\n", errno);
+ ret = 0;
+	} else {
+ printf("mmap success unexpected, addr is %lx\n", (unsigned long)result);
+ ret = -1;
+ }
+ return ret;
+}
+
+/* testcase1: try to mmap() a sharepool address (unused); expect failure */
+static int testcase1(void)
+{
+ int ret;
+
+	void *addr = (void *)start_addr;
+
+ int *result = mmap(addr, sizeof(int), PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
+ if (result == MAP_FAILED) {
+ printf("mmap failed as expected, errno is %d\n", errno);
+ ret = 0;
+	} else {
+ printf("mmap success unexpected, addr is %lx\n", (unsigned long)result);
+ ret = -1;
+ }
+
+ return ret;
+}
+
+/* testcase2: try to mmap() a sharepool address (in use); expect failure */
+static int testcase2(void)
+{
+ int *result;
+ int ret;
+ int spg_id = 1;
+
+	// join a group, allocate memory, then mmap the returned address
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0)
+ return -1;
+
+ unsigned long addr;
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc first address is %lx", addr);
+
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc second address is %lx", addr);
+
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+ pr_info("sp_alloc third address is %lx", addr);
+
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+ pr_info("sp_alloc fourth address is %lx", addr);
+
+	result = mmap((void *)addr, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
+ if (result == MAP_FAILED) {
+ printf("mmap failed as expected, errno is %d\n", errno);
+ ret = 0;
+	} else {
+ printf("mmap success unexpected, addr is %lx\n", (unsigned long)result);
+ ret = -1;
+ }
+
+ return ret;
+}
+
+/* testcase3: join a group, allocate, munmap the returned address, then sp_free */
+static int testcase3(void)
+ int *result;
+ int ret;
+ int spg_id = 1;
+
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0)
+ return -1;
+
+ unsigned long addr1;
+ addr1 = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc small page is %lx", addr1);
+
+	// try to munmap the small page
+	ret = munmap((void *)addr1, PAGE_SIZE);
+ if (ret < 0)
+ pr_info("munmap failed");
+ else
+ pr_info("munmap success");
+
+
+	// try to munmap the hugepage
+ unsigned long addr2;
+ addr2 = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+ pr_info("sp_alloc hugepage is %lx", addr2);
+
+	ret = munmap((void *)addr2, PMD_SIZE);
+ if (ret < 0) {
+ pr_info("munmap hugepage failed");
+ ret = 0;
+	} else {
+ pr_info("munmap hugepage success.");
+ return -1;
+ }
+
+	// then sp_free the hugepage
+ ret = wrap_sp_free(addr2);
+ if (ret < 0) {
+ pr_info("sp_free hugepage failed.");
+ } else {
+ pr_info("sp_free hugepage success.");
+ }
+
+ return ret;
+}
+
+/* testcase4: join a group, allocate, mmap and munmap the returned address, then sp_free. */
+static int testcase4(void)
+{
+ int *result;
+ int ret;
+ int spg_id = 1;
+
+ // alloc
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0)
+ return -1;
+
+ unsigned long addr1;
+ addr1 = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc addr1 is %lx", addr1);
+
+ // mmap & munmap
+	result = mmap((void *)addr1, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
+ if (result == MAP_FAILED) {
+ printf("mmap addr1 failed as expected, errno is %d\n", errno);
+ ret = 0;
+	} else {
+ printf("mmap addr1 success unexpected, addr is %lx\n", (unsigned long)result);
+		ret = munmap((void *)addr1, PAGE_SIZE);
+ if (ret < 0)
+ pr_info("munmap after mmap failed");
+ else
+ pr_info("munmap after mmap success");
+ }
+
+	// then free it
+ ret = wrap_sp_free(addr1);
+ if (ret < 0) {
+ pr_info("sp_free addr1 failed.");
+ } else {
+ pr_info("sp_free addr1 success.");
+ }
+
+ return ret;
+}
+
+/* testcase5: interleave alloc and mmap */
+static int testcase5(void)
+{
+ int *result;
+ int ret;
+ int spg_id = 1;
+
+	// join a group, allocate memory, then mmap the returned address
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0)
+ return -1;
+
+	unsigned long addr;
+	addr = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+	pr_info("sp_alloc first address is %lx", addr);
+	ret = try_mmap((void *)addr, PAGE_SIZE);
+
+	addr = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+	pr_info("sp_alloc second address is %lx", addr);
+	ret = try_mmap((void *)addr, PAGE_SIZE);
+
+	addr = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+	pr_info("sp_alloc third address is %lx", addr);
+	ret = try_mmap((void *)addr, PMD_SIZE);
+
+	addr = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+	pr_info("sp_alloc fourth address is %lx", addr);
+	ret = try_mmap((void *)addr, PMD_SIZE);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "try to mmap() a sharepool address (unused); expect failure")
+	TESTCASE_CHILD(testcase2, "try to mmap() a sharepool address (in use); expect failure")
+	TESTCASE_CHILD(testcase3, "join a group, allocate, munmap the returned address (expect failure), then sp_free (expect success)")
+	TESTCASE_CHILD(testcase4, "join a group, allocate, mmap and munmap the returned address (expect failure), then sp_free (expect success)")
+	TESTCASE_CHILD(testcase5, "interleave alloc and mmap")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_notifier_block.c b/tools/testing/sharepool/testcase/reliability_test/others/test_notifier_block.c
new file mode 100644
index 000000000000..856142a15a85
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_notifier_block.c
@@ -0,0 +1,101 @@
+#include <stdlib.h>
+#include <errno.h>
+#include <string.h>
+#include "sharepool_lib.h"
+
+#define PROC_NUM 4
+#define GROUP_ID 1
+
+int register_notifier_block(int id)
+{
+ int ret = 0;
+
+ /* see sharepool_dev.c for more details*/
+ struct sp_notifier_block_info notifier_info = {
+ .i = id,
+ };
+ ret = ioctl_register_notifier_block(dev_fd, ¬ifier_info);
+ if (ret != 0)
+ pr_info("proc %d register notifier block %d failed. ret is %d. errno is %s.",
+ getpid(), id, ret, strerror(errno));
+ else
+ pr_info("proc %d register notifier for func %d success!!", getpid(), id);
+
+ return ret;
+}
+
+int unregister_notifier_block(int id)
+{
+ int ret = 0;
+
+ /* see sharepool_dev.c for more details*/
+ struct sp_notifier_block_info notifier_info = {
+ .i = id,
+ };
+ ret = ioctl_unregister_notifier_block(dev_fd, ¬ifier_info);
+ if (ret != 0)
+ pr_info("proc %d unregister notifier block %d failed. ret is %d. errno is %s.",
+ getpid(), id, ret, strerror(errno));
+ else
+ pr_info("proc %d unregister notifier for func %d success!!", getpid(), id);
+
+ return ret;
+}
+
+static int testcase1(void)
+{
+ pid_t childs[PROC_NUM];
+
+ register_notifier_block(1);
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1) < 0) {
+ pr_info("add group failed.");
+ return -1;
+ }
+ while (1) {
+
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(2);
+
+ for (int i = 0; i < PROC_NUM / 2; i++) {
+ pr_info("group %d is exiting...", i + 1);
+ KILL_CHILD(childs[i]);
+ }
+
+ unregister_notifier_block(1);
+ sleep(2);
+
+ for (int i = PROC_NUM / 2; i < PROC_NUM; i++) {
+ pr_info("group %d is exiting...", i + 1);
+ KILL_CHILD(childs[i]);
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "test the effect of registering a group-destroy notifier")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/reliability_test.sh b/tools/testing/sharepool/testcase/reliability_test/reliability_test.sh
new file mode 100755
index 000000000000..7d47a35cfd0d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/reliability_test.sh
@@ -0,0 +1,51 @@
+#!/bin/sh
+
+set -x
+
+for line in test_add_group1 \
+ test_unshare1 \
+ test_unshare2 \
+ test_unshare3 \
+ test_unshare4 \
+ test_unshare5 \
+ test_unshare6 \
+ test_unshare7 \
+ test_malloc_u2k \
+ test_judge_addr
+do
+ ./reliability_test/$line
+	if [ $? -ne 0 ]; then
+ echo "testcase reliability_test/$line failed"
+ exit 1
+ fi
+done
+
+# Abnormal case guarding the u2k interface; it may OOM, so run it alone. Pass if the kernel survives.
+./reliability_test/test_u2k_and_kill
+
+# Abnormal case guarding the k2u interface; it may OOM, so run it alone. Pass if the kernel survives.
+./reliability_test/test_k2u_and_kill
+
+# add the current shell to group 11; pass if no crash
+./reliability_test/test_add_strange_task $$ 11
+# add a kthread (pid 2) to group 99999; pass if no crash
+./reliability_test/test_add_strange_task 2 99999
+# add the init process to group 10; pass if no crash
+#./reliability_test/test_add_strange_task 1 11
+# add a daemon to group 12; pass if no crash (in ps output, names in square brackets are kernel daemons)
+#./reliability_test/test_add_strange_task 2 11
+
+# While the coredump program runs, other processes in the same group perform basic
+# share pool operations. Reuse existing cases instead of writing new code, but note:
+# 1. check that the two cases use the same share group id
+# 2. trigger test_coredump's coredump before the background case finishes
+./test_mult_process/test_proc_interface_process 1 &
+./reliability_test/test_coredump
+
+# construct external fragmentation; not run by default
+# 120000 * 4K ~= 500MB. Please kill one of the process after allocation is done.
+#./reliability_test/test_external_fragmentation 100000 & ./reliability_test/test_external_fragmentation 100000 &
+#echo "now sleep 20 sec, please kill one of the process above"
+#sleep 20
+#./reliability_test/test_external_fragmentation_trigger
+
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_add_group/Makefile b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_exiting_task.c b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_exiting_task.c
new file mode 100644
index 000000000000..a72eceffa38c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_exiting_task.c
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 28 02:21:42 2020
+ */
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ int ret = 0;
+
+ srand((unsigned int)time(NULL));
+ for (int i = 0; i < 10000; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ if (!(i % 100))
+ pr_info("child process %d", i);
+ exit(0);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = rand() % (SPG_ID_AUTO_MIN - 1) + 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d, pid: %d, spg_id: %d",
+ errno, pid, ag_info.spg_id);
+ }
+ waitpid(pid, NULL, 0);
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "add a task that is exiting to a group.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_group1.c b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_group1.c
new file mode 100644
index 000000000000..fb64b0c379c9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_group1.c
@@ -0,0 +1,118 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 15:24:35 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_CHILD 100
+#define CHILD_EXIT 5
+
+static int group_id = SPG_ID_MAX;
+/*
+ * Multiple processes join a given group concurrently; all of them are expected to succeed.
+ */
+
+static int testcase1(void)
+{
+ pid_t child_pid;
+ int status;
+ int ret;
+	int child_result = 0;
+
+ pr_info("start test: child num = %d", MAX_CHILD);
+ for (int i = 0; i < MAX_CHILD; i++) {
+ child_pid = fork();
+		if (child_pid == -1 || child_pid == 0) { // ensure only the parent keeps forking
+ break;
+ }
+ }
+	if (child_pid == -1) {
+		pr_info("fork failed");
+		ret = -1;
+	} else if (child_pid == 0) {
+ int child_ret = 0;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+
+ child_ret = ioctl_add_group(dev_fd, &ag_info);
+ sleep(10);
+ if (child_ret == 0) {
+ exit(EXIT_SUCCESS);
+ } else if (child_ret < 0 && errno == EPERM) {
+ exit(CHILD_EXIT);
+ } else {
+ pr_info("ioctl_add_group failed unexpected: errno = %d", errno);
+ exit(EXIT_FAILURE);
+ }
+
+ } else {
+ for (int i = 0; i < MAX_CHILD; i++) {
+ ret = waitpid(-1, &status, 0);
+ if (WIFEXITED(status)) {
+ pr_info("child execute success, ret value : %d, pid = %d", WEXITSTATUS(status), ret);
+ if (WEXITSTATUS(status) == CHILD_EXIT) {
+ child_result++;
+ }
+ } else if (WIFSIGNALED(status)) {
+ pr_info("child execute failed, killed by signal : %d, pid = %d", WTERMSIG(status), ret);
+ ret = -1;
+ goto error_out;
+ } else if (WIFSTOPPED(status)) {
+				printf("child execute failed, stopped by signal : %d, pid = %d\n", WSTOPSIG(status), ret);
+ ret = -1;
+ goto error_out;
+ } else {
+				printf("child execute failed, WIFEXITED(status) : %d, pid = %d\n", WIFEXITED(status), ret);
+ ret = -1;
+ goto error_out;
+ }
+ }
+ if (child_result == 0) {
+ pr_info("testcase success!!");
+ ret = 0;
+ } else {
+			pr_info("testcase failed!! %d children failed to join the group", child_result);
+ ret = -1;
+ goto error_out;
+ }
+ }
+
+error_out:
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "multiple processes join a given group concurrently; all are expected to succeed.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_strange_task.c b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_strange_task.c
new file mode 100644
index 000000000000..e97ae46a2e81
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_strange_task.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 02:28:23 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Add a given process to a given group.
+ */
+
+int main(int argc, char *argv[])
+{
+ if (argc != 3) {
+ printf("Usage:\n"
+		       "\t%s <pid> <group_id>\n", argv[0]);
+ return -1;
+ }
+
+ int dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ pid_t pid = atoi(argv[1]);
+ int group_id = atoi(argv[2]);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ printf("add task(pid%d) to group(%d) failed, err: %s\n", pid, group_id, strerror(errno));
+ return -1;
+ } else {
+ printf("add task(pid%d) to group(%d) success\n", pid, group_id);
+ return 0;
+ }
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/Makefile b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare1.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare1.c
new file mode 100644
index 000000000000..08d14a6c2d95
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare1.c
@@ -0,0 +1,325 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 18:27:26 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * After sharing memory, free the memory first and then stop sharing.
+ * testcase1: after vmalloc and k2task, vfree the kva directly (expected to succeed), then unshare the uva (expected to succeed).
+ * testcase2: after vmalloc and k2spg, vfree the kva directly (expected to fail), then unshare the uva (expected to succeed).
+ * testcase3: parent vmallocs, child joins a group, parent does k2u to the child's group, child exits and the group is destroyed, then the parent vfrees. (Succeeds, no errors.)
+ * testcase4: after sp_alloc and u2k, sp_free the uva directly (expected to succeed), then unshare the kva (expected to succeed).
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ struct vmalloc_info vmalloc_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+	ret = ioctl_vfree(dev_fd, &vmalloc_info); /* expected to fail */
+ if (ret < 0) {
+ pr_info("ioctl vfree failed for the first time, errno: %d", errno);
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ }
+	ret = ioctl_vfree(dev_fd, &vmalloc_info); /* expected to succeed */
+ if (ret < 0) {
+ pr_info("ioctl vfree failed for the second time, errno: %d", errno);
+ }
+ return ret;
+}
+
+#define GROUP_ID 1
+static int testcase2(void)
+{
+ int ret;
+
+ /* k2u prepare*/
+ struct vmalloc_info vmalloc_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2spg_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("process add group %d failed, errno: %d", GROUP_ID, errno);
+ } else {
+ pr_info("process %d add group %d success", getpid(), GROUP_ID);
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2spg_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ } else {
+ pr_info("process k2u success");
+ }
+
+	/* the vm_area has SP_FLAG set, vfree shall cause a warning */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ //ret = sharepool_log("vfree without unshare");
+
+ ret = ioctl_unshare(dev_fd, &k2spg_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ }
+ //ret = sharepool_log("after unshare");
+
+	ioctl_vfree(dev_fd, &vmalloc_info); /* expected to succeed */
+ //ret = sharepool_log("vfree again");
+
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret;
+ sem_t *sem_addgroup, *sem_k2u;
+ sem_addgroup = sem_open("/child_process_add_group_finish", O_CREAT, O_RDWR, 0);
+ sem_k2u = sem_open("/child_process_k2u_finish", O_CREAT, O_RDWR, 0);
+
+ /* k2u prepare*/
+ struct vmalloc_info vmalloc_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2spg_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ /* fork child */
+ pid_t child = fork();
+ if (child == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("child process add group %d failed, errno: %d", GROUP_ID, errno);
+ } else {
+ pr_info("child process add group %d success", GROUP_ID);
+ }
+ sem_post(sem_addgroup);
+		/* wait for the parent to share the kernel address with the child */
+ sem_wait(sem_k2u);
+ exit(0);
+ }
+ pr_info("parent process is %d, child process is %d", getpid(), child);
+
+ sem_wait(sem_addgroup);
+ /* k2u to child */
+ k2spg_info.pid = child;
+ ret = ioctl_k2u(dev_fd, &k2spg_info);
+ if (ret < 0) {
+		pr_info("ioctl_k2u failed as expected, errno: %d", errno);
+ ret = 0;
+ } else {
+		pr_info("parent process %d k2u succeeded unexpectedly.", getpid());
+ ret = -1;
+ }
+ sem_post(sem_k2u);
+
+ int status;
+ waitpid(child, &status, 0);
+ pr_info("child process %d exited.", child);
+
+	/* the vm_area has SP_FLAG set, vfree shall cause a warning */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+
+ sem_unlink("/child_process_add_group_finish");
+ sem_unlink("/child_process_k2u_finish");
+ return ret;
+}
+
+static int testcase4(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ } else {
+ pr_info("process %d add group %d success.", getpid(), ag_info.spg_id);
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'q',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret == 0) {
+		pr_info("ioctl_free succeeded as expected, ret: %d, errno: %d", ret, errno);
+ } else {
+ pr_info("ioctl_free result unexpected, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ ioctl_unshare(dev_fd, &u2k_info);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "After vmalloc and k2task, vfree the kva directly (expected to succeed), then unshare the uva (expected to succeed)")
+	TESTCASE_CHILD(testcase2, "After vmalloc and k2spg, vfree the kva directly (expected to fail), then unshare the uva (expected to succeed)")
+	TESTCASE_CHILD(testcase3, "Parent vmallocs, child joins a group, parent does k2u to the child's group, child exits and the group is destroyed, then the parent vfrees. (Succeeds, no errors.)")
+	TESTCASE_CHILD(testcase4, "After sp_alloc and u2k, sp_free the uva directly (expected to succeed), then unshare the kva (expected to succeed)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare2.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare2.c
new file mode 100644
index 000000000000..fc1eeb623f49
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare2.c
@@ -0,0 +1,202 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 01 09:10:50 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: after process A allocates and shares memory, process B in a different group stops the sharing (B has A's pid and spg_id).
+ * testcase2: after process A allocates memory, process B in a different group frees it.
+ */
+
+/*
+ * For a kva that process A obtained via u2k, there is no way to tell who the legitimate unshare caller is. If process B guesses the kva and unshares it, that also succeeds.
+ * This has always been a gap in the design. We assume user space in the field cannot obtain a kva and cannot call unshare on one; only a driver in kernel mode can.
+ */
+
+#define TESTCASE2_MSG_KEY 20
+#define TESTCASE2_MSG_TYPE 200
+// Use a message queue to pass struct sp_alloc_info between processes
+struct msgbuf_alloc_info {
+ long type;
+ struct sp_alloc_info alloc_info;
+};
+
+static int testcase2_child(sem_t *sync, sem_t *grandsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(TESTCASE2_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), TESTCASE2_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0 && errno == EPERM) {
+ pr_info("ioctl_free failed as expected, errno: %d", errno);
+ } else {
+		pr_info("ioctl_free succeeded unexpectedly, ret: %d", ret);
+ goto error_out;
+ }
+
+ sem_post(grandsync);
+ return 0;
+
+error_out:
+ sem_post(grandsync);
+ return -1;
+}
+
+static int testcase2(struct sp_alloc_info *alloc_info)
+{
+ int ret, status = 0;
+ int group_id = alloc_info->spg_id;
+
+ char *sync_name = "/testcase2_sync";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *grandsync_name = "/testcase2_childsync";
+ sem_t *grandsync = sem_open(grandsync_name, O_CREAT, O_RDWR, 0);
+	if (grandsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(grandsync_name);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase2_child(sync, grandsync));
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ int msgid = msgget(TESTCASE2_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = TESTCASE2_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, alloc_info, sizeof(*alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(*alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ memset((void *)alloc_info->addr, 'a', alloc_info->size);
+
+ sem_post(sync);
+ ret = sem_wait(grandsync);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ } else {
+ goto out;
+ }
+
+error_out:
+ kill(pid, SIGKILL);
+out:
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child failed!!");
+ return -1;
+ } else
+ pr_info("child success!!");
+
+ if (ioctl_free(dev_fd, alloc_info) < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return -1;
+ }
+ return ret;
+}
+
+struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 100,
+};
+
+static int test2(void) { return testcase2(&alloc_info); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(test2, "After process A allocates memory, process B in a different group frees it")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare3.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare3.c
new file mode 100644
index 000000000000..9735b1e30d72
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare3.c
@@ -0,0 +1,243 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 01 19:49:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * After memory is shared, the other party stops the sharing. The unshare is expected to fail.
+ * testcase1: after k2u, the user-space process stops the sharing. (The user-space process calls unshare with the kernel vm area address; expected to fail.)
+ * testcase2: after k2u, the user-space process stops the sharing. (The user-space process calls unshare with its own in-process vma address; expected to succeed.)
+ * testcase3: after u2k, the kernel module stops the sharing.
+ */
+static int testcase2(void)
+{
+ int ret;
+
+ struct vmalloc_info kva_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &kva_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = kva_info.addr,
+ .size = kva_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = kva_info.addr,
+ .size = kva_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out;
+ }
+
+ unsigned long user_va = k2u_info.addr; // user vma
+ //k2u_info.addr = kva_info.addr; //kernel vm area
+	ret = ioctl_unshare(dev_fd, &k2u_info); // user process tries to unshare the user vma (back to kernel), shall succeed
+	if (ret < 0) {
+		pr_info("ioctl_unshare failed unexpectedly, errno: %d", errno);
+		ret = -1;
+	} else {
+		pr_info("ioctl_unshare succeeded as expected");
+		ret = 0;
+	}
+
+ /*
+ k2u_info.addr = user_va;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ ret = -1;
+ goto out;
+ }
+ */
+
+out:
+ ioctl_vfree(dev_fd, &kva_info);
+ return ret;
+}
+
+static int testcase3(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ unsigned long kernel_va = u2k_info.addr;
+ u2k_info.addr = alloc_info.addr;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ //pr_info("ioctl_unshare failed as expected, errno: %d", errno);
+ ret = 0;
+ } else {
+ pr_info("ioctl_unshare result unexpected, ret: %d, errno: %d", ret, errno);
+ ret = -1;
+ }
+
+ /*
+ u2k_info.addr = kernel_va;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ */
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ ret = -1;
+ } else {
+ pr_info("u2k area freed.");
+ }
+ return ret;
+}
+
+static int testcase4(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ unsigned long kernel_va = u2k_info.addr;
+ //u2k_info.addr = alloc_info.addr;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+		pr_info("ioctl_unshare failed unexpectedly, errno: %d", errno);
+ ret = -1;
+ } else {
+		pr_info("ioctl_unshare succeeded as expected.");
+ ret = 0;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ ret = -1;
+ } else {
+ pr_info("u2k area freed.");
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+// TESTCASE_CHILD(testcase1, false)
+	TESTCASE_CHILD(testcase2, "After k2u, the user-space process stops the sharing. (Calls unshare with its own in-process vma address; expected to succeed.)")
+	TESTCASE_CHILD(testcase3, "After u2k, the kernel module stops the sharing.")
+ TESTCASE_CHILD(testcase4, "TBD")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare4.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare4.c
new file mode 100644
index 000000000000..2edfe9ffbd06
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare4.c
@@ -0,0 +1,516 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 02 09:38:57 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_PROC_PER_GRP 500
+#define CHILD_EXIT 5
+
+static int process_pre_group = 100;
+static sem_t *sync1[MAX_PROC_PER_GRP];
+static sem_t *grandsync1[MAX_PROC_PER_GRP];
+static sem_t *sync2[MAX_PROC_PER_GRP];
+static sem_t *grandsync2[MAX_PROC_PER_GRP];
+
+/*
+ * testcase1: after process A allocates and shares memory, every process in the group calls unshare concurrently. Only one of them is expected to succeed.
+ * -> The idea behind this case is sound, but sp_unshare_kva cannot guard against the concurrency.
+ * testcase2: after process A allocates memory, every process in the group calls free concurrently. Only one of them is expected to succeed.
+ */
+#if 0
+#define TESTCASE1_MSG_KEY 10
+// Use a message queue to pass struct sp_make_share_info between processes
+struct msgbuf_share_info {
+ long type;
+ struct sp_make_share_info u2k_info;
+};
+
+static int testcase1_child(int num)
+{
+ int ret;
+	/* wait for the parent to join the group */
+ do {
+ ret = sem_wait(sync1[num]);
+ } while (ret < 0 && errno == EINTR);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+	/* notify the parent that joining the group succeeded */
+ sem_post(grandsync1[num]);
+ if (group_id < 0) {
+ pr_info("ioctl_find_group_by_pid failed, %d", group_id);
+ return -1;
+ }
+
+	/* wait for the parent to share */
+ do {
+ ret = sem_wait(sync1[num]);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(TESTCASE1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_share_info msgbuf = {0};
+ struct sp_make_share_info *u2k_info = &msgbuf.u2k_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*u2k_info), (num + 1), IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ret = ioctl_unshare(dev_fd, u2k_info);
+ if (ret == 0) {
+ pr_info("ioctl_unshare success");
+ } else {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ ret = CHILD_EXIT;
+ }
+
+ sem_post(grandsync1[num]);
+ return ret;
+
+error_out:
+ sem_post(grandsync1[num]);
+ return -1;
+}
+
+static int testcase1(struct sp_alloc_info *alloc_info)
+{
+ int ret, status = 0;
+ int group_id = alloc_info->spg_id;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ int child_succ = 0;
+ int child_fail = 0;
+
+ for (int i = 0; i < process_pre_group; i++) {
+ char buf[100];
+ sprintf(buf, "/testcase1_sync%d", i);
+ sync1[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (sync1[i] == SEM_FAILED) {
+ pr_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ char buf[100];
+ sprintf(buf, "/testcase1_childsync%d", i);
+ grandsync1[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandsync1[i] == SEM_FAILED) {
+ pr_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_child(i));
+ } else {
+ pr_info("fork grandchild%d, pid: %d", i, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add grandchild%d to group %d failed", i, group_id);
+ goto error_out;
+ } else
+ pr_info("add grandchild%d to group %d successfully", i, group_id);
+
+			/* notify the child that joining the group succeeded */
+ sem_post(sync1[i]);
+
+			/* wait for the child to pick up the group-join info */
+ do {
+ ret = sem_wait(grandsync1[i]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ memset((void *)alloc_info->addr, 'a', alloc_info->size);
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ int msgid = msgget(TESTCASE1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ struct msgbuf_share_info msgbuf = {0};
+ memcpy(&msgbuf.u2k_info, &u2k_info, sizeof(u2k_info));
+ for (int i = 0; i < process_pre_group; i++) {
+ msgbuf.type = i + 1;
+ ret = msgsnd(msgid, &msgbuf, sizeof(u2k_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+ }
+
+
+	/* notify the children that sharing succeeded */
+ for (int i = 0; i < process_pre_group; i++) {
+ sem_post(sync1[i]);
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ ret = sem_wait(grandsync1[i]);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ goto out;
+
+error_out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ if (kill(childs[i], SIGKILL) != 0) {
+ return -1;
+ }
+ }
+out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (WIFEXITED(status)) {
+			pr_info("grandchild%d exited, ret: %d", i, WEXITSTATUS(status));
+ if (WEXITSTATUS(status) == CHILD_EXIT) {
+ child_fail++;
+ } else if (WEXITSTATUS(status) == 0) {
+ child_succ++;
+ }
+ }
+ }
+
+ if (ioctl_free(dev_fd, alloc_info) < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return -1;
+ }
+
+ if (child_succ == 1 && child_fail == (process_pre_group - 1)) {
+ pr_info("testcase1: child unshare test success!!");
+ return 0;
+ } else {
+ pr_info("testcase1: child unshare test failed!! child_succ: %d, child_fail: %d", child_succ, child_fail);
+ return -1;
+ }
+
+ return ret;
+}
+#endif
+
+#define TESTCASE2_MSG_KEY 20
+// Use a message queue to pass struct sp_alloc_info between processes
+struct msgbuf_alloc_info {
+ long type;
+ struct sp_alloc_info alloc_info;
+};
+
+static int testcase2_child(int num)
+{
+ int ret;
+	/* wait for the parent to join the group */
+ do {
+ ret = sem_wait(sync2[num]);
+ } while (ret < 0 && errno == EINTR);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+	/* notify the parent that joining the group succeeded */
+ sem_post(grandsync2[num]);
+ if (group_id < 0) {
+ pr_info("ioctl_find_group_by_pid failed, %d", group_id);
+ return -1;
+ }
+
+	/* wait for the parent to share */
+ do {
+ ret = sem_wait(sync2[num]);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(TESTCASE2_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), (num + 1), IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret == 0) {
+ pr_info("ioctl_free success, errno: %d", errno);
+ } else {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ ret = CHILD_EXIT;
+ }
+
+ sem_post(grandsync2[num]);
+ return ret;
+
+error_out:
+ sem_post(grandsync2[num]);
+ return -1;
+}
+
+static int test(struct sp_alloc_info *alloc_info)
+{
+ int ret, status = 0;
+ int group_id = alloc_info->spg_id;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ int child_succ = 0;
+ int child_fail = 0;
+
+ for (int i = 0; i < process_pre_group; i++) {
+ char buf[100];
+ sprintf(buf, "/testcase2_sync%d", i);
+ sync2[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (sync2[i] == SEM_FAILED) {
+ pr_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ char buf[100];
+ sprintf(buf, "/testcase2_childsync%d", i);
+ grandsync2[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandsync2[i] == SEM_FAILED) {
+ pr_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase2_child(i));
+ } else {
+ pr_info("fork grandchild%d, pid: %d", i, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add grandchild%d to group %d failed", i, group_id);
+ goto error_out;
+ } else
+ pr_info("add grandchild%d to group %d successfully", i, group_id);
+
+			/* notify the child that joining the group succeeded */
+ sem_post(sync2[i]);
+
+			/* wait for the child to pick up the group-join info */
+ do {
+ ret = sem_wait(grandsync2[i]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ int msgid = msgget(TESTCASE2_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ memcpy(&msgbuf.alloc_info, alloc_info, sizeof(*alloc_info));
+ for (int i = 0; i < process_pre_group; i++) {
+ msgbuf.type = i + 1;
+ ret = msgsnd(msgid, &msgbuf, sizeof(*alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+ }
+
+ memset((void *)alloc_info->addr, 'a', alloc_info->size);
+
+	/* notify the children that alloc succeeded */
+ for (int i = 0; i < process_pre_group; i++) {
+ sem_post(sync2[i]);
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ ret = sem_wait(grandsync2[i]);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ goto out;
+
+error_out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ if (kill(childs[i], SIGKILL) != 0) {
+ return -1;
+ }
+ }
+out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (WIFEXITED(status)) {
+			pr_info("grandchild%d exited, ret: %d", i, WEXITSTATUS(status));
+ if (WEXITSTATUS(status) == CHILD_EXIT) {
+ child_fail++;
+ } else if (WEXITSTATUS(status) == 0) {
+ child_succ++;
+ }
+ }
+ }
+
+ if (child_succ == 1 && child_fail == (process_pre_group - 1)) {
+		pr_info("testcase2: child free test success!!");
+ return 0;
+ } else {
+		pr_info("testcase2: child free test failed!! child_succ: %d, child_fail: %d", child_succ, child_fail);
+ return -1;
+ }
+
+ return ret;
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_unshare4 -p process_num\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:")) != -1) {
+ switch (opt) {
+		case 'p': // number of child processes in the group
+ process_pre_group = atoi(optarg);
+ if (process_pre_group > MAX_PROC_PER_GRP || process_pre_group <= 0) {
+ printf("process num invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 100,
+ };
+
+ return test(&alloc_info);
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "After process A allocates memory, every process in the group calls free concurrently. Only one of them is expected to succeed.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare5.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare5.c
new file mode 100644
index 000000000000..dd8e704e390c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare5.c
@@ -0,0 +1,185 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Thu Dec 03 11:27:42 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * After memory is shared, free the memory first and then stop sharing; stopping sharing is expected to fail.
+ * testcase1: free the uva obtained from an in-kernel k2u call with sp_free, then unshare from the kernel.
+ * testcase2: free the kva obtained from a process u2k call with vfree, then unshare from user space, then release it with free.
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ struct sp_alloc_info uva_info = {
+ .addr = k2u_info.addr,
+ .size = 10000,
+ };
+
+ ret = ioctl_free(dev_fd, &uva_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed as expected, errno: %d", errno);
+ ret = 0;
+ } else {
+ pr_info("ioctl_free result unexpected, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ } else {
+ pr_info("ioctl_unshare k2u succeeded.");
+ }
+
+ ret = ioctl_vfree(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vfree failed, errno: %d", errno);
+ } else {
+ pr_info("ioctl_vfree succeeded.");
+ }
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct vmalloc_info kva_info = {
+ .addr = u2k_info.addr,
+ .size = 12345,
+ };
+
+ if (ioctl_vfree(dev_fd, &kva_info) < 0) {
+ pr_info("vfree u2k kernel vm area failed.");
+ ret = -1;
+ } else {
+		pr_info("vfree u2k kernel vm area succeeded.");
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed as expected, errno: %d", errno);
+ } else {
+ pr_info("ioctl_unshare result unexpected, ret: %d, errno: %d", ret, errno);
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free u2k vma failed, errno: %d", errno);
+ } else {
+ pr_info("free u2k vma succeeded");
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "free the uva obtained from an in-kernel k2u call with sp_free, then unshare from the kernel")
+	TESTCASE_CHILD(testcase2, "free the kva obtained from a process u2k call with vfree, then unshare from user space, then release it with free")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare6.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare6.c
new file mode 100644
index 000000000000..389402e9ebc0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare6.c
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Dec 04 17:20:10 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: stop sharing a uva obtained from sp_alloc in user space with sp_unshare; expected to fail.
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .addr = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed as expected, errno: %d", errno);
+ } else {
+ pr_info("ioctl_unshare result unexpected, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "stop sharing a uva obtained from sp_alloc in user space with sp_unshare; expected to fail")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare7.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare7.c
new file mode 100644
index 000000000000..b85171c50995
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare7.c
@@ -0,0 +1,159 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 18:27:26 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/* testcase1: verify whether a process that has not joined the group can k2spg */
+
+#define GROUP_ID 1
+
+static int testcase1(void)
+{
+ int ret;
+
+	/* sync preparation; sem_open's third argument is a permission mode, not an open flag */
+	sem_t *sem_addgroup, *sem_k2u;
+	sem_addgroup = sem_open("/child_process_add_group_finish", O_CREAT, 0644, 0);
+	sem_k2u = sem_open("/child_process_k2u_finish", O_CREAT, 0644, 0);
+
+	/* k2u preparation */
+ struct vmalloc_info vmalloc_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2spg_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ /* fork child */
+ pid_t child = fork();
+ if (child == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("child process add group %d failed, errno: %d", GROUP_ID, errno);
+ } else {
+ pr_info("child process %d add group %d success", getpid(), GROUP_ID);
+ }
+ sem_post(sem_addgroup);
+		/* wait for the parent to share the kernel address to this child */
+ sem_wait(sem_k2u);
+ /* check kernel shared address */
+
+ exit(0);
+ }
+
+ sem_wait(sem_addgroup);
+ /* k2u to child */
+ k2spg_info.pid = child;
+ ret = ioctl_k2u(dev_fd, &k2spg_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed as expected, errno: %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ ret = 0;
+ goto end;
+ } else {
+ pr_info("parent process k2u success unexpected.");
+ ret = -1;
+ }
+
+ /* fork a new process and add into group to check k2u address*/
+ pid_t proc_check = fork();
+ if (proc_check == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("child process %d add group %d failed, errno: %d", getpid(), GROUP_ID, errno);
+ } else {
+ pr_info("child process %d add group %d success", getpid(), GROUP_ID);
+ }
+ char *addr = (char *)k2spg_info.addr;
+ if (addr[0] != 'b') {
+ pr_info("addr value is not b! k2spg failed.");
+ } else {
+ pr_info("addr value is b! k2spg success.");
+ }
+ exit(0);
+ }
+
+ int status;
+ waitpid(proc_check, &status, 0);
+ pr_info("k2spg check process exited.");
+
+end:
+ sem_post(sem_k2u);
+
+ waitpid(child, &status, 0);
+ pr_info("child process %d exited.", child);
+
+	/* the vm_area has SP_FLAG set, so vfree should trigger a warning */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ pr_info("ioctl vfree success.");
+
+	/* the parent tries to create the same group id and unshare; expected to succeed */
+ // ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ // if (ret < 0) {
+ // pr_info("add group failed, errno: %d", errno);
+ // } else {
+ // pr_info("parent add group %d success", GROUP_ID);
+ // }
+
+ sem_unlink("/child_process_add_group_finish");
+ sem_unlink("/child_process_k2u_finish");
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "verify whether a process that has not joined the group can k2spg")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare_kill.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare_kill.c
new file mode 100644
index 000000000000..94c2401a7376
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare_kill.c
@@ -0,0 +1,150 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 18:27:26 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define PROC_NUM 200
+#define PROT (PROT_WRITE | PROT_READ)
+#define REPEAT 20
+#define THREAD_NUM 50
+/*
+ * After memory is shared, free the memory first and then stop sharing.
+ * testcase1: after vmalloc and k2task, vfree the kva directly (expected to succeed), then unshare the uva (expected to succeed).
+ */
+struct sp_make_share_info k2u_info;
+int sem;
+
+static void *tc1_thread(void *arg)
+{
+ int ret;
+	int gid = (int)(long)arg;
+ struct sp_make_share_info infos[REPEAT];
+
+ pr_info("gid is %d\n", gid);
+ for (int i = 0; i < REPEAT; i++) {
+ infos[i].spg_id = SPG_ID_DEFAULT;
+ infos[i].pid = getpid();
+ infos[i].kva = k2u_info.kva;
+ infos[i].size = k2u_info.size;
+ infos[i].sp_flags = 0;
+ ret = ioctl_k2u(dev_fd, &infos[i]);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, child %d errno: %d", gid, errno);
+ sem_inc_by_one(sem);
+			return (void *)(long)ret;
+ }
+ }
+
+ sem_inc_by_one(sem);
+ sem_check_zero(sem);
+
+ for (int i = 0; i < REPEAT; i++) {
+ ret = ioctl_unshare(dev_fd, &infos[i]);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, child %d errno: %d", getpid(), errno);
+			return (void *)(long)ret;
+ }
+ }
+
+	return NULL;
+}
+
+static int tc1_child(int gid)
+{
+ int ret;
+ pthread_t threads[THREAD_NUM];
+ void *tret;
+
+ ret = wrap_add_group(getpid(), PROT, gid);
+ if (ret < 0) {
+ pr_info("add group failed child %d", gid);
+ sem_inc_by_val(sem, THREAD_NUM);
+ return ret;
+ }
+
+ for (int i = 0; i < THREAD_NUM; i++)
+		pthread_create(threads + i, NULL, tc1_thread, (void *)(long)gid);
+
+ for (int i = 0; i < THREAD_NUM; i++)
+ pthread_join(threads[i], &tret);
+
+ pr_info("child %d finish all work", gid);
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ pid_t child[PROC_NUM];
+ struct vmalloc_info vmalloc_info = {
+ .size = 4096,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ sem = sem_create(1234, "sem");
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ FORK_CHILD_ARGS(child[i], tc1_child(i + 1));
+ }
+
+	pr_info("\nwaiting until all children are ready to be killed...\n");
+ sem_dec_by_val(sem, PROC_NUM * THREAD_NUM);
+ pr_info("\nstarts to kill child...\n");
+ for (int i = 0; i < PROC_NUM; i++)
+ kill(child[i], SIGKILL);
+ for (int i = 0; i < PROC_NUM; i++)
+ waitpid(child[i], NULL, 0);
+
+	pr_info("finished killing children...\n");
+	ret = ioctl_vfree(dev_fd, &vmalloc_info); /* expected to fail */
+ if (ret < 0) {
+ pr_info("ioctl vfree failed for the first time, errno: %d", errno);
+ }
+
+out:
+ sem_close(sem);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "after vmalloc and k2task, vfree the kva directly (expected to succeed), then unshare the uva (expected to succeed)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/remove_list.sh b/tools/testing/sharepool/testcase/remove_list.sh
new file mode 100755
index 000000000000..03287c09ea6e
--- /dev/null
+++ b/tools/testing/sharepool/testcase/remove_list.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+rm_list()
+{
+ curdir=$1
+ echo $curdir
+
+ cd $curdir
+ rm -rf tc_list
+
+ subdirs=`ls -d */`
+
+ for dir in $subdirs
+ do
+ rm_list $dir
+ done
+
+ cd ..
+ echo "back to `pwd`"
+}
+
+rm_list `pwd`
+
diff --git a/tools/testing/sharepool/testcase/scenario_test/Makefile b/tools/testing/sharepool/testcase/scenario_test/Makefile
new file mode 100644
index 000000000000..826d5ba6b255
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/Makefile
@@ -0,0 +1,15 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ mkdir -p $(TOOL_BIN_DIR)/scenario_test
+ cp $(testcases) $(TOOL_BIN_DIR)/scenario_test
+ cp test_hugepage_setting.sh $(TOOL_BIN_DIR)/
+ cp scenario_test.sh $(TOOL_BIN_DIR)/
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/scenario_test/scenario_test.sh b/tools/testing/sharepool/testcase/scenario_test/scenario_test.sh
new file mode 100755
index 000000000000..f344ae0dda17
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/scenario_test.sh
@@ -0,0 +1,45 @@
+#!/bin/bash
+
+set -x
+
+echo 'test_dfx_heavy_load
+ test_dvpp_16g_limit
+ test_max_50000_groups
+ test_proc_sp_group_state
+ test_oom ' | while read line
+do
+ let flag=0
+ ./scenario_test/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase scenario_test/$line failed"
+ let flag=1
+ fi
+
+ sleep 3
+
+	# print spa_stat
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+ let flag=1
+ fi
+	# print proc_stat
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+ let flag=1
+ fi
+
+ cat /proc/sharepool/proc_overview
+	# exit if a leak is detected
+ if [ $flag -eq 1 ] ;then
+ exit 1
+ fi
+ echo "testcase scenario_test/$line success"
+
+ cat /proc/meminfo
+ free -m
+
+done
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_auto_check_statistics.c b/tools/testing/sharepool/testcase/scenario_test/test_auto_check_statistics.c
new file mode 100644
index 000000000000..904a5bbee2ad
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_auto_check_statistics.c
@@ -0,0 +1,338 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <errno.h>
+#include <fcntl.h> /* For O_* constants */
+#include <semaphore.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/stat.h> /* For mode constants */
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <unistd.h>
+#include <stdbool.h>	/* for the bool flag_out below */
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int semid;
+
+#define PROC_NUM 1023
+#define GROUP_NUM 2999
+#define GROUP_ID 1
+
+#define ROW_MAX 100
+#define COL_MAX 300
+
+#define byte2kb(x) ((x) / 1024)
+#define byte2mb(x) ((x) / 1024UL / 1024UL)
+
+#define SPA_STAT "/proc/sharepool/spa_stat"
+#define PROC_STAT "/proc/sharepool/proc_stat"
+#define PROC_OVERVIEW "/proc/sharepool/proc_overview"
+
+static void __reset(char **array, int end)
+{
+ for (int i = 0; i < end; i++)
+ memset(array[i], 0, COL_MAX);
+}
+
+static int test_route(unsigned long flag, unsigned long size, int spg_id)
+{
+ int ret = 0;
+ pid_t pid;
+ unsigned long addr;
+ char pidstr[SIZE];
+ char pidattr[SIZE];
+ int row_num, column_num;
+ char **result;
+ char **exp;
+ int row_real;
+ bool flag_out = false;
+ unsigned long size_dvpp = 0;
+
+	sprintf(pidstr, "%d", getpid());
+	/* snprintf instead of strcat: pidattr is never initialized */
+	snprintf(pidattr, sizeof(pidattr),
+		 "/proc/%s/sp_group", pidstr);
+
+ if (flag & SP_DVPP)
+ size_dvpp = PMD_SIZE;
+
+	// join the group
+ if (spg_id == SPG_ID_DEFAULT)
+ goto alloc;
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("add group failed!");
+ return ret;
+ }
+
+	// start a tool process and add it to the group; its statistics must also be correct
+ pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("tool proc add group failed!");
+ exit(-1);
+ }
+ while (1) {
+
+ }
+ }
+
+alloc:
+	// sp_alloc with flag = huge/normal page, dvpp/non-dvpp, and the given size
+ addr = (unsigned long)wrap_sp_alloc(spg_id, size, flag);
+ if (addr == -1) {
+ pr_info("alloc memory failed, size %lx, flag %lx", size, flag);
+ ret = -1;
+ goto kill_child;
+ }
+
+ sleep(5);
+
+	// check spa_stat
+ result = (char **)calloc(ROW_MAX, sizeof(char *));
+ exp = (char **)calloc(ROW_MAX, sizeof(char *));
+ for (int i = 0; i < ROW_MAX; i++) {
+ result[i] = (char *)calloc(COL_MAX, sizeof(char));
+ exp[i] = (char *)calloc(COL_MAX, sizeof(char));
+ }
+
+ get_attr(SPA_STAT, result, ROW_MAX, COL_MAX, &row_real);
+ for (int i = 0; i < row_real; i++) {
+ printf("%s", result[i]);
+ }
+ pr_info("\nrow_real is %d", row_real);
+
+	sprintf(exp[0], "Share pool total size: %lu KB, spa total num: %d.\n",
+		byte2kb(size), 1);
+ sprintf(exp[1],
+ "Group %6d size: %lld KB, spa num: %d, total alloc: %lld KB, normal alloc: %lld KB, huge alloc: %lld KB\n",
+ spg_id, byte2kb(size), 1, byte2kb(size),
+ flag & SP_HUGEPAGE ? 0 : byte2kb(size),
+ flag & SP_HUGEPAGE ? byte2kb(size) : 0);
+ sprintf(exp[2], "%s", "\n");
+ sprintf(exp[3], "Spa total num %u.\n", 1);
+ sprintf(exp[4], "Spa alloc num %u, k2u(task) num %u, k2u(spg) num %u.\n",
+ 1, 0, 0);
+ sprintf(exp[5], "Spa total size: %13lu KB\n", byte2kb(size));
+ sprintf(exp[6], "Spa alloc size: %13lu KB\n", byte2kb(size));
+	sprintf(exp[7], "Spa k2u(task) size: %13lu KB\n", 0UL);
+	sprintf(exp[8], "Spa k2u(spg) size: %13lu KB\n", 0UL);
+ sprintf(exp[9], "Spa dvpp size: %13lu KB\n",
+ flag & SP_DVPP ? byte2kb(size) : 0);
+ sprintf(exp[10], "Spa dvpp va size: %13lu MB\n", byte2mb(size_dvpp));
+
+ sprintf(exp[11], "%s", "\n");
+ sprintf(exp[12], "%-10s %-16s %-16s %-10s %-7s %-5s %-8s %-8s\n",
+ "Group ID", "va_start", "va_end", "Size(KB)", "Type", "Huge", "PID", "Ref");
+ sprintf(exp[13], "%-10d %2s%-14lx %2s%-14lx %-10ld %-7s %-5s %-8d %-8d\n",
+ spg_id,
+ "0x", flag & SP_DVPP ? 0xf00000000000 : 0xe80000000000,
+ "0x", flag & SP_DVPP ? 0xf00000200000 : 0xe80000200000,
+ byte2kb(size), "ALLOC", flag & SP_HUGEPAGE ? "Y" : "N", getpid(), 3);
+ for (int i = 0; i < row_real; i++) {
+ if (strcmp(result[i], exp[i]) != 0) {
+ pr_info("a not same with b,\na: %sb: %s", result[i], exp[i]);
+ flag_out = true;
+ }
+ }
+
+	// check proc_stat
+ __reset(result, row_real);
+ __reset(exp, row_real);
+ get_attr(PROC_STAT, result, ROW_MAX, COL_MAX, &row_real);
+ for (int i = 0; i < row_real; i++) {
+ printf("%s", result[i]);
+ }
+ pr_info("\nrow_real is %d\n", row_real);
+
+	sprintf(exp[0], "Share pool total size: %lu KB, spa total num: %d.\n",
+		byte2kb(size), 1);
+ sprintf(exp[1],
+ "Group %6d size: %lld KB, spa num: %d, total alloc: %lld KB, normal alloc: %lld KB, huge alloc: %lld KB\n",
+ spg_id, byte2kb(size), 1, byte2kb(size),
+ flag & SP_HUGEPAGE ? 0 : byte2kb(size),
+ flag & SP_HUGEPAGE ? byte2kb(size) : 0);
+ sprintf(exp[2], "%s", "\n");
+ sprintf(exp[3], "Spa total num %u.\n", 1);
+ sprintf(exp[4], "Spa alloc num %u, k2u(task) num %u, k2u(spg) num %u.\n",
+ 1, 0, 0);
+ sprintf(exp[5], "Spa total size: %13lu KB\n", byte2kb(size));
+ sprintf(exp[6], "Spa alloc size: %13lu KB\n", byte2kb(size));
+	sprintf(exp[7], "Spa k2u(task) size: %13lu KB\n", 0UL);
+	sprintf(exp[8], "Spa k2u(spg) size: %13lu KB\n", 0UL);
+ sprintf(exp[9], "Spa dvpp size: %13lu KB\n",
+ flag & SP_DVPP ? byte2kb(size) : 0);
+ sprintf(exp[10], "Spa dvpp va size: %13lu MB\n", byte2mb(size_dvpp));
+ sprintf(exp[11], "%s", "\n");
+ sprintf(exp[12], "%-8s %-8s %-9s %-9s %-9s %-8s %-7s %-7s %-4s\n",
+ "PID", "Group_ID", "SP_ALLOC", "SP_K2U", "SP_RES", "VIRT", "RES",
+ "Shm", "PROT");
+	sprintf(exp[13], "%-8s %-8s %-9lld %-9lld\n", "guard", "-", 0LL, 0LL);
+ sprintf(exp[14], "%-8d %-8d %-9ld %-9ld %-9ld %-8ld %-7ld %-7ld %-4s\n",
+ getpid(), spg_id, byte2kb(size), 0, byte2kb(size), 0, 0, 0, "RW");
+ sprintf(exp[15], "%-8d %-8d %-9ld %-9ld %-9ld %-8ld %-7ld %-7ld %-4s\n",
+ pid, spg_id, 0, 0, byte2kb(size), 0, 0, 0, "RW");
+ sprintf(exp[16], "%-8d %-8d %-9ld %-9ld %-9ld %-8ld %-7ld %-7ld %-4s \n",
+ getpid(), 200001, 0, 0, 0, 0, 0, 0, "RW");
+ sprintf(exp[17], "%-8d %-8d %-9ld %-9ld %-9ld %-8ld %-7ld %-7ld %-4s \n",
+ pid, 200002, 0, 0, 0, 0, 0, 0, "RW");
+
+ for (int i = 0; i < row_real; i++) {
+ if (i < 14)
+ ret = strcmp(result[i], exp[i]);
+ else
+ ret = strncmp(result[i], exp[i], 43);
+
+ if (ret == 0)
+ continue;
+
+ pr_info("a not same with b,\na: %sb: %s", result[i], exp[i]);
+ flag_out = true;
+ }
+
+	// check proc_overview
+ __reset(result, row_real);
+ __reset(exp, row_real);
+ get_attr(PROC_OVERVIEW, result, ROW_MAX, COL_MAX, &row_real);
+ for (int i = 0; i < row_real; i++) {
+ printf("%s", result[i]);
+ }
+ pr_info("\nrow_real is %d\n", row_real);
+ sprintf(exp[0], "%-8s %-16s %-9s %-9s %-9s %-10s %-10s %-8s\n",
+ "PID", "COMM", "SP_ALLOC", "SP_K2U", "SP_RES", "Non-SP_RES",
+ "Non-SP_Shm", "VIRT");
+ sprintf(exp[1], "%-8d %-16s %-9ld %-9ld %-9ld %-10ld %-10ld %-8ld\n",
+ getpid(), "test_auto_check",
+ byte2kb(size), 0, byte2kb(size), 0, 0, 0);
+ sprintf(exp[2], "%-8d %-16s %-9ld %-9ld %-9ld %-10ld %-10ld %-8ld\n",
+ pid, "test_auto_check", 0, 0, byte2kb(size), 0, 0, 0);
+ for (int i = 0; i < row_real; i++) {
+ if (i < 1)
+ ret = strcmp(result[i], exp[i]);
+ else
+ ret = strncmp(result[i], exp[i], 51);
+
+ if (ret == 0)
+ continue;
+
+ pr_info("a not same with b,\na: %sb: %s", result[i], exp[i]);
+ flag_out = true;
+ }
+
+	// check sp_group
+ __reset(result, row_real);
+ __reset(exp, row_real);
+ get_attr(pidattr, result, ROW_MAX, COL_MAX, &row_real);
+ for (int i = 0; i < row_real; i++) {
+ printf("%s", result[i]);
+ }
+ pr_info("\nrow_real is %d\n", row_real);
+ sprintf(exp[0], "Share Pool Aggregate Data of This Process\n");
+ sprintf(exp[1], "\n");
+ sprintf(exp[2], "%-8s %-16s %-9s %-9s %-9s %-10s %-10s %-8s\n",
+ "PID", "COMM", "SP_ALLOC", "SP_K2U", "SP_RES", "Non-SP_RES",
+ "Non-SP_Shm", "VIRT");
+ sprintf(exp[3], "%-8d %-16s %-9ld %-9ld %-9ld %-10ld %-10ld %-8ld\n",
+ getpid(), "test_auto_check", byte2kb(size), 0, byte2kb(size),
+ 0, 0, 0);
+ sprintf(exp[4], "\n");
+ sprintf(exp[5], "\n");
+ sprintf(exp[6], "Process in Each SP Group\n");
+ sprintf(exp[7], "\n");
+ sprintf(exp[8], "%-8s %-9s %-9s %-9s %-4s\n",
+ "Group_ID", "SP_ALLOC", "SP_K2U", "SP_RES", "PROT");
+ sprintf(exp[9], "%-8d %-9ld %-9ld %-9ld %s\n",
+ 200001, 0, 0, 0, "RW");
+ sprintf(exp[10], "%-8d %-9ld %-9ld %-9ld %s\n",
+ spg_id, byte2kb(size), 0, byte2kb(size), "RW");
+
+ for (int i = 0; i < row_real; i++) {
+ if (i != 3)
+ ret = strcmp(result[i], exp[i]);
+ else
+ ret = strncmp(result[i], exp[i], 51);
+
+ if (ret == 0)
+ continue;
+
+ pr_info("a not same with b,\na: %sb: %s", result[i], exp[i]);
+ flag_out = true;
+ }
+
+	// free the allocation
+ struct sp_alloc_info info = {
+ .addr = addr,
+ .spg_id = spg_id,
+ };
+ ret = ioctl_free(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("free memory failed, size %lx, flag %lx", size, flag);
+ goto out;
+ }
+ pr_info("\n\nfree memory finished\n\n");
+
+out:
+	// free the string arrays
+ for (int i = 0; i < ROW_MAX; i++) {
+ free(result[i]);
+ free(exp[i]);
+ }
+ free(result);
+ free(exp);
+
+kill_child:
+	// reap the tool process
+ if (spg_id != SPG_ID_DEFAULT)
+ KILL_CHILD(pid);
+ if (flag_out)
+ return -1;
+ return ret;
+}
+
+static int testcase1(void) { return test_route(0, 4096, 1); }
+static int testcase2(void) { return test_route(SP_HUGEPAGE, 2 * 1024UL * 1024UL, 1); }
+static int testcase3(void) { return test_route(SP_DVPP, 4096, 1); }
+static int testcase4(void) { return test_route(SP_HUGEPAGE | SP_DVPP, 2 * 1024UL * 1024UL, 1); }
+
+static int testcase5(void) {
+ if (wrap_add_group(getpid(), PROT_READ, 1) < 0)
+ return -1;
+
+ sharepool_print();
+ sleep(1);
+
+ if (wrap_del_from_group(getpid(), 1) < 0)
+ return -1;
+
+ sharepool_print();
+ sleep(1);
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "allocate one normal page in a share group")
+	TESTCASE_CHILD(testcase2, "allocate one huge page in a share group")
+	TESTCASE_CHILD(testcase3, "allocate one dvpp normal page in a share group")
+	TESTCASE_CHILD(testcase4, "allocate one dvpp huge page in a share group")
+	TESTCASE_CHILD(testcase5, "a process joins a share group, then actively leaves it")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_dfx_heavy_load.c b/tools/testing/sharepool/testcase/scenario_test/test_dfx_heavy_load.c
new file mode 100644
index 000000000000..16ec53d046bb
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_dfx_heavy_load.c
@@ -0,0 +1,143 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define MAX_GROUP 49999
+#define SPG_ID_AUTO_MIN 100000
+#define PROC_NUM 20
+
+int sem_id;
+int msg_id;
+struct msgbuf {
+ long mtype;
+ int group_num;
+};
+
+int group_count;
+
+static void send_msg(int msgid, int msgtype, int group_num)
+{
+ struct msgbuf msg = {
+ .mtype = msgtype,
+ .group_num = group_num,
+ };
+
+	if (msgsnd(msgid, (void *) &msg, sizeof(msg.group_num),
+ IPC_NOWAIT) == -1) {
+ perror("msgsnd error");
+ exit(EXIT_FAILURE);
+ } else {
+ pr_info("child %d message sent success: group_num: %d",
+ msgtype - 1, group_num);
+ }
+}
+
+static void get_msg(int msgid, int msgtype)
+{
+ struct msgbuf msg;
+ if (msgrcv(msgid, (void *) &msg, sizeof(msg.group_num), msgtype,
+ MSG_NOERROR) == -1) {
+ if (errno != ENOMSG) {
+ perror("msgrcv");
+ exit(EXIT_FAILURE);
+ }
+ pr_info("No message available for msgrcv()");
+ } else {
+ pr_info("child %d message received success: group_num: %d",
+ msgtype - 1, msg.group_num);
+ group_count += msg.group_num;
+ }
+}
+
+/* child processes create groups until failure */
+static int test1(void)
+{
+ int ret = 0;
+ int count = 0;
+ int spg_id = 0;
+
+ while (1) {
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ break;
+ wrap_sp_alloc(spg_id, 4096, 0);
+ count++;
+ }
+
+ pr_info("proc %d add %d groups", getpid(), count);
+ send_msg(msg_id, getpid(), count);
+ sem_inc_by_one(sem_id);
+ while (1) {
+
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int count = 0;
+ int status = 0;
+ char cpid[SIZE];
+ pid_t child[PROC_NUM];
+
+ sem_id = sem_create(1234, "semid");
+ int msgkey = 2345;
+ msg_id = msgget(msgkey, IPC_CREAT | 0666);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ FORK_CHILD_ARGS(child[i], test1());
+
+ for (int i = 0; i < PROC_NUM; i++)
+ get_msg(msg_id, (int)child[i]);
+
+ sem_dec_by_val(sem_id, PROC_NUM);
+ pr_info("\n%d Groups are created.\n", group_count);
+ sleep(2);
+
+ pr_info("Gonna cat /proc/sharepool/proc_stat...\n");
+ ret = cat_attr("/proc/sharepool/proc_stat");
+
+ msgctl(msg_id, IPC_RMID, 0);
+ sem_close(sem_id);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(child[i]);
+
+ return ret;
+}
+
+/*
+ * testcase1: create share groups with SPG_ID_AUTO; at most 49999 are expected.
+ */
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "after creating 49999 share groups, cat /proc/sharepool/proc_stat")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_dvpp_16g_limit.c b/tools/testing/sharepool/testcase/scenario_test/test_dvpp_16g_limit.c
new file mode 100644
index 000000000000..9b61c86901b4
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_dvpp_16g_limit.c
@@ -0,0 +1,68 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define MEM_1G_SIZE (1024UL * 1024UL * 1024UL)
+#define MAX_DVPP_16G 16
+
+static int test_route(int spg_id)
+{
+ int ret = 0;
+ unsigned long addr;
+ int count = 0;
+
+ if (spg_id != SPG_ID_DEFAULT) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0)
+ return -1;
+ }
+
+ while (1) {
+ addr = (unsigned long)wrap_sp_alloc(spg_id, MEM_1G_SIZE,
+ SP_HUGEPAGE | SP_DVPP);
+ if (addr == -1)
+ break;
+ pr_info("alloc %dG success", ++count);
+ }
+
+ if (count != MAX_DVPP_16G) {
+ pr_info("count is %d unexpected", count);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int testcase1(void) { return test_route(1); }
+static int testcase2(void) { return test_route(SPG_ID_DEFAULT); }
+
+/* testcase1: allocate 1G DVPP huge pages in a specified group until failure; at most 16G expected.
+ * testcase2: allocate 1G DVPP huge pages in the default group until failure; at most 16G expected.
+ */
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "allocate DVPP memory in a specified group; at most 16G expected")
+	TESTCASE_CHILD(testcase2, "allocate DVPP memory in the default group; at most 16G expected")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_failure.c b/tools/testing/sharepool/testcase/scenario_test/test_failure.c
new file mode 100644
index 000000000000..7e0e7919ac30
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_failure.c
@@ -0,0 +1,630 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Flow: exercise share pool failure paths with invalid parameters.
+ */
+
+#define GROUP_ID 1
+#define PROC_NUM 20
+#define PROT (PROT_READ | PROT_WRITE)
+#define HP_SIZE (2 * 1024 * 1024UL)
+
+static int testcase1(void)
+{
+ int i;
+ int ret = 0;
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ sleep(1);
+
+ struct sp_del_from_group_info info = {
+ .pid = pid,
+ .spg_id = -3,
+ };
+
+ ret = ioctl_del_from_group(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("ioctl_del_group failed, errno: %d", errno);
+ }
+
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int i;
+ int ret = 0;
+ unsigned long ret_addr;
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ ret_addr = (unsigned long)wrap_sp_alloc(GROUP_ID, 0, 0);
+ if (ret_addr == -1)
+ pr_info("alloc failed!");
+ else {
+ pr_info("alloc success!");
+ }
+
+ return ret;
+}
+
+static int testcase3(void)
+{
+ int i;
+ int ret = 0;
+ unsigned long ret_addr;
+ pid_t pid = getpid();
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+
+ ret_addr = (unsigned long)ioctl_alloc(dev_fd, &alloc_info);
+ if (ret_addr == -1)
+ pr_info("alloc failed!");
+ else {
+ pr_info("alloc success!");
+ }
+
+ return ret;
+}
+
+static int testcase4(void)
+{
+ int i;
+ int ret;
+ pid_t pid = getpid();
+
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = 0,
+ .size = 1000,
+ };
+ ret = ioctl_walk_page_range_null(dev_fd, &wpr_info);
+ if (!ret) {
+		pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+ return ret;
+}
+
+static int testcase5(void)
+{
+ int i;
+ int ret;
+ pid_t pid = getpid();
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ,
+ .spg_id = GROUP_ID,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ cat_attr("/proc/sharepool/proc_stat");
+ return ret;
+}
+
+static int prepare(struct vmalloc_info *ka_info, bool ishuge)
+{
+ int ret;
+ if (ishuge) {
+ ret = ioctl_vmalloc_hugepage(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc_hugepage failed, errno: %d", errno);
+ return -1;
+ }
+ } else {
+ ret = ioctl_vmalloc(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = ka_info->addr,
+ .size = ka_info->size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, ka_info);
+ }
+ return ret;
+}
+
+static int testcase6(void)
+{
+ int i;
+ int ret;
+ pid_t pid;
+ pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ pid_t pid = getpid();
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 2,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+		while (1)	/* keep the child alive */
+ sleep(1);
+ }
+
+ /*try to k2u*/
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ /*add group to group 1*/
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ /*try to share with group 2*/
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = 2,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("k2u failed as expected, errno %d", errno);
+		ioctl_vfree(dev_fd, &ka_info);
+	}
+	KILL_CHILD(pid);
+	return ret < 0 ? 0 : -1;
+}
+
+static int testcase7(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = 1,
+ .addr = -PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+		pr_info("k2u failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ k2u_info.size = 1;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+static int testcase8(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+		pr_info("k2u failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+	k2u_info.kva = 281474976710656UL * 2;	/* 2^49: beyond any valid kva */
+	k2u_info.addr = 281474976710656UL * 2;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+static int testcase9(void)
+{
+ int ret;
+ int id = 1;
+
+ /* see sharepool_dev.c for more details*/
+ struct sp_notifier_block_info notifier_info = {
+ .i = id,
+ };
+ ret = ioctl_register_notifier_block(dev_fd, ¬ifier_info);
+ if (ret != 0)
+ pr_info("proc %d register notifier block %d failed. ret is %d. errno is %s.",
+ getpid(), id, ret, strerror(errno));
+ else
+ pr_info("proc %d register notifier for func %d success!!", getpid(), id);
+
+ /* see sharepool_dev.c for more details*/
+ ret = ioctl_unregister_notifier_block(dev_fd, ¬ifier_info);
+ if (ret != 0)
+ pr_info("proc %d unregister notifier block %d failed. ret is %d. errno is %s.",
+ getpid(), id, ret, strerror(errno));
+ else
+ pr_info("proc %d unregister notifier for func %d success!!", getpid(), id);
+
+ return ret;
+}
+
+static int testcase10(void)
+{
+	int ret;
+	void *addr;
+	struct sp_add_group_info ag_info = {
+		.spg_id = GROUP_ID,
+		.prot = PROT_READ | PROT_WRITE,
+		.pid = getpid(),
+	};
+	ret = ioctl_add_group(dev_fd, &ag_info);
+
+	if (ret < 0) {
+		pr_info("ioctl_add_group failed, errno: %d", errno);
+	}
+
+	// allocate repeatedly until OOM is triggered
+	for (int i = 0; i < 1000; i++) {
+		addr = wrap_sp_alloc(GROUP_ID, 100 * PMD_SIZE, SP_HUGEPAGE_ONLY);
+		if (addr == (void *)-1) {
+			pr_info("alloc hugepage failed.");
+			return -1;
+		}
+	}
+	return ret;
+}
+
+static int testcase11(void)
+{
+ int ret;
+ int group_id = 100;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = 12345,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = alloc_info.addr,
+ .size = -PAGE_SIZE,
+ };
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (!ret) {
+		pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int testcase12(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+		pr_info("k2u failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+	k2u_info.kva = 281474976710656UL - 100;	/* just below 2^48: invalid uva */
+	k2u_info.addr = 281474976710656UL - 100;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+static int testcase13(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+		pr_info("k2u failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ struct sp_make_share_info k2u_info2 = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info2);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ // if (ret < 0) {
+ // pr_info("unshare failed, errno %d", errno);
+ // ioctl_vfree(dev_fd, &ka_info2);
+ // }
+ exit(0);
+ }
+
+ int status;
+ wait(&status);
+
+ return 0;
+}
+
+static int testcase14(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+		pr_info("k2u failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sp_add_group_info ag_info = {
+ .spg_id = 2,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ // if (ret < 0) {
+ // pr_info("unshare failed, errno %d", errno);
+ // ioctl_vfree(dev_fd, &ka_info2);
+ // }
+ exit(0);
+ }
+	return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "pass an invalid group id")
+	TESTCASE_CHILD(testcase2, "pass size 0 to sp_alloc")
+	TESTCASE_CHILD(testcase3, "sp_alloc while the process is not in a group")
+	TESTCASE_CHILD(testcase4, "call walk_page_range with sp_walk_data == NULL")
+	TESTCASE_CHILD(testcase5, "cat a group whose prot is READ only")
+	TESTCASE_CHILD(testcase6, "k2u to task while the current process is not in the group")
+	TESTCASE_CHILD(testcase7, "unshare kva where kva + size overflows")
+	TESTCASE_CHILD(testcase8, "unshare kva that is far beyond the valid range")
+	TESTCASE_CHILD(testcase9, "test register and unregister notifier")
+	TESTCASE_CHILD(testcase10, "alloc huge pages until OOM")
+	TESTCASE_CHILD(testcase11, "sp_walk_page_range uva_aligned + size_aligned overflow")
+	TESTCASE_CHILD(testcase12, "sp_unshare_uva with an invalid uva")
+	TESTCASE_CHILD(testcase13, "sp_unshare_uva unshare uva (to task) without permission")
+	TESTCASE_CHILD(testcase14, "sp_unshare_uva while not in the group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_hugepage.c b/tools/testing/sharepool/testcase/scenario_test/test_hugepage.c
new file mode 100644
index 000000000000..f7797a03aaeb
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_hugepage.c
@@ -0,0 +1,231 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Flow: alloc and free share pool huge pages, checking the hugepage counters.
+ */
+
+#define PROT (PROT_READ | PROT_WRITE)
+#define ALLOC_NUM_MAX 100
+#define GROUP_ID 1
+
+static int alloc_num = 50;
+static int nr_hugepages = 0;
+static int nr_overcommit = 0;
+static int cgroup_on = 0;
+
+char *prefix = "/sys/kernel/mm/hugepages/hugepages-2048kB/";
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "n:o:a:c:")) != -1) {
+ switch (opt) {
+		case 'n': // number of static hugepages configured in the system
+ nr_hugepages = atoi(optarg);
+ printf("nr_hugepages = %d\n", nr_hugepages);
+ break;
+		case 'o': // number of overcommit hugepages configured in the system
+ nr_overcommit = atoi(optarg);
+ printf("nr_overcommit_hugepages = %d\n", nr_overcommit);
+ break;
+		case 'a': // number of share pool hugepages to allocate
+ alloc_num = atoi(optarg);
+ printf("want to alloc hugepages = %d\n", alloc_num);
+ break;
+		case 'c': // whether cgroup is enabled
+ cgroup_on = atoi(optarg);
+ printf("cgroup is %s\n", cgroup_on ? "on" : "off");
+ break;
+ default:
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+char *fields[] = {
+ "nr_hugepages",
+ "free_hugepages",
+ "resv_hugepages",
+ "surplus_hugepages"
+};
+
+static int check_val(char *field, int val)
+{
+ char path[SIZE];
+ char result[SIZE];
+ int real_val;
+
+ strcpy(path, prefix);
+ strcat(path, field);
+ read_attr(path, result, SIZE);
+ real_val = atoi(result);
+ pr_info("%s val is %d", path, real_val);
+ if (real_val != val) {
+ pr_info("Val %s incorrect. expected: %d, now: %d", field, val, real_val);
+ return -1;
+ }
+ return 0;
+}
+
+static int check_vals(int val[])
+{
+ for (int i = 0; i < ARRAY_SIZE(fields); i++)
+ if (check_val(fields[i], val[i]))
+ return -1;
+ return 0;
+}
+
+static void *addr[ALLOC_NUM_MAX];
+static int exp_vals[4];
+
+static int alloc_hugepages_check(void)
+{
+ void *ret;
+
+	// allocate alloc_num hugepages
+ for (int i = 0; i < alloc_num; i++) {
+ ret = wrap_sp_alloc(GROUP_ID, PMD_SIZE, SP_HUGEPAGE_ONLY);
+ if (ret == (void *)-1) {
+ pr_info("alloc hugepage failed.");
+ return -1;
+ }
+ addr[i] = ret;
+ }
+ pr_info("Alloc %d hugepages success!", alloc_num);
+
+	// check /proc/meminfo counters
+ mem_show();
+ if (nr_hugepages >= alloc_num) {
+ exp_vals[0] = nr_hugepages;
+ exp_vals[1] = nr_hugepages - alloc_num;
+ exp_vals[2] = 0;
+ exp_vals[3] = 0;
+ } else if (nr_hugepages + nr_overcommit >= alloc_num) {
+ exp_vals[0] = alloc_num;
+ exp_vals[1] = 0;
+ exp_vals[2] = 0;
+ exp_vals[3] = alloc_num - nr_hugepages;
+ } else {
+ exp_vals[0] = alloc_num;
+ exp_vals[1] = 0;
+ exp_vals[2] = 0;
+ exp_vals[3] = nr_overcommit;
+ }
+
+ if (check_vals(exp_vals)) {
+ pr_info("Check /proc/meminfo hugepages failed.");
+ return -1;
+ }
+ pr_info("Check /proc/meminfo hugepages success");
+ return 0;
+}
+
+static int free_hugepages_check(void)
+{
+	// free the allocated hugepages
+ for (int i = 0; i < alloc_num; i++) {
+ if (wrap_sp_free((unsigned long)addr[i])) {
+ pr_info("free failed");
+ return -1;
+ }
+ }
+	pr_info("Free hugepages success.");
+
+	// check /proc/meminfo counters
+ mem_show();
+ exp_vals[0] = nr_hugepages;
+ exp_vals[1] = nr_hugepages;
+ exp_vals[2] = 0;
+ exp_vals[3] = 0;
+ if (check_vals(exp_vals)) {
+ pr_info("Check /proc/meminfo hugepages failed.");
+ return -1;
+ }
+ pr_info("Check /proc/meminfo hugepages success");
+ return 0;
+}
+
+/* testcase1:
+ * alloc hugepages and check the counters; then free them and check again */
+static int testcase1(void) {
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0) {
+ pr_info("add group failed");
+ return -1;
+ }
+
+ return alloc_hugepages_check() || free_hugepages_check();
+}
+
+/* testcase2:
+ * a child allocates hugepages and checks the counters; the child then exits
+ * and the parent checks the counters again */
+static int testcase2_child(void)
+{
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0) {
+ pr_info("add group failed");
+ return -1;
+ }
+ return alloc_hugepages_check();
+}
+
+static int testcase2(void) {
+ int ret;
+ pid_t child;
+
+ FORK_CHILD_ARGS(child, testcase2_child());
+ WAIT_CHILD_STATUS(child, out);
+
+	// check /proc/meminfo counters
+ mem_show();
+ exp_vals[0] = nr_hugepages;
+ exp_vals[1] = nr_hugepages;
+ exp_vals[2] = 0;
+ exp_vals[3] = 0;
+ if (check_vals(exp_vals)) {
+ pr_info("Check /proc/meminfo hugepages failed.");
+ return -1;
+ }
+ pr_info("Check /proc/meminfo hugepages success");
+ return 0;
+
+out:
+ return -1;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "check that HugePages_xx counters stay correct while a process allocs and actively frees hugepages")
+	TESTCASE_CHILD(testcase2, "check that HugePages_xx counters stay correct while a process allocs hugepages, is killed, and they are freed passively")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_hugepage_setting.sh b/tools/testing/sharepool/testcase/scenario_test/test_hugepage_setting.sh
new file mode 100755
index 000000000000..917893247e18
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_hugepage_setting.sh
@@ -0,0 +1,51 @@
+#!/bin/sh
+set -x
+
+nr_hpages=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+nr_overcmt=/proc/sys/vm/nr_overcommit_hugepages
+meminfo=/proc/meminfo
+
+# Test point 1:
+# sharepool temporary hugepages show up in HugePages_Total and HugePages_Surp;
+# setting:
+# set the number of static hugepages to 0 and overcommit to 0
+
+set_nr(){
+ echo $1 > $nr_hpages
+ echo $2 > $nr_overcmt
+ ret=`cat $nr_hpages`
+ if [ $ret -ne $1 ] ;then
+ echo set nr_hugepages failed!
+ return 1
+ fi
+ ret=`cat $nr_overcmt`
+ if [ $ret -ne $2 ] ;then
+ echo set nr_overcommit_hugepages failed!
+ return 1
+ fi
+ return 0
+}
+
+test_hpage(){
+ set_nr $1 $2
+	./scenario_test/test_hugepage -n $1 -o $2 -a 50 -c $3
+ if [ $? -ne 0 ] ;then
+ return 1
+ fi
+ return 0
+}
+
+echo 'test_hpage 0 0 0
+ test_hpage 20 0 0
+ test_hpage 60 0 0
+ test_hpage 20 20 0
+ test_hpage 20 40 0
+ test_hpage 0 60 0' | while read line
+do
+ $line
+ if [ $? -ne 0 ] ;then
+ echo "$line failed!"
+ exit 1
+ fi
+ echo "$line success"
+done
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_max_50000_groups.c b/tools/testing/sharepool/testcase/scenario_test/test_max_50000_groups.c
new file mode 100644
index 000000000000..cc8a1278c08d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_max_50000_groups.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define MAX_GROUP 49999
+#define SPG_ID_AUTO_MIN 100000
+#define PROC_NUM 20
+
+int sem_id;
+int msg_id;
+struct msgbuf {
+ long mtype;
+ int group_num;
+};
+
+int group_count;
+
+static void send_msg(int msgid, int msgtype, int group_num)
+{
+ struct msgbuf msg = {
+ .mtype = msgtype,
+ .group_num = group_num,
+ };
+
+	if (msgsnd(msgid, (void *) &msg, sizeof(msg.group_num),
+ IPC_NOWAIT) == -1) {
+ perror("msgsnd error");
+ exit(EXIT_FAILURE);
+ } else {
+ pr_info("child %d message sent success: group_num: %d",
+ msgtype - 1, group_num);
+ }
+}
+
+static void get_msg(int msgid, int msgtype)
+{
+ struct msgbuf msg;
+ if (msgrcv(msgid, (void *) &msg, sizeof(msg.group_num), msgtype,
+ MSG_NOERROR) == -1) {
+ if (errno != ENOMSG) {
+ perror("msgrcv");
+ exit(EXIT_FAILURE);
+ }
+ pr_info("No message available for msgrcv()");
+ } else {
+ pr_info("child %d message received success: group_num: %d",
+ msgtype - 1, msg.group_num);
+ group_count += msg.group_num;
+ }
+}
+
+/* a child creates groups until failure */
+static int test1(void)
+{
+ int ret = 0;
+ int count = 0;
+
+ while (1) {
+ if (wrap_add_group(getpid(), PROT_READ, SPG_ID_AUTO) < 0)
+ break;
+ count++;
+ }
+
+ pr_info("proc %d add %d groups", getpid(), count);
+ send_msg(msg_id, getpid(), count);
+ sem_inc_by_one(sem_id);
+	while (1) {
+		/* spin until killed by the parent */
+	}
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int count = 0;
+ int status = 0;
+ char cpid[SIZE];
+ pid_t child[PROC_NUM];
+
+ sem_id = sem_create(1234, "semid");
+ int msgkey = 2345;
+ msg_id = msgget(msgkey, IPC_CREAT | 0666);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ FORK_CHILD_ARGS(child[i], test1());
+
+ for (int i = 0; i < PROC_NUM; i++)
+ get_msg(msg_id, (int)child[i]);
+
+ pr_info("\n%d Groups are created.\n", group_count);
+ sleep(5);
+
+ sem_dec_by_val(sem_id, PROC_NUM);
+// cat_attr("/proc/sharepool/spa_stat");
+
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(child[i]);
+
+ msgctl(msg_id, IPC_RMID, 0);
+ sem_close(sem_id);
+ return ret;
+}
+
+/*
+ * testcase1: create share groups with SPG_ID_AUTO; expect at most 49999 in total.
+ */
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "create share groups with SPG_ID_AUTO, expect at most 49999")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_oom.c b/tools/testing/sharepool/testcase/scenario_test/test_oom.c
new file mode 100644
index 000000000000..ca78d71fbc1a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_oom.c
@@ -0,0 +1,135 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * 执行流程:sp_alloc直至触发OOM
+ */
+
+#define PROC_NUM 20
+#define PROT (PROT_READ | PROT_WRITE)
+#define HP_SIZE (2 * 1024 * 1024UL)
+
+static int testcase1(void)
+{
+ int ret = 0;
+ unsigned long ret_addr;
+ int status;
+ pid_t child[PROC_NUM];
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ int count = 0;
+ TEST_CHECK(wrap_add_group(getpid(), PROT, i + 1), error);
+ while (1) {
+ ret_addr = (unsigned long)wrap_sp_alloc(i + 1, HP_SIZE, 0);
+ if (ret_addr == -1)
+ pr_info("alloc failed! ret_addr: %lx, errno: %d", ret_addr, errno);
+ else
+ pr_info("proc%d: alloc success %d time.", i, ++count);
+ }
+ }
+ child[i] = pid;
+ }
+
+ for (int i = 0; i < PROC_NUM; i++)
+ waitpid(child[i], &status, 0);
+
+error:
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret = 0;
+ unsigned long ret_addr;
+ int status;
+ pid_t child[PROC_NUM];
+ pid_t add_workers[PROC_NUM];
+
+	// start workers that repeatedly join and leave groups
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) { // add group -> del group
+ int count = 0;
+ while (1) {
+ int grp_id = PROC_NUM + i + 1;
+ ret = wrap_add_group(getpid(), PROT, grp_id);
+ if (ret < 0) {
+ pr_info("add group %d failed. ret: %d", grp_id, ret);
+ continue;
+ }
+ pr_info("add group %d success.", grp_id);
+ ret = wrap_del_from_group(getpid(), grp_id);
+ if (ret < 0) {
+ pr_info("del from group %d failed unexpected. ret: %d", grp_id, ret);
+ break;
+ }
+ }
+ exit(ret);
+ }
+ add_workers[i] = pid;
+ }
+
+	// start the allocating processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ int count = 0;
+ TEST_CHECK(wrap_add_group(getpid(), PROT, i + 1), error); // group id [1, PROC_NUM]
+ while (1) {
+ ret_addr = (unsigned long)wrap_sp_alloc(i + 1, HP_SIZE, 0);
+ if (ret_addr == -1)
+ pr_info("alloc failed! ret_addr: %lx, errno: %d", ret_addr, errno);
+ else
+ pr_info("proc%d: alloc success %d time.", i, ++count);
+ }
+ }
+ child[i] = pid;
+ }
+
+	// wait for the allocating processes to be OOM-killed
+ for (int i = 0; i < PROC_NUM; i++)
+ waitpid(child[i], &status, 0);
+
+	// kill the group-join workers
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(add_workers[i]);
+
+error:
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "call sp_alloc until OOM is triggered")
+	TESTCASE_CHILD(testcase2, "call sp_alloc until OOM while workers repeatedly join and leave groups")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_proc_sp_group_state.c b/tools/testing/sharepool/testcase/scenario_test/test_proc_sp_group_state.c
new file mode 100644
index 000000000000..8e4ba2881800
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_proc_sp_group_state.c
@@ -0,0 +1,170 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 100
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+/*
+ * Precondition: the process joins a group first.
+ * testcase1: alloc 1G of normal pages, then check /proc/<pid>/sp_group output.
+ * testcase2: alloc 1G of huge pages, then check /proc/<pid>/sp_group output.
+ */
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int alloc_large_repeat(bool hugepage, int repeat)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = hugepage ? 1 : 0,
+ .size = ATOMIC_TEST_SIZE,
+ .spg_id = 1,
+ };
+
+ pr_info("start to alloc...");
+ for (int i = 0; i < repeat; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("alloc %s failed. errno %d",
+ hugepage ? "huge page" : "normal page",
+ errno);
+ return ret;
+ } else {
+ pr_info("alloc %s success %d time.",
+ hugepage ? "huge page" : "normal page",
+ i + 1);
+ }
+ sharepool_print();
+ mem_show();
+ }
+ return 0;
+}
+
+// after allocating, pause so the stats can be cat'ed manually
+static int testcase1(void)
+{
+ char pid_str[SIZE];
+
+ alloc_large_repeat(false, 1);
+ pr_info("process %d suspended, cat /proc/%d/sp_group, then kill",
+ getpid(), getpid());
+
+ sprintf(pid_str, "/proc/%d/sp_group", getpid());
+ cat_attr(pid_str);
+ sleep(1);
+
+ return 0;
+}
+
+static int testcase2(void)
+{
+ char pid_str[SIZE];
+
+ alloc_large_repeat(true, 1);
+ pr_info("process %d suspended, cat /proc/%d/sp_group, then kill",
+ getpid(), getpid());
+
+ sprintf(pid_str, "/proc/%d/sp_group", getpid());
+ cat_attr(pid_str);
+ sleep(1);
+
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret = 0;
+ char pid_str[SIZE];
+ sprintf(pid_str, "/proc/%d/sp_group", getpid());
+
+ pr_info("process %d no connection with sharepool, cat /proc/%d/sp_group ...",
+ getpid(), getpid());
+ cat_attr(pid_str);
+ sleep(1);
+
+ ret = alloc_large_repeat(true, 1);
+
+ pr_info("process %d now alloc sharepool memory, cat /proc/%d/sp_group ...",
+ getpid(), getpid());
+ cat_attr(pid_str);
+ sleep(1);
+
+ return ret;
+}
+
+static int testcase4(void)
+{
+ char pid_str[SIZE];
+
+ for (int i = 0; i < 100; i++) {
+ sprintf(pid_str, "/proc/%d/sp_group", i);
+ cat_attr(pid_str);
+ }
+
+ sleep(1);
+ return 0;
+}
+
+
+/* testcase1/2: a single process allocs 1G of normal/huge pages, then exits.
+ * testcase3/4: cat /proc/<pid>/sp_group in various states and check that the
+ * debug output stays sane. */
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "single process allocs 1G of normal pages once, then exits; check the debug stats print correctly")
+	TESTCASE_CHILD(testcase2, "single process allocs 1G of huge pages once, then exits; check the debug stats print correctly")
+	TESTCASE_CHILD(testcase3, "cat sp_group before the process touches sharepool, then join a group, alloc, and cat again")
+	TESTCASE_CHILD(testcase4, "cat /proc/1~100/sp_group; passing means no deadlock")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_vmalloc_cgroup.c b/tools/testing/sharepool/testcase/scenario_test/test_vmalloc_cgroup.c
new file mode 100644
index 000000000000..c8e5748cdebe
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_vmalloc_cgroup.c
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Flow: vmalloc kernel memory, access it, and exit without freeing.
+ */
+
+#define PROC_NUM 20
+#define PROT (PROT_READ | PROT_WRITE)
+#define HP_SIZE (2 * 1024 * 1024UL)
+
+static int test(bool ishuge)
+{
+ int ret = 0;
+ unsigned long kaddr;
+ unsigned long size = 10UL * PMD_SIZE;
+
+ kaddr = wrap_vmalloc(size, ishuge);
+ if (!kaddr) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ KAREA_ACCESS_SET('A', kaddr, size, out);
+ return ret;
+out:
+ return -1;
+}
+
+static int testcase1(void) { return test(false); }
+static int testcase2(void) { return test(true); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "vmalloc 20M of normal pages, exit without freeing")
+	TESTCASE_CHILD(testcase2, "vmalloc 20M of huge pages, exit without freeing")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/stress_test/Makefile b/tools/testing/sharepool/testcase/stress_test/Makefile
new file mode 100644
index 000000000000..30c6456130d9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/Makefile
@@ -0,0 +1,15 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ mkdir -p $(TOOL_BIN_DIR)/stress_test
+ cp $(testcases) $(TOOL_BIN_DIR)/stress_test
+ cp stress_test.sh $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/stress_test/sp_ro_fault_injection.sh b/tools/testing/sharepool/testcase/stress_test/sp_ro_fault_injection.sh
new file mode 100644
index 000000000000..2bc729f09fd0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/sp_ro_fault_injection.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+# fault injection test for SP_RO read only area
+
+fn_fail_page_alloc()
+{
+ echo Y > /sys/kernel/debug/fail_page_alloc/task-filter
+ echo 10 > /sys/kernel/debug/fail_page_alloc/probability
+ echo 1 > /sys/kernel/debug/fail_page_alloc/interval
+ printf %#x -1 > /sys/kernel/debug/fail_page_alloc/times
+ echo 0 > /sys/kernel/debug/fail_page_alloc/space
+ echo 2 > /sys/kernel/debug/fail_page_alloc/verbose
+ bash -c "echo 1 > /proc/self/make-it-fail && exec $*"
+}
+
+fn_page_alloc_fault()
+{
+ fn_fail_page_alloc ./api_test/ro_test
+}
+
+fn_page_alloc_fault
diff --git a/tools/testing/sharepool/testcase/stress_test/stress_test.sh b/tools/testing/sharepool/testcase/stress_test/stress_test.sh
new file mode 100755
index 000000000000..fd83c8d51015
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/stress_test.sh
@@ -0,0 +1,47 @@
+#!/bin/bash
+
+set -x
+
+echo 'test_u2k_add_and_kill -g 2999 -p 1023 -n 10000
+ test_alloc_add_and_kill -g 2999 -p 1023 -n 10000
+ test_concurrent_debug -g 2999 -p 1023 -n 10000
+ test_mult_u2k -n 5000 -s 1000 -r 500
+ test_alloc_free_two_process -g 2999 -p 1023 -n 100 -s 10000
+ test_statistics_stress
+ test_mult_proc_interface' | while read line
+do
+ let flag=0
+ ./stress_test/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase stress_test/$line failed"
+ let flag=1
+ fi
+
+ sleep 3
+
+ # print spa_stat
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+ let flag=1
+ fi
+ # print proc_stat
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+ let flag=1
+ fi
+
+ cat /proc/sharepool/proc_overview
+ # if anything leaked --> exit
+ if [ $flag -eq 1 ] ;then
+ exit 1
+ fi
+ echo "testcase stress_test/$line success"
+
+ cat /proc/meminfo
+ free -m
+
+done
diff --git a/tools/testing/sharepool/testcase/stress_test/test_alloc_add_and_kill.c b/tools/testing/sharepool/testcase/stress_test/test_alloc_add_and_kill.c
new file mode 100644
index 000000000000..7278815bd328
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_alloc_add_and_kill.c
@@ -0,0 +1,347 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Test flow:
+ * grandchild:
+ * allocate memory 100 times in a row, then free it all, in a loop
+ * child:
+ * keep killing or creating grandchild processes
+ * Options:
+ * -n number of times to kill or create grandchild processes
+ * -p number of processes per group
+ * -g number of sharepool groups
+ * -s size of each allocation made by a grandchild process
+ */
+
+#define NR_GROUP 3000
+#define MAX_PROC_PER_GRP 1024
+#define NR_HOLD_AREAS 100
+
+static sem_t *child_sync[NR_GROUP];
+static sem_t *grandchild_sync[NR_GROUP];
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int kill_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int grandchild_id)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", grandchild_id, getpid(), ##args)
+ int ret;
+
+ /* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[NR_HOLD_AREAS] = {0};
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ alloc_infos[i].flag = 0;
+ alloc_infos[i].spg_id = group_id;
+ alloc_infos[i].size = alloc_size;
+ }
+
+ int top = 0;
+ int count = 0;
+ while (1) {
+ struct sp_alloc_info *info = alloc_infos + top++;
+ ret = ioctl_alloc(dev_fd, info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return -1;
+ } else {
+ if (IS_ERR_VALUE(info->addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", info->addr);
+ return -1;
+ }
+ }
+
+ memset((void *)info->addr, 'z', info->size);
+
+ if (top == NR_HOLD_AREAS) {
+ top = 0;
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ ret = ioctl_free(dev_fd, &alloc_infos[i]);
+ if (ret < 0) {
+ pr_local_info("sp_free failed, %d", ret);
+ return ret;
+ }
+ }
+ pr_info("grandchild process id:%d finished %dth alloc-free-100times-run.", grandchild_id, ++count);
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_per_group; i++) {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ //pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else {
+ //pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+ }
+
+ /* notify the grandchild that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* wait for the grandchild to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ pr_local_info("%dth sp group %d, create %d processes and add group success!!", arg, group_id, process_per_group);
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+ /* randomly kill or create processes */
+ for (int i = 0; i < kill_num; i++) {
+ sleep(1);
+ pr_info("group %d %dth interruption, %d times left.", group_id, i + 1, kill_num - i - 1);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ childs[idx] = 0;
+ } else {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+
+ /* notify the grandchild that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* wait for the grandchild to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage:\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+ case 'p': // number of processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+ case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+ case 'n': // number of times to kill or create child processes
+ kill_num = atoi(optarg);
+ if (kill_num > 100000 || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported option: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "allocate memory 100 times in a row, free it, and loop; meanwhile keep killing processes and creating new ones that join the group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/stress_test/test_alloc_free_two_process.c b/tools/testing/sharepool/testcase/stress_test/test_alloc_free_two_process.c
new file mode 100644
index 000000000000..63621dbf5b7d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_alloc_free_two_process.c
@@ -0,0 +1,303 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Two groups with two processes each, allocating and freeing memory many times concurrently.
+ *
+ * Semaphores are used for synchronization; processes sleep right after creation.
+ */
+
+#define NR_GROUP 3000
+#define MAX_PROC_PER_GRP 1024
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int alloc_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg, sem_t *child_sync, sem_t *grandchild_sync)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+ struct sp_alloc_info *alloc_infos = malloc(sizeof(*alloc_infos) * alloc_num);
+ if (!alloc_infos) {
+ pr_local_info("malloc failed");
+ return -1;
+ }
+
+ /* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync);
+ } while ((ret != 0) && errno == EINTR);
+
+ sleep(1); // it seems sem_wait doesn't work as expected
+ pr_local_info("start!!, ret is %d, errno is %d", ret, errno);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ goto error_out;
+ }
+
+ for (int i = 0; i < alloc_num; i++) {
+ (alloc_infos + i)->flag = 0;
+ (alloc_infos + i)->spg_id = group_id;
+ (alloc_infos + i)->size = alloc_size;
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ goto error_out;
+ } else {
+ if (IS_ERR_VALUE(alloc_infos[i].addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_infos[i].addr);
+ goto error_out;
+ }
+ }
+
+ memset((void *)alloc_infos[i].addr, 'z', alloc_infos[i].size);
+ }
+
+ sem_post(grandchild_sync);
+ do {
+ ret = sem_wait(child_sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl free failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ pr_local_info("exit!!");
+ return 0;
+
+error_out:
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ return -1;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ sem_t *child_sync[MAX_PROC_PER_GRP] = {0};
+ sem_t *grandchild_sync[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ // create sync semaphores for the grandchildren
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ // create sync semaphores for the children
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ // create grandchild processes and add them to the group
+ for (int i = 0; i < process_per_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num, child_sync[i], grandchild_sync[i]);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto error_out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+ /* notify the grandchild that it was added to the group */
+ sem_post(child_sync[i]);
+ }
+ }
+
+ for (int i = 0; i < process_per_group; i++)
+ do {
+ ret = sem_wait(grandchild_sync[i]);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < process_per_group; i++)
+ sem_post(child_sync[i]);
+ pr_local_info("grandchild-processes start to do sp_free");
+
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+ ret = -1;
+ }
+ }
+ pr_local_info("exit!!");
+ return ret;
+
+error_out:
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ };
+ pr_local_info("exit!!");
+ return -1;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage:\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+ case 'p': // number of processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+ case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+ case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > 100000 || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "two groups with two processes each, allocating and freeing memory concurrently; basic verification")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/stress_test/test_concurrent_debug.c b/tools/testing/sharepool/testcase/stress_test/test_concurrent_debug.c
new file mode 100644
index 000000000000..6de3001e5632
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_concurrent_debug.c
@@ -0,0 +1,359 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Test flow:
+ * grandchild:
+ * allocate memory 100 times in a row, then free it all, in a loop
+ * child:
+ * keep killing or creating grandchild processes
+ * Options:
+ * -n number of times to kill or create grandchild processes
+ * -p number of processes per group
+ * -g number of sharepool groups
+ * -s size of each allocation made by a grandchild process
+ */
+
+#define NR_GROUP 3000
+#define MAX_PROC_PER_GRP 1024
+#define NR_HOLD_AREAS 100
+
+static sem_t *child_sync[NR_GROUP];
+static sem_t *grandchild_sync[NR_GROUP];
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int kill_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int grandchild_id)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", grandchild_id, getpid(), ##args)
+ int ret;
+
+ /* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[NR_HOLD_AREAS] = {0};
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ alloc_infos[i].flag = 0;
+ alloc_infos[i].spg_id = group_id;
+ alloc_infos[i].size = alloc_size;
+ }
+
+ int top = 0;
+ int count = 0;
+ while (1) {
+ struct sp_alloc_info *info = alloc_infos + top++;
+ ret = ioctl_alloc(dev_fd, info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return -1;
+ } else {
+ if (IS_ERR_VALUE(info->addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", info->addr);
+ return -1;
+ }
+ }
+
+ memset((void *)info->addr, 'z', info->size);
+
+ if (top == NR_HOLD_AREAS) {
+ top = 0;
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ ret = ioctl_free(dev_fd, &alloc_infos[i]);
+ if (ret < 0) {
+ pr_local_info("sp_free failed, %d", ret);
+ return ret;
+ }
+ }
+ pr_info("grandchild process id:%d finished %dth alloc-free-100times-run.", grandchild_id, ++count);
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_per_group; i++) {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ //pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else {
+ //pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+ }
+
+ /* notify the grandchild that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* wait for the grandchild to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ pr_local_info("%dth sp group %d, create %d processes and add group success!!", arg, group_id, process_per_group);
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+ /* randomly kill or create processes */
+ for (int i = 0; i < kill_num; i++) {
+ sleep(1);
+ pr_info("group %d %dth interruption, %d times left.", group_id, i + 1, kill_num - i - 1);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ childs[idx] = 0;
+ } else {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+
+ /* notify the grandchild that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* 等待子进程获取到加组信息 */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage:\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+ case 'p': // number of processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+ case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+ case 'n': // number of times to kill or create child processes
+ kill_num = atoi(optarg);
+ if (kill_num > 100000 || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported option: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+ pid_t pid;
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ pid = fork();
+ if (pid == 0) {
+ while(1) {
+ usleep(200);
+ sharepool_log("sharepool_log");
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, &status, 0);
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "continuously allocate and free memory while repeatedly printing debug statistics")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/stress_test/test_mult_u2k.c b/tools/testing/sharepool/testcase/stress_test/test_mult_u2k.c
new file mode 100644
index 000000000000..04a5a3e5c6e0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_mult_u2k.c
@@ -0,0 +1,514 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Nov 24 15:40:31 2020
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_ALLOC 100000
+#define MAX_SHARE 1000
+#define MAX_READ 100000
+
+static int alloc_num = 20;
+static int share_num = 20;
+static int read_num = 20;
+
+struct __thread_info {
+ struct sp_make_share_info *u2k_info;
+ struct karea_access_info *karea_info;
+};
+
+/*
+ * User process A joins a group, allocates and writes memory region N, then shares it to the
+ * kernel via repeated u2k calls (share_num times); the kernel module successfully reads the
+ * same region N through each returned kva, read_num times per kva.
+ * After process A stops sharing N, kernel reads of N fail. Process A then frees region N.
+ */
+static void *grandchild1(void *arg)
+{
+ struct karea_access_info *karea_info = (struct karea_access_info*)arg;
+ int ret = 0;
+ for (int j = 0; j < read_num; j++) {
+ ret = ioctl_karea_access(dev_fd, karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ pthread_exit((void*)ret);
+ }
+ pr_info("thread read u2k area %dth time success", j);
+ }
+ pr_info("thread read u2k area %d times success", read_num);
+ pthread_exit((void*)ret);
+}
+
+static int child1(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+ pr_info("want to add group_id: %d", group_id);
+
+ // add group()
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("now added into group_id: %d", alloc_info->spg_id);
+
+ // alloc()
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("alloc %0lx memory success", alloc_info->size);
+
+ // write
+ memset((void *)alloc_info->addr, 'o', alloc_info->size);
+ pr_info("memset success");
+
+ // u2k
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+ struct karea_access_info *karea_info = (struct karea_access_info*)malloc(share_num * sizeof(struct karea_access_info));
+
+ for (int i = 0; i < share_num; i++) { // the same user memory region can be shared to the kernel many times
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ karea_info[i].mod = KAREA_CHECK;
+ karea_info[i].value = 'o';
+ karea_info[i].addr = u2k_info.addr;
+ karea_info[i].size = u2k_info.size;
+ }
+ pr_info("u2k share %d times success", share_num);
+
+ //repeated kernel reads (too slow, so disabled)
+ //for (int j = 0; j < read_num; j++) {
+ for (int i = 0; i < share_num; i++) {
+ ret = ioctl_karea_access(dev_fd, &karea_info[i]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+ pr_info("kernel read %dth %0lx area success", i, alloc_info->size);
+ }
+ //}
+ //pr_info("kernel read %d times success", read_num);
+
+ //concurrent kernel reads, one thread per shared kva
+ pthread_t childs[MAX_SHARE] = {0};
+ int status = 0;
+ for (int i = 0; i < share_num; i++) {
+ ret = pthread_create(&childs[i], NULL, grandchild1, (void *)&karea_info[i]);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ exit(-1);
+ }
+ }
+ pr_info("create %d threads success", share_num);
+
+ void *child_ret;
+ for (int i = 0; i < share_num; i++) {
+ pthread_join(childs[i], &child_ret);
+ if ((int)child_ret != 0) {
+ pr_info("grandchild1 %d test failed, %d", i, (int)child_ret);
+ return (int)child_ret;
+ }
+ }
+ pr_info("exit %d threads success", share_num);
+
+ for (int i = 0; i < share_num; i++) {
+ u2k_info.addr = karea_info[i].addr;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+ pr_info("unshare u2k area %d times success", share_num);
+
+ /*
+ * Accessing the memory again after unshare crashes the kernel, so this
+ * part is only run during manual testing.
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ free(karea_info);
+ return ret;
+}
+
+/*
+ * User process A joins a group, allocates and writes alloc_num memory regions, and shares each
+ * to the kernel via u2k; the kernel module successfully reads every region through its kva,
+ * read_num times per kva.
+ * After process A stops sharing, kernel reads fail. Process A then frees the memory.
+ */
+static void* grandchild2(void *arg)
+{
+ int ret = 0;
+ struct __thread_info* thread2_info = (struct __thread_info*)arg;
+ ret = ioctl_u2k(dev_fd, thread2_info->u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ pthread_exit((void*)ret);
+ }
+ thread2_info->karea_info->mod = KAREA_CHECK;
+ thread2_info->karea_info->value = 'p';
+ thread2_info->karea_info->addr = thread2_info->u2k_info->addr;
+ thread2_info->karea_info->size = thread2_info->u2k_info->size;
+ pthread_exit((void*)ret);
+}
+
+static int child2(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ // allocate alloc_num memory areas
+ struct sp_alloc_info *all_alloc_info = (struct sp_alloc_info*)malloc(alloc_num * sizeof(struct sp_alloc_info));
+ for (int i = 0; i < alloc_num; i++) {
+ all_alloc_info[i].flag = alloc_info->flag;
+ all_alloc_info[i].spg_id = alloc_info->spg_id;
+ all_alloc_info[i].size = alloc_info->size;
+ ret = ioctl_alloc(dev_fd, &all_alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)all_alloc_info[i].addr, 'p', all_alloc_info[i].size);
+ }
+
+ struct sp_make_share_info *all_u2k_info = (struct sp_make_share_info*)malloc(alloc_num * sizeof(struct sp_make_share_info));
+
+ struct karea_access_info *karea_info = (struct karea_access_info*)malloc(alloc_num * sizeof(struct karea_access_info));
+
+ // call u2k concurrently:
+ // create alloc_num threads, each calling u2k on one allocated area and storing the returned address
+ pthread_t childs[MAX_ALLOC] = {0};
+ struct __thread_info thread2_info[MAX_ALLOC];
+ int status = 0;
+ for (int i = 0; i < alloc_num; i++) {
+ all_u2k_info[i].uva = all_alloc_info[i].addr;
+ all_u2k_info[i].size = all_alloc_info[i].size;
+ all_u2k_info[i].pid = getpid();
+ thread2_info[i].u2k_info = &all_u2k_info[i];
+ thread2_info[i].karea_info = &karea_info[i];
+ ret = pthread_create(&childs[i], NULL, grandchild2, (void *)&thread2_info[i]);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ exit(-1);
+ }
+ }
+
+ // join all threads
+ void *child_ret;
+ for (int i = 0; i < alloc_num; i++) {
+ pthread_join(childs[i], &child_ret);
+ if ((int)child_ret != 0) {
+ pr_info("grandchild2 %d test failed, %d", i, (int)child_ret);
+ return (int)child_ret;
+ }
+ }
+
+ // kernel reads the memory
+ for (int j = 0; j < read_num; j++) {
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_karea_access(dev_fd, &karea_info[i]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+ }
+ }
+
+ // unshare all areas
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_unshare(dev_fd, &all_u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ /*
+ * Accessing the memory after unshare would crash the kernel; run this only during manual testing.
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ // free all areas
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, &all_alloc_info[i]);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+ free(all_alloc_info);
+ free(all_u2k_info);
+ free(karea_info);
+ return ret;
+}
+
+/*
+ * User-space process A joins the group, allocates and writes memory area N, then repeats share_num times: share to the kernel via u2k, have the kernel module read N successfully, and call unshare from user space.
+ * After the final unshare, kernel reads of N fail. Process A then frees N.
+ */
+static int child3(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ // alloc
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info->addr, 'q', alloc_info->size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+ // repeat u2k -> kernel read -> unshare
+ for (int i = 0; i < share_num; i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'q',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ /*
+ * Accessing the memory after unshare would crash the kernel; run this only during manual testing.
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ return ret;
+}
+
+static void print_help()
+{
+ printf("Usage:./test_multi_u2k -n alloc_num -s share_num -r read_num\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "n:s:r:")) != -1) {
+ switch (opt) {
+ case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > MAX_ALLOC || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's': // number of u2k shares
+ share_num = atoi(optarg);
+ if (share_num > MAX_SHARE || share_num <= 0) {
+ printf("share number invalid\n");
+ return -1;
+ }
+ break;
+ case 'r': // number of kernel reads
+ read_num = atoi(optarg);
+ if (read_num > MAX_READ || read_num <= 0) {
+ printf("read number invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+#define PROC_NUM 4
+static struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .spg_id = 10,
+ .size = 100 * PAGE_SIZE, //400K
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .spg_id = 12,
+ .size = 10 * PMD_SIZE, // 20M
+ },
+ {
+ .flag = SP_DVPP,
+ .spg_id = 19,
+ .size = 100000, // ~100K
+ },
+ {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = 19,
+ .size = 10000000, // ~10M
+ },
+};
+
+int (*child_funcs[])(struct sp_alloc_info *) = {
+ child1,
+ child2,
+ child3,
+};
+
+static int testcase(int child_idx)
+{
+ int ret = 0;
+ pid_t procs[PROC_NUM];
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ ret = -1;
+ goto error_out;
+ } else if (pid == 0) {
+ exit(child_funcs[child_idx](alloc_infos + i));
+ } else {
+ procs[i] = pid;
+ }
+ }
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ int status = 0;
+ waitpid(procs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase failed!!, func: %d, alloc info: %d", child_idx, i);
+ ret = -1;
+ } else {
+ pr_info("testcase success!!, func: %d, alloc info: %d", child_idx, i);
+ }
+ }
+
+ return ret;
+error_out:
+ return -1;
+}
+
+static int testcase1(void) { return testcase(0); }
+static int testcase2(void) { return testcase(1); }
+static int testcase3(void) { return testcase(2); }
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "user space repeatedly shares one area to the kernel via u2k; the kernel module repeatedly reads it through each kva")
+ TESTCASE_CHILD(testcase2, "create 100 threads, each calling u2k on one allocated area and storing the returned address; the kernel then reads the memory")
+ TESTCASE_CHILD(testcase3, "loop: user-space u2k -> kernel read -> unshare")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/stress_test/test_sharepool_enhancement_stress_cases.c b/tools/testing/sharepool/testcase/stress_test/test_sharepool_enhancement_stress_cases.c
new file mode 100644
index 000000000000..6e34d2fc8044
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_sharepool_enhancement_stress_cases.c
@@ -0,0 +1,692 @@
+#include "sharepool_lib.h"
+#include "sem_use.h"
+#include <stdlib.h>
+#include <errno.h>
+#include <assert.h>
+#include <pthread.h>
+#include <sys/types.h>
+
+#define PROC_NUM 8
+#define THREAD_NUM 5
+#define GROUP_NUM 16
+#define ALLOC_TYPE 4
+#define REPEAT_TIMES 2
+#define ALLOC_SIZE PAGE_SIZE
+#define PROT (PROT_READ | PROT_WRITE)
+
+static int group_ids[GROUP_NUM];
+static int default_id = 1;
+static int semid;
+
+static int add_multi_group();
+static int check_multi_group();
+static int delete_multi_group();
+static int process();
+void *thread_and_process_helper(int group_id);
+void *del_group_thread(void *arg);
+void *del_proc_from_group(void *arg);
+
+
+// share-group process stress test: many processes joining and leaving the group
+static int testcase1(void)
+{
+ int group_num = 10;
+ int temp_group_id = 1;
+ int proc_num = 10;
+ int prints_num = 3;
+
+ int ret = 0;
+
+ int childs[proc_num];
+ int prints[prints_num];
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, temp_group_id);
+ if (ret < 0) {
+ pr_info("parent %d add into group failed. errno: %d", getpid(), ret);
+ return -1;
+ }
+
+ // spawn processes that repeatedly join and leave the group
+ for (int i = 0; i < proc_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, temp_group_id);
+ if (ret < 0)
+ pr_info("child %d add into group failed. errno: %d", getpid(), ret);
+ ret = wrap_del_from_group(getpid(), temp_group_id);
+ if (ret < 0)
+ pr_info("child %d del from group failed. errno: %d", getpid(), ret);
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ // continuously print the maintenance/debug interface
+ for(int i = 0; i < prints_num; i++){
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ // clean up the processes when the test ends
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ int status;
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ int status;
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+// share-group count limit stress test
+static int testcase2(void)
+{
+ int ret = 0;
+
+ int default_id = 1;
+ int group_id = 2;
+
+ int prints_num = 3;
+ int prints[prints_num];
+
+ // continuously print the maintenance/debug interface
+ for(int i = 0; i < prints_num; i++){
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ // create the default share group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("parent %d add into group failed. errno: %d", getpid(), ret);
+ return -1;
+ }
+
+ // continuously create share groups
+ int group_create_pid = fork();
+ if (group_create_pid == 0){
+ while(group_id > 0){
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0)
+ pr_info("group %d creation failed. errno: %d", group_id, ret);
+ group_id++;
+ }
+ }
+
+ // continuously fork processes that join the default share group
+ int process_create_pid = fork();
+ if (process_create_pid == 0){
+ while (1){
+ int temp_pid = fork();
+ if (temp_pid == 0){
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("child %d add into group failed. errno: %d", getpid(), ret);
+ }
+ for (int i = 0; i < 3; i++)
+ sleep(1);
+ exit(0);
+ }
+ }
+ }
+
+ sleep(10);
+
+ int status;
+ kill(group_create_pid, SIGKILL);
+ waitpid(group_create_pid, &status, 0);
+ kill(process_create_pid, SIGKILL);
+ waitpid(process_create_pid, &status, 0);
+
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+// share-group memory alloc/free stress test
+static int testcase3(void)
+{
+ int default_id = 1;
+ int ret = 0;
+ int proc_num = 1000;
+ int childs[proc_num];
+
+ int page_size = 4096;
+
+ int prints_num = 3;
+ int prints[prints_num];
+
+ // continuously print the maintenance/debug interface
+ for(int i = 0; i < prints_num; i++){
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ // fork child processes that allocate and free 4K memory
+ for(int i=0; i<proc_num; i++){
+ int pid = fork();
+ if (pid == 0){
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ while(1){
+ void *addr;
+ addr = wrap_sp_alloc(default_id, page_size, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+ }
+ }
+ else{
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+// share-group large-size memory alloc/free stress test
+static int testcase4(void)
+{
+ int default_id = 1;
+ int ret = 0;
+ int proc_num = 50;
+ int childs[proc_num];
+
+ int page_size = 1073741824; // 1G allocation size
+
+ int prints_num = 3;
+ int prints[prints_num];
+
+ // print sharepool maintenance interface
+ for(int i = 0; i < prints_num; i++){
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ // allocate and free memory
+ for(int i=0; i<proc_num; i++){
+ int pid = fork();
+ if (pid == 0){
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ while(1){
+ void *addr;
+ addr = wrap_sp_alloc(default_id, page_size, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+ }
+ }
+ else{
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+
+// maintenance/debug interface print stress test
+static int testcase5(void)
+{
+ int ret = 0;
+
+ int default_id = 1;
+ int group_id = 2;
+
+ int prints_num = 3;
+ int prints[prints_num];
+
+ // continuously print the maintenance/debug interface
+ for(int i = 0; i < prints_num; i++){
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ //TESTCASE_CHILD(testcase1, "share-group process stress test: many processes joining and leaving")
+ //TESTCASE_CHILD(testcase2, "share-group count limit stress test")
+ //TESTCASE_CHILD(testcase3, "share-group memory alloc/free stress test")
+ //TESTCASE_CHILD(testcase4, "share-group large-size memory alloc/free stress test")
+ //TESTCASE_CHILD(testcase5, "maintenance/debug interface print stress test")
+};
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else
+ for (int i = 0; i < GROUP_NUM; i++)
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ return ret;
+}
+
+static int delete_multi_group()
+{
+ int ret = 0;
+ int fail = 0, suc = 0;
+ // delete from all groups
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_del_from_group(getpid(), group_ids[i]);
+ if (ret < 0) {
+ //pr_info("process %d delete from group %d failed, errno: %d", getpid(), group_ids[i], errno);
+ fail++;
+ }
+ else {
+ pr_info("process %d delete from group %d success", getpid(), group_ids[i]);
+ suc++;
+ }
+ }
+
+ return fail;
+}
+
+static int process()
+{
+ int ret = 0;
+ for (int j = 0; j < REPEAT_TIMES; j++) {
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = thread_and_process_helper(group_ids[i]);
+ if (ret < 0) {
+ pr_info("thread_and_process_helper failed");
+ return -1;
+ }
+ }
+ }
+
+ return ret;
+}
+
+static int try_del_from_group(int group_id)
+{
+ if (wrap_del_from_group(getpid(), group_id) == 0)
+ return 0;
+ return -errno;
+}
+
+void *thread_and_process_helper(int group_id)
+{
+ int ret = 0, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TYPE] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TYPE] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+ // huge pages
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+ // huge pages, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+ // normal pages, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+ // normal pages
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+ //pr_info("check vmalloc memory succeess\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+ //pr_info("check vmalloc_hugepage memory succeess\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ return 0;
+
+error:
+ return -1;
+}
+
+void *del_group_thread(void *arg)
+{
+ int ret = 0;
+ int i = (int)arg;
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ pr_info("thread %d now tries to exit from group %d", getpid() + i + 1, default_id);
+ ret = wrap_del_from_group(getpid() + i + 1, default_id);
+ if (ret < 0)
+ pthread_exit((void *)-1);
+ pthread_exit((void *)0);
+}
+
+void *del_proc_from_group(void *arg)
+{
+ sem_dec_by_one(semid);
+ pthread_exit((void *)(wrap_del_from_group((int)arg, default_id)));
+}
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/stress_test/test_u2k_add_and_kill.c b/tools/testing/sharepool/testcase/stress_test/test_u2k_add_and_kill.c
new file mode 100644
index 000000000000..198c93afde76
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_u2k_add_and_kill.c
@@ -0,0 +1,358 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 27 13:45:03 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * group_num groups with process_per_group processes each; every grandchild process loops forever: sp_alloc -> u2k -> kernel read -> unshare -> sp_free.
+ * After spawning all grandchildren, the child process repeats kill_num times: kill a grandchild or create a new one, keeping the total at most process_per_group.
+ */
+
+#define MAX_GROUP 3000
+#define MAX_PROC_PER_GRP 1024
+#define MAX_KILL 100000
+
+static sem_t *child_sync[MAX_GROUP];
+static sem_t *grandchild_sync[MAX_GROUP];
+
+static int group_num = 100;
+static int process_per_group = 100;
+static int kill_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+ /* wait for the parent to add this process to the group */
+ do {
+ ret = sem_wait(child_sync[arg / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[arg / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_group_by_pid failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = alloc_size,
+ };
+
+ while (1) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return ret;
+ } else {
+ if (IS_ERR_VALUE(alloc_info.addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_info.addr);
+ return -1;
+ }
+ }
+ memset((void *)alloc_info.addr, 'r', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'r',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_local_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ // create all of this group's child processes in turn
+ for (int i = 0; i < process_per_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ // add the child to the group
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto out;
+ }
+
+ /* notify the child that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* wait for the child to pick up its group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+
+ /* randomly kill or create processes */
+ for (int i = 0; i < kill_num; i++) {
+ printf("kill/create %dth time.\n", i);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+ // kill the process in this slot
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ printf("We killed %dth process %d!\n", idx, childs[idx]);
+ childs[idx] = 0;
+ } else {
+ printf("We are going to create a new process.\n");
+ // this slot's process was killed; create a new one to fill it
+ int num = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+ /* notify the child that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* wait for the child to pick up its group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage:./test_multi_u2k2 -g group_num -p proc_num -n kill_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+ case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+ case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+ case 'n': // number of child kill/create operations
+ kill_num = atoi(optarg);
+ if (kill_num > MAX_KILL || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if ((long)alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[MAX_GROUP];
+
+ // create the per-group grandchild sync semaphores
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, 0644, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ // create the per-group child sync semaphores
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, 0644, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ // fork one child process per group
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ pr_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ // reap the per-group child processes
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "loop sp_alloc -> u2k -> kernel read -> unshare -> sp_free while killing processes and creating new ones that join the group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_all/Makefile b/tools/testing/sharepool/testcase/test_all/Makefile
new file mode 100644
index 000000000000..05243de93e4a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_all/Makefile
@@ -0,0 +1,8 @@
+test_all: test_all.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+install: test_all
+ cp $^ $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf test_all
diff --git a/tools/testing/sharepool/testcase/test_all/test_all.c b/tools/testing/sharepool/testcase/test_all/test_all.c
new file mode 100644
index 000000000000..3e0698f9b1f6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_all/test_all.c
@@ -0,0 +1,285 @@
+/*
+ * compile: gcc test_all.c sharepool_lib.so -o test_all -lpthread
+ */
+
+#include <sys/types.h>
+#include <unistd.h>
+#include <pthread.h>
+#include <stdio.h>
+
+#include "sharepool_lib.h"
+
+#define GROUP_ID 1
+#define TIMES 4
+int fd;
+void *thread(void *arg)
+{
+ int ret, group_id, i;
+ pid_t pid;
+ unsigned long invalid_addr = 0x30000;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[TIMES] = {0};
+ struct sp_make_share_info u2k_info[TIMES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ printf("enter thread, pid is %d, thread id is %lu\n\n", getpid(), pthread_self());
+
+ /* check sp group */
+ pid = getpid();
+ group_id = ioctl_find_first_group(fd, pid);
+ if (group_id != GROUP_ID) {
+ printf("query group id is %d, but expected group id is %d\n",
+ group_id, GROUP_ID);
+ goto error;
+ }
+
+ /* check invalid share pool addr */
+ judge_ret = ioctl_judge_addr(fd, invalid_addr);
+ if (judge_ret != false) {
+ printf("expect an invalid share pool addr\n");
+ goto error;
+ }
+
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = GROUP_ID;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = GROUP_ID;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = GROUP_ID;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = GROUP_ID;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ for (i = 0; i < TIMES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(fd, &alloc_info[i]);
+ if (ret < 0) {
+ printf("ioctl alloc failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ printf("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ printf("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ printf("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ printf("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(fd, &u2k_info[i]);
+ if (ret < 0) {
+ printf("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ printf("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ printf("u2k return addr %lx, check memory content succ.\n",
+ u2k_info[i].addr);
+ }
+ }
+ printf("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(fd, &vmalloc_info);
+ if (ret < 0) {
+ printf("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ printf("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = SPG_ID_DEFAULT;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = SPG_ID_DEFAULT;
+
+ /* k2u */
+ ret = ioctl_k2u(fd, &k2u_info);
+ if (ret < 0) {
+ printf("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ printf("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ printf("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(fd, &k2u_huge_info);
+ if (ret < 0) {
+ printf("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ printf("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ printf("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ printf("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		printf("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ printf("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		printf("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(fd, &k2u_info);
+ if (ret < 0) {
+ printf("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(fd, &k2u_huge_info);
+ if (ret < 0) {
+ printf("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(fd, &vmalloc_info);
+ if (ret < 0) {
+ printf("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ printf("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ for (i = 0; i < TIMES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(fd, &u2k_info[i]);
+ if (ret < 0) {
+ printf("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(fd, &alloc_info[i]);
+ if (ret < 0) {
+ printf("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ printf("\nfinish running thread\n");
+ pthread_exit((void *)0);
+
+error:
+ pthread_exit((void *)1);
+}
+
+int main(void) {
+ int ret = 0;
+ struct sp_add_group_info ag_info = {0};
+ pthread_t tid;
+ void *tret;
+
+ fd = open_device();
+ if (fd < 0) {
+ return -1;
+ }
+
+ pid_t pid = getpid();
+ ag_info.pid = pid;
+ ag_info.prot = PROT_READ | PROT_WRITE;
+ ag_info.spg_id = GROUP_ID;
+ ret = ioctl_add_group(fd, &ag_info);
+ if (ret < 0) {
+ close_device(fd);
+ return -1;
+ }
+ printf("ioctl add group pid is %d, spg_id is %d\n", pid, ag_info.spg_id);
+
+ ret = pthread_create(&tid, NULL, thread, NULL);
+ if (ret != 0)
+ printf("%s: create thread error\n", __func__);
+
+ ret = pthread_join(tid, &tret);
+ if (ret != 0)
+ printf("%s: can't join thread\n", __func__);
+
+ close_device(fd);
+
+ if ((long)tret != 0) {
+ printf("testcase execution failed\n");
+ return -1;
+ }
+ printf("testcase execution is successful\n");
+ return 0;
+}
+
diff --git a/tools/testing/sharepool/testcase/test_mult_process/Makefile b/tools/testing/sharepool/testcase/test_mult_process/Makefile
new file mode 100644
index 000000000000..9a20b0d1fa32
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/Makefile
@@ -0,0 +1,16 @@
+MODULEDIR:=mult_add_group_test mult_k2u_test mult_u2k_test mult_debug_test stress_test
+
+all:tooldir
+
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n; done
+install:
+ mkdir -p $(TOOL_BIN_DIR)/test_mult_process
+ cp test_proc_interface.sh $(TOOL_BIN_DIR)
+ cp test_proc_interface.sh $(TOOL_BIN_DIR)/test_mult_process
+ cp test_mult_process.sh $(TOOL_BIN_DIR)
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
+
+
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_add_multi_cases.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_add_multi_cases.c
new file mode 100644
index 000000000000..67398c8ac927
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_add_multi_cases.c
@@ -0,0 +1,255 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 25 08:21:16 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/sem.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * add_tasks_to_different_group  1000 children concurrently join different groups, then exit concurrently --- tests concurrent group joins
+ * add_tasks_to_auto_group       1000 children concurrently join different groups (group id auto-assigned), then exit concurrently --- tests SPG_ID_AUTO
+ * add_tasks_and_kill            100 children join the same group while group info is queried in a loop, then the children are killed one by one --- tests the group query interface
+ * addgroup_and_querygroup       fork a child that queries group info while the parent joins a random group, then the child exits; repeat 10000 times.
+ */
+
+/*
+ * Multiple processes concurrently join different dedicated groups (one process per group), then exit concurrently.
+ * (A background process keeps calling sp_group_id_by_pid on every pid.)
+ */
+#define TEST_ADDTASK_SEM_KEY 9834
+
+static int add_tasks_to_group_child(int semid, int group_id)
+{
+ int ret;
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+	struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group %d failed, errno: %d", group_id, errno);
+ ret = -1;
+ goto out;
+ }
+
+ if (group_id == SPG_ID_AUTO) {
+ if (ag_info.spg_id < SPG_ID_AUTO_MIN || ag_info.spg_id > SPG_ID_AUTO_MAX) {
+ pr_info("invalid spg_id returned: %d", ag_info.spg_id);
+ ret = -1;
+ goto out;
+ }
+ }
+
+out:
+	// signal that this process has finished joining its group
+ semop(semid, &sembuf, 1);
+	// wait until all children have finished joining
+ sembuf.sem_op = 0;
+ semop(semid, &sembuf, 1);
+
+ return ret;
+}
+
+#define NR_CHILD 1000
+static int add_tasks_to_group(int group_id)
+{
+ int i;
+ pid_t childs[NR_CHILD] = {0};
+ int semid = semget(TEST_ADDTASK_SEM_KEY, 1, IPC_CREAT | IPC_EXCL | 0644);
+ if (semid < 0) {
+ pr_info("open systemV semaphore failed, errno: %d", errno);
+ return -1;
+ }
+
+ int ret = semctl(semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto error_out;
+ }
+
+ for (i = 0; i < NR_CHILD; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ for (i--; i >= 0; i--) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ }
+ ret = -1;
+ goto error_out;
+ } else if (pid == 0) {
+ exit(add_tasks_to_group_child(semid, group_id == SPG_ID_AUTO ? group_id : i + 2));
+ }
+
+ childs[i] = pid;
+ }
+
+	// tell the children to start joining their groups
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = NR_CHILD * 2,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+	// wait for all children to finish joining
+ sembuf.sem_op = 0;
+ semop(semid, &sembuf, 1);
+
+ int status;
+ for (int i = 0; i < NR_CHILD; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child(pid:%d) exit unexpected", childs[i]);
+ ret = -1;
+ }
+ }
+
+error_out:
+ if (semctl(semid, 0, IPC_RMID) < 0)
+		pr_info("sem remove failed, %s", strerror(errno));
+ return ret;
+}
+
+/*
+ * Multiple processes concurrently join different dedicated groups (one process per group), then exit concurrently.
+ * (A background process keeps calling sp_group_id_by_pid on every pid.)
+ */
+static int add_tasks_to_different_group(void)
+{
+ return add_tasks_to_group(0);
+}
+
+/*
+ * Multiple processes concurrently join auto-assigned groups (SPG_ID_AUTO), then exit concurrently.
+ * (A background process keeps calling sp_group_id_by_pid on every pid.)
+ */
+static int add_tasks_to_auto_group(void)
+{
+ return add_tasks_to_group(SPG_ID_AUTO);
+}
+
+/*
+ * Multiple processes join the same group one by one, then are killed in join order.
+ */
+#define ADD_TASKS_AND_KILL_CHILD_NUM 100
+static int add_tasks_and_kill(void)
+{
+ int group_id = 234, ret = 0, i;
+ pid_t childs[ADD_TASKS_AND_KILL_CHILD_NUM] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ for (i = 0; i < ADD_TASKS_AND_KILL_CHILD_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ ret = -1;
+ goto out;
+ } else if (pid == 0) {
+ while (1) {
+ ioctl_find_first_group(dev_fd, getpid());
+ }
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add task(pid:%d) to group(id:%d) failed, errno: %d", pid, group_id, errno);
+ ret = -1;
+ goto out;
+ }
+ childs[i] = pid;
+ }
+
+out:
+ for (i--; i >= 0; i--) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ }
+
+ return ret;
+}
+
+/*
+ * Run group-join and group-query operations on the same pid concurrently.
+ */
+static int addgroup_and_querygroup(void)
+{
+ int ret = 0;
+
+ srand((unsigned int)time(NULL));
+ for (int i = 1; !ret && i < 10000; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ exit(ret);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = rand() % (SPG_ID_AUTO_MIN - 1) + 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d, pid: %d, spg_id: %d",
+ errno, pid, ag_info.spg_id);
+ }
+ waitpid(pid, NULL, 0);
+ }
+
+	// A join failure may be caused by the child having exited, so it is not treated as a verdict; this case is a stress test and only needs to run without anomalies.
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(add_tasks_to_different_group, "processes concurrently join different dedicated groups (one process per group), then exit concurrently")
+	TESTCASE_CHILD(add_tasks_to_auto_group, "processes concurrently join auto-assigned groups (SPG_ID_AUTO), then exit")
+	TESTCASE_CHILD(add_tasks_and_kill, "processes join the same group one by one, then are killed in join order")
+	TESTCASE_CHILD(addgroup_and_querygroup, "concurrently join a group and query groups for the same pid")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_alloc_add_and_kill.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_alloc_add_and_kill.c
new file mode 100644
index 000000000000..3cc88b6542bc
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_alloc_add_and_kill.c
@@ -0,0 +1,347 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Test flow:
+ * grandchild:
+ *	allocate memory 100 times in a row, then free it all, in a loop
+ * child:
+ *	keep killing or re-forking grandchild processes
+ * Options:
+ *	-n  number of times to kill or re-fork a grandchild
+ *	-p  processes per group
+ *	-g  number of share pool groups
+ *	-s  size of each grandchild allocation
+ */
+
+#define NR_GROUP 100
+#define MAX_PROC_PER_GRP 100
+#define NR_HOLD_AREAS 100
+
+static sem_t *child_sync[NR_GROUP];
+static sem_t *grandchild_sync[NR_GROUP];
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int kill_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int grandchild_id)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", grandchild_id, getpid(), ##args)
+ int ret;
+
+	/* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_group_by_pid failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[NR_HOLD_AREAS] = {0};
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ alloc_infos[i].flag = 0;
+ alloc_infos[i].spg_id = group_id;
+ alloc_infos[i].size = alloc_size;
+ }
+
+ int top = 0;
+ int count = 0;
+ while (1) {
+ struct sp_alloc_info *info = alloc_infos + top++;
+ ret = ioctl_alloc(dev_fd, info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return -1;
+ } else {
+ if (IS_ERR_VALUE(info->addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", info->addr);
+ return -1;
+ }
+ }
+
+ memset((void *)info->addr, 'z', info->size);
+
+ if (top == NR_HOLD_AREAS) {
+ top = 0;
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ ret = ioctl_free(dev_fd, &alloc_infos[i]);
+ if (ret < 0) {
+ pr_local_info("sp_free failed, %d", ret);
+ return ret;
+ }
+ }
+ pr_info("grandchild process id:%d finished %dth alloc-free-100times-run.", grandchild_id, ++count);
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_per_group; i++) {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+		} else if (pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ //pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else {
+ //pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+ }
+
+			/* tell the grandchild the group join succeeded */
+ sem_post(child_sync[arg]);
+
+			/* wait for the grandchild to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ pr_local_info("%dth sp group %d, create %d processes and add group success!!", arg, group_id, process_per_group);
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+	/* randomly kill or re-fork grandchildren */
+ for (int i = 0; i < kill_num; i++) {
+ sleep(1);
+ pr_info("group %d %dth interruption, %d times left.", group_id, i + 1, kill_num - i - 1);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ childs[idx] = 0;
+ } else {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+			} else if (pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+
+				/* tell the grandchild the group join succeeded */
+ sem_post(child_sync[arg]);
+
+				/* wait for the grandchild to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help(void)
+{
+	printf("Usage:\n");
+	printf("  -p <num>   processes per group (1..%d)\n", MAX_PROC_PER_GRP);
+	printf("  -g <num>   number of share pool groups (1..%d)\n", NR_GROUP);
+	printf("  -n <num>   times to kill/re-fork grandchildren (1..100000)\n");
+	printf("  -s <size>  allocation size per sp_alloc, in bytes\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of kill/re-fork iterations
+ kill_num = atoi(optarg);
+ if (kill_num > 100000 || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+			if ((long)alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "allocate memory 100 times in a row then free it, in a loop, while constantly killing and re-forking group member processes")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_max_group_per_process.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_max_group_per_process.c
new file mode 100644
index 000000000000..7218651368ee
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_max_group_per_process.c
@@ -0,0 +1,94 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Jun 08 06:47:40 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+#define PROCESS_NR 15
+#define THREAD_NUM 20
+#define MAX_GROUP_PER_PROCESS 3000
+/*
+ * Test steps: multiple processes, each with multiple threads, concurrently join groups with id SPG_ID_AUTO.
+ * Expected result: the total number of successful joins across one process's threads is 2999.
+ */
+static void *test2_thread(void *arg)
+{
+	int i, ret = 0;
+	struct sp_add_group_info ag_info = {
+		.pid = getpid(),
+		.prot = PROT_READ,
+	};
+
+	for (i = 0; i < MAX_GROUP_PER_PROCESS - 1; i++) {
+		ag_info.spg_id = SPG_ID_AUTO;
+		TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+	}
+
+out:
+	pr_info("thread%d: returned, %d groups have been added successfully", (int)(long)arg, i);
+	return (void *)(long)i;
+}
+
+static int testcase_route(int idx)
+{
+	int i, ret = 0, sum = 0;
+	void *val;
+	pthread_t th[THREAD_NUM];
+
+	for (i = 0; i < ARRAY_SIZE(th); i++)
+		TEST_CHECK(pthread_create(th + i, NULL, test2_thread, (void *)(long)(i + idx * THREAD_NUM)), out);
+
+	for (i = 0; i < ARRAY_SIZE(th); i++) {
+		TEST_CHECK(pthread_join(th[i], &val), out);
+		sum += (int)(long)val;
+	}
+
+ if (sum != MAX_GROUP_PER_PROCESS - 1) {
+ pr_info("MAX_GROUP_PER_PROCESS check failed, %d", sum);
+ return -1;
+ }
+
+out:
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret = 0, i;
+ pid_t pid[PROCESS_NR];
+
+ for (i = 0; i < ARRAY_SIZE(pid); i++)
+ FORK_CHILD_ARGS(pid[i], testcase_route(i));
+
+ for (i = 0; i < ARRAY_SIZE(pid); i++)
+ WAIT_CHILD_STATUS(pid[i], out);
+
+out:
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "threads of the same process concurrently join groups; expected total successful joins per process is 2999")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_alloc_and_add_group.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_alloc_and_add_group.c
new file mode 100644
index 000000000000..201278e884eb
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_alloc_and_add_group.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Feb 5 09:53:12 2021
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include "sharepool_lib.h"
+
+
+#define GROUP_ID 1
+#define REPEAT_TIMES 20
+#define PROC_NUM_1 20
+#define PROC_NUM_2 60
+#define PROT (PROT_READ | PROT_WRITE)
+
+static int testcase1_add_group(int i)
+{
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0)
+ return -1;
+
+ pr_info("%dth process%d add group success.", i, getpid());
+
+ return 0;
+}
+
+static int testcase1_idle(int i)
+{
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0)
+ return -1;
+
+ pr_info("%dth process%d add group success. start idling...",
+ i, getpid());
+
+	while (1) {
+
+ }
+ return 0;
+}
+
+static int testcase1_alloc_free(int idx)
+{
+ unsigned long addr[REPEAT_TIMES];
+ void *ret_addr;
+ int count = 0;
+ int ret = 0;
+
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0)
+ return -1;
+ pr_info("alloc-child %d add group success.", idx);
+
+ while (1) {
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+ ret_addr = wrap_sp_alloc(GROUP_ID, PMD_SIZE, 0);
+ if ((unsigned long)ret_addr == -1) {
+ pr_info("alloc failed!!!");
+ return -1;
+ }
+ addr[i] = (unsigned long)ret_addr;
+ }
+ pr_info("alloc-child %d alloc %dth time finished. start to free..",
+ idx, count);
+ sleep(3);
+
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+ ret = wrap_sp_free(addr[i]);
+ if (ret < 0) {
+ pr_info("free failed!!! errno: %d", ret);
+ return ret;
+ }
+ }
+ pr_info("alloc-child %d free %dth time finished. start to alloc..",
+ idx, count);
+ count++;
+ }
+
+ return 0;
+}
+
+/*
+ * After joining the group, multiple processes alloc continuously while new processes keep joining, to check for mmap conflicts.
+ */
+static int testcase1(void)
+{
+ int i;
+ int ret = 0;
+ pid_t child_idle[PROC_NUM_1];
+ pid_t child[PROC_NUM_2];
+ pid_t pid;
+
+ sleep(1);
+
+ for (i = 0; i < PROC_NUM_1; i++) {
+ FORK_CHILD_ARGS(child_idle[i], testcase1_idle(i));
+ }
+
+ pid = fork();
+ if (pid == 0)
+ exit(testcase1_alloc_free(0));
+
+ for (i = 0; i < PROC_NUM_2; i++) {
+ sleep(1);
+ FORK_CHILD_ARGS(child[i], testcase1_add_group(i));
+ }
+
+ for (i = 0; i < PROC_NUM_2; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+out:
+ for (i = 0; i < PROC_NUM_1; i++)
+ KILL_CHILD(child_idle[i]);
+ KILL_CHILD(pid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "processes keep allocating after joining the group while new processes join; expected to work normally")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_process_thread_exit.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_process_thread_exit.c
new file mode 100644
index 000000000000..d8e531139f0c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_process_thread_exit.c
@@ -0,0 +1,498 @@
+#include <stdlib.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define THREAD_NUM 50
+#define GROUP_NUM 50
+#define GROUP_BASE_ID 1
+
+static pthread_mutex_t mutex;
+static int add_success, add_fail;
+static int group_ids[GROUP_NUM];
+static int semid;
+
+static int query_group(int *group_num)
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ *group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else {
+ return 0;
+ }
+}
+
+#define TIMES 4
+void *thread_and_process_helper(int group_id)
+{
+ int ret, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[TIMES] = {0};
+ struct sp_make_share_info u2k_info[TIMES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+	// huge pages
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// huge pages, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal pages, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal pages
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < TIMES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ //pr_info("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		//pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		//pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+	/* unshare kva & sp_free */
+ for (i = 0; i < TIMES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ //close_device(dev_fd);
+ return 0;
+
+error:
+ //close_device(dev_fd);
+ return -1;
+}
+
+void *thread_query_and_work(void *arg)
+{
+ int ret = 0;
+ int group_num = 0, j = 0;
+
+	while (group_num < GROUP_NUM && j++ < 10) {
+ // query group
+ ret = query_group(&group_num);
+ if (ret < 0) {
+ pr_info("query_group failed.");
+ continue;
+ }
+ for (int i = 0; i < group_num; i++) {
+ ret = thread_and_process_helper(group_ids[i]);
+ if (ret != 0) {
+ pr_info("\nthread %lu finish running with error, spg_id: %d\n", pthread_self(), group_ids[i]);
+ pthread_exit((void *)1);
+ }
+ }
+ }
+
+ pthread_exit((void *)0);
+}
+
+void *thread_add_group(void *arg)
+{
+ int ret = 0;
+	for (int i = 1; i <= GROUP_NUM; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (pthread_mutex_lock(&mutex) != 0) {
+ pr_info("get pthread mutex failed.");
+ }
+ if (ret < 0)
+ add_fail++;
+ else
+ add_success++;
+ pthread_mutex_unlock(&mutex);
+ }
+ pthread_exit((void *)0);
+}
+
+static int process_routine(void)
+{
+ int ret = 0;
+ // threads for alloc and u2k k2u
+ pthread_t tid1[THREAD_NUM];
+ bool loop[THREAD_NUM];
+ for (int i = 0; i < THREAD_NUM; i++) {
+ loop[i] = false;
+ ret = pthread_create(tid1 + i, NULL, thread_query_and_work, (void *) (loop + i));
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+	// N threads each add M groups; of the N*M attempts, only M should succeed
+ pthread_t tid2[THREAD_NUM];
+ for (int j = 0; j < 1; j++) {
+ ret = pthread_create(tid2 + j, NULL, thread_add_group, NULL);
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+
+ // wait for add_group threads to return
+ for (int j = 0; j < 1; j++) {
+ void *tret;
+ ret = pthread_join(tid2[j], &tret);
+ if (ret < 0) {
+ pr_info("thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("thread %d failed.", j);
+ ret = -1;
+ } else {
+ pr_info("add group thread %d return success!!", j);
+ }
+ }
+
+ // wait for work threads to return
+ for (int i = 0; i < THREAD_NUM; i++) {
+ void *tret;
+ ret = pthread_join(tid1[i], &tret);
+ if (ret < 0) {
+ pr_info("thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("thread %d failed.", i);
+ ret = -1;
+ } else {
+ pr_info("work thread %d return success!!", i);
+ }
+ }
+
+ return ret;
+}
+
+#define PROCESS_NUM 20
+/* testcase1: multiple processes join multiple groups, routinely call all APIs, then exit concurrently */
+static int testcase1(void)
+{
+ int ret = 0;
+
+ const int semid = sem_create(1234, "concurrent");
+ // fork child processes, they should not copy parent's group
+ int childs[PROCESS_NUM];
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ ret = process_routine();
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+ exit(ret);
+ }
+ childs[k] = pid;
+ }
+
+ // wait until all processes finished add group thread and work thread
+ sem_dec_by_val(semid, PROCESS_NUM);
+
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ int status;
+ ret = waitpid(childs[k], &status, 0);
+ if (ret < 0) {
+ pr_info("waitpid failed");
+ ret = -1;
+ }
+ if (status != 0) {
+ pr_info("child process %d pid %d exit unexpected", k, childs[k]);
+ ret = -1;
+ }
+ childs[k] = 0;
+ }
+
+	// check /proc/sharepool/proc_stat: there should be only one spg left
+
+ sem_close(semid);
+	return ret < 0 ? -1 : 0;
+}
+
+void *thread_exit(void *arg)
+{
+ sem_check_zero(semid);
+ pthread_exit((void *)0);
+}
+
+/* testcase2: multiple processes join multiple groups, routinely call all APIs, then let their threads exit concurrently */
+static int testcase2(void)
+{
+ int ret;
+
+ semid = sem_create(1234, "concurrent");
+ // fork child processes, they should not copy parent's group
+ int childs[PROCESS_NUM];
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+
+ ret = process_routine(); // add group and work
+
+			// threads exit concurrently
+ pthread_t tid[THREAD_NUM];
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread_exit, NULL);
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+ sem_inc_by_one(semid);
+ for (int j = 0; j < THREAD_NUM; j++) {
+ void *tret;
+ ret = pthread_join(tid[j], &tret);
+ if (ret < 0) {
+ pr_info("exit thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("exit thread %d failed.", j);
+ ret = -1;
+ } else {
+ pr_info("exit thread %d return success!!", j);
+ }
+ }
+
+ exit(ret);
+ }
+ childs[k] = pid;
+ }
+
+ // wait until all processes finished add group thread and work thread
+ sleep(5);
+ sem_dec_by_val(semid, PROCESS_NUM);
+
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ int status;
+ ret = waitpid(childs[k], &status, 0);
+ if (ret < 0) {
+ pr_info("waitpid failed");
+ ret = -1;
+ }
+ if (status != 0) {
+ pr_info("child process %d pid %d exit unexpected", k, childs[k]);
+ ret = -1;
+ }
+ childs[k] = 0;
+ }
+
+	// check /proc/sharepool/proc_stat: there should be only one spg left
+
+ sem_close(semid);
+	return ret < 0 ? -1 : 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "multiple processes exit concurrently after finishing the routine")
+	TESTCASE_CHILD(testcase2, "multiple threads exit concurrently after finishing the routine")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_thread_add_group.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_thread_add_group.c
new file mode 100644
index 000000000000..7660978f7e08
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_thread_add_group.c
@@ -0,0 +1,220 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Feb 5 09:53:12 2021
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include "sharepool_lib.h"
+
+
+#define GROUP_ID 1
+#define REPEAT_TIMES 5
+#define THREAD_NUM 20
+
+/*
+ * testcase1: create 5 child processes, each spawning 20 threads that try to
+ * join a group; only one thread per child process is expected to succeed.
+ */
+
+static void *thread_add_group(void *arg)
+{
+ int ret = 0;
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret == 0)
+ return 0;
+ else if (ret < 0 && errno == EEXIST) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ ret = 1;
+ } else {
+ ret = -1;
+ }
+
+ return (void *)ret;
+}
+
+static int child_process(void)
+{
+ int i;
+ int group_id;
+ int ret = 0;
+ void *tret;
+ pthread_t tid[THREAD_NUM];
+ int failure = 0, success = 0;
+
+ for (i = 0; i < THREAD_NUM; i++) {
+ ret = pthread_create(tid + i, NULL, thread_add_group, NULL);
+ if (ret != 0) {
+ pr_info("create thread %d error\n", i);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < THREAD_NUM; i++) {
+ ret = pthread_join(tid[i], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", i);
+ return -1;
+ }
+ if ((long)tret == 0) {
+ success++;
+		} else if ((long)tret == 1) {
+ failure++;
+ } else {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", i);
+ return -1;
+ }
+ }
+
+ pid_t pid = getpid();
+ group_id = ioctl_find_first_group(dev_fd, pid);
+ if (group_id != GROUP_ID) {
+ printf("query group id is %d, but expected group id is %d\n",
+ group_id, GROUP_ID);
+ return -1;
+ }
+
+ if (success != 1 || success + failure != THREAD_NUM) {
+ pr_info("testcase failed, success %d times, fail %d times",
+ success, failure);
+ return -1;
+ }
+
+ return 0;
+}
+
+/* join a group and then exit the group */
+static int child_process_multi_add_exit(void)
+{
+ int ret = 0;
+ int cnt = 0;
+	while (cnt < 600) {
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ sleep(1);
+
+ struct sp_del_from_group_info info = {
+ .pid = pid,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_del_from_group(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("ioctl_del_group failed, errno: %d", errno);
+ }
+
+ cnt++;
+ }
+ return 0;
+}
+
+/*
+ * A single process spawns a batch of threads that concurrently try to join
+ * the same share group; one thread should succeed and the rest should
+ * return -EEXIST.
+ */
+static int testcase1(void)
+{
+ int i;
+ int ret = 0;
+ pid_t pid;
+
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child_process());
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("thread add group test, time %d failed", i + 1);
+ ret = -1;
+ break;
+ } else {
+ pr_info("thread add group test, time %d success", i + 1);
+ }
+ }
+
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int i;
+ int ret = 0;
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ for (i = 0; i < 999; i++) {
+ pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+
+ exit(child_process_multi_add_exit());
+ }
+ }
+
+	int status;
+
+	/* reap all forked children, not just the first one to exit */
+	while (wait(&status) > 0)
+		;
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "multiple threads of one process concurrently join the same share group; only 1 should succeed")
+	TESTCASE_CHILD(testcase2, "multiple threads of one process concurrently join and leave the same share group while querying the /proc_show interface")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_u2k_add_and_kill.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_u2k_add_and_kill.c
new file mode 100644
index 000000000000..dd42e674f402
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_u2k_add_and_kill.c
@@ -0,0 +1,358 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 27 13:45:03 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * group_num groups with process_per_group processes each. Every grandchild
+ * process loops forever: sp_alloc -> u2k -> kernel reads memory -> unshare -> sp_free.
+ * After spawning all grandchildren, the child process repeats kill_num times:
+ * kill a grandchild or create a new one, keeping the total within process_per_group.
+ */
+
+#define MAX_GROUP 500
+#define MAX_PROC_PER_GRP 500
+#define MAX_KILL 100000
+
+static sem_t *child_sync[MAX_GROUP];
+static sem_t *grandchild_sync[MAX_GROUP];
+
+static int group_num = 100;
+static int process_per_group = 100;
+static int kill_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+	/* wait for the parent process to add us to the group */
+ do {
+ ret = sem_wait(child_sync[arg / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[arg / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+		pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = alloc_size,
+ };
+
+ while (1) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return ret;
+ } else {
+ if (IS_ERR_VALUE(alloc_info.addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_info.addr);
+ return -1;
+ }
+ }
+ memset((void *)alloc_info.addr, 'r', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'r',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_local_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+	// create all of this group's child processes in turn
+ for (int i = 0; i < process_per_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+			// add the child process to the group
+			ag_info.pid = pid;
+			ret = ioctl_add_group(dev_fd, &ag_info);
+			if (ret < 0) {
+				pr_local_info("add grandchild%d to group %d failed", num, group_id);
+				goto out;
+			}
+
+			/* notify the child that it joined the group */
+			sem_post(child_sync[arg]);
+
+			/* wait for the child to pick up the group info */
+			do {
+				ret = sem_wait(grandchild_sync[arg]);
+			} while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+
+	/* randomly kill or create processes */
+ for (int i = 0; i < kill_num; i++) {
+ printf("kill/create %dth time.\n", i);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+			// kill the process occupying this slot
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ printf("We killed %dth process %d!\n", idx, childs[idx]);
+ childs[idx] = 0;
+ } else {
+ printf("We are going to create a new process.\n");
+			// this slot's process was killed; create a new one to fill it
+ int num = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+				/* notify the child that it joined the group */
+ sem_post(child_sync[arg]);
+
+				/* wait for the child to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_u2k_add_and_kill -g group_num -p proc_num -n kill_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of times to kill or create child processes
+ kill_num = atoi(optarg);
+ if (kill_num > MAX_KILL || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+			if ((long)alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[MAX_GROUP];
+
+	// create per-group sync semaphores (grandchild side)
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+		grandchild_sync[i] = sem_open(buf, O_CREAT, 0600, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// create per-group sync semaphores (child side)
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+		child_sync[i] = sem_open(buf, O_CREAT, 0600, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// fork one child process per group
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ pr_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+	// reap each group's child process
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "loop sp_alloc -> u2k -> kernel reads memory -> unshare -> sp_free while killing processes / creating new ones and adding them to the group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_add_group_and_print.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_add_group_and_print.c
new file mode 100644
index 000000000000..3a070bc514ec
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_add_group_and_print.c
@@ -0,0 +1,182 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Execution flow:
+ * child0 & child1: repeatedly cat /proc/sharepool/proc_stat and spa_stat, expecting success
+ * child2 & child3: repeatedly create new processes and add them to groups, expecting success and no deadlock
+ * proc_work: a newly added process runs all features after joining the group
+ */
+
+#define PROC_ADD_GROUP 200
+#define PROT_ADD_GROUP (PROT_READ | PROT_WRITE)
+
+int proc_work(void)
+{
+	while (1)
+		;	/* spin until killed by the parent */
+}
+
+int start_printer(void)
+{
+	while (1) {
+ pr_info("printer working.");
+ usleep(1000);
+ sharepool_log("sharepool_log");
+ // sharepool_print();
+ }
+ return 0;
+}
+
+int start_creator(int spg_id)
+{
+ int ret = 0;
+ int i = 0;
+ int pid = 0;
+ pid_t child[PROC_ADD_GROUP];
+ memset(child, 0, sizeof(pid_t) * PROC_ADD_GROUP);
+
+	// join the group
+ TEST_CHECK(wrap_add_group(getpid(), PROT_ADD_GROUP, spg_id), out);
+
+	// spawn child processes and add them to the group
+ for (i = 0; i < PROC_ADD_GROUP;) {
+ FORK_CHILD_ARGS(pid, proc_work());
+ child[i++] = pid;
+ TEST_CHECK(wrap_add_group(pid, PROT_ADD_GROUP, spg_id), out);
+ pr_info("%d th group: process %d add success", spg_id, i);
+ }
+out:
+ for (int j = 0; j < i; j++)
+ KILL_CHILD(child[j]);
+
+ pr_info("%s exiting. ret: %d", __FUNCTION__, ret);
+ return ret < 0 ? ret : 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int pid;
+ int status;
+ pid_t child[4];
+
+	// spawn the printer processes
+ for (int i = 0; i < 2; i++) {
+ FORK_CHILD_ARGS(pid, start_printer());
+ child[i] = pid;
+ }
+
+	// spawn the group-joining processes
+ for (int i = 2; i < 4; i++) {
+ pr_info("creating process");
+ FORK_CHILD_ARGS(pid, start_creator(i)); // use i as spg_id
+ child[i] = pid;
+ }
+
+	// reap the processes
+ for (int i = 2; i < 4; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+ pr_info("creators finished.");
+out:
+ for (int i = 0; i < 2; i++) {
+ kill(child[i], SIGKILL);
+ waitpid(child[i], &status, 0);
+ }
+ pr_info("printers killed.");
+
+ return ret;
+}
+#define GROUP_NUM 1000
+int start_group_creators(int use_auto)
+{
+ int ret;
+ int id = 1;
+
+ if (use_auto % 2 == 0) {
+ while (id <= GROUP_NUM) {
+ TEST_CHECK(wrap_add_group(getpid(), PROT_ADD_GROUP, id), out);
+ pr_info("create group %d success\n", ret);
+ id++;
+ }
+ } else {
+ while (id <= GROUP_NUM) {
+ TEST_CHECK(wrap_add_group(getpid(), PROT_ADD_GROUP, SPG_ID_AUTO), out);
+ pr_info("create group %d success\n", ret);
+ id++;
+ }
+ }
+
+out:
+	return ret < 0 ? ret : 0;
+}
+
+static int testcase2(void)
+{
+ int ret = 0;
+ int pid;
+ int status;
+ pid_t child[4];
+
+	// spawn the printer processes
+ for (int i = 0; i < 2; i++) {
+ FORK_CHILD_ARGS(pid, start_printer());
+ child[i] = pid;
+ }
+
+	// spawn the group-creating processes
+ for (int i = 2; i < 4; i++) {
+ pr_info("creating add group process");
+		FORK_CHILD_ARGS(pid, start_group_creators(i)); // even: fixed id, odd: auto id
+ child[i] = pid;
+ }
+
+	// reap the processes
+ for (int i = 2; i < 4; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+ pr_info("group creators finished.");
+
+out:
+ for (int i = 0; i < 2; i++) {
+ kill(child[i], SIGKILL);
+ waitpid(child[i], &status, 0);
+ }
+ pr_info("printers killed.");
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "group join racing with debug printing (fixed group IDs)")
+	TESTCASE_CHILD(testcase2, "group join racing with debug printing (auto IDs)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_concurrent_debug.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_concurrent_debug.c
new file mode 100644
index 000000000000..09cbe67d19cd
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_concurrent_debug.c
@@ -0,0 +1,359 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Execution flow:
+ * grandchild:
+ *	allocate memory 100 times in a row, free it all, then loop
+ * child:
+ *	keep killing or creating grandchild processes
+ * Options:
+ *	-n number of times to kill or create grandchild processes
+ *	-p processes per group
+ *	-g number of sharepool groups
+ *	-s size of each allocation made by a grandchild process
+ */
+
+#define NR_GROUP 100
+#define MAX_PROC_PER_GRP 100
+#define NR_HOLD_AREAS 100
+
+static sem_t *child_sync[NR_GROUP];
+static sem_t *grandchild_sync[NR_GROUP];
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int kill_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int grandchild_id)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", grandchild_id, getpid(), ##args)
+ int ret;
+
+	/* wait for the parent process to add us to the group */
+ do {
+ ret = sem_wait(child_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+		pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[NR_HOLD_AREAS] = {0};
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ alloc_infos[i].flag = 0;
+ alloc_infos[i].spg_id = group_id;
+ alloc_infos[i].size = alloc_size;
+ }
+
+ int top = 0;
+ int count = 0;
+ while (1) {
+ struct sp_alloc_info *info = alloc_infos + top++;
+ ret = ioctl_alloc(dev_fd, info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return -1;
+ } else {
+ if (IS_ERR_VALUE(info->addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", info->addr);
+ return -1;
+ }
+ }
+
+ memset((void *)info->addr, 'z', info->size);
+
+ if (top == NR_HOLD_AREAS) {
+ top = 0;
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ ret = ioctl_free(dev_fd, &alloc_infos[i]);
+ if (ret < 0) {
+ pr_local_info("sp_free failed, %d", ret);
+ return ret;
+ }
+ }
+ pr_info("grandchild process id:%d finished %dth alloc-free-100times-run.", grandchild_id, ++count);
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_per_group; i++) {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ //pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else {
+ //pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+ }
+
+			/* notify the child that it joined the group */
+ sem_post(child_sync[arg]);
+
+			/* wait for the child to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ pr_local_info("%dth sp group %d, create %d processes and add group success!!", arg, group_id, process_per_group);
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+	/* randomly kill or create processes */
+ for (int i = 0; i < kill_num; i++) {
+ sleep(1);
+ pr_info("group %d %dth interruption, %d times left.", group_id, i + 1, kill_num - i - 1);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ childs[idx] = 0;
+ } else {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+
+					/* notify the child that it joined the group */
+ sem_post(child_sync[arg]);
+
+					/* wait for the child to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage:\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of kill/re-create iterations
+ kill_num = atoi(optarg);
+ if (kill_num > 100000 || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+			printf("unsupported option: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+ pid_t pid;
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ pid = fork();
+ if (pid == 0) {
+ while(1) {
+ usleep(200);
+ sharepool_log("sharepool_log");
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+		grandchild_sync[i] = sem_open(buf, O_CREAT, 0644, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+		child_sync[i] = sem_open(buf, O_CREAT, 0644, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, &status, 0);
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Continuously allocate and free memory while printing debug info in a loop")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_debug_loop.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_debug_loop.c
new file mode 100644
index 000000000000..0c4368244bf9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_debug_loop.c
@@ -0,0 +1,43 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ while(1) {
+ usleep(100);
+ sharepool_print();
+ }
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Debug printing (infinite loop; do not run standalone)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_proc_interface_process.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_proc_interface_process.c
new file mode 100644
index 000000000000..fa10c06f1577
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_proc_interface_process.c
@@ -0,0 +1,636 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sun Jan 31 14:42:01 2021
+ */
+#include <pthread.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+
+#define GROUP_ID 1
+#define TIMES 4
+
+#define REPEAT_TIMES 10
+#define THREAD_NUM 30
+#define PROCESS_NUM 30
+
+void *thread_k2u_task(void *arg)
+{
+ int fd, ret;
+ pid_t pid = getpid();
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ struct sp_make_share_info k2u_info = {0}, k2u_huge_info = {0};
+ char *addr;
+
+ //pr_info("enter thread, pid is %d, thread id is %lu\n\n", pid, pthread_self());
+
+ fd = open_device();
+ if (fd < 0) {
+ pr_info("open fd error\n");
+ return NULL;
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = SP_DVPP;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = SPG_ID_DEFAULT;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = 0;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = SPG_ID_DEFAULT;
+
+ /* k2u */
+ ret = ioctl_k2u(fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		//pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		//pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ //pr_info("\nfinish running thread %lu\n", pthread_self());
+ close_device(fd);
+ pthread_exit((void *)0);
+
+error:
+ close_device(fd);
+ pthread_exit((void *)1);
+}
+
+/*
+ * alloc - u2k - vmalloc - k2u - unshare - vfree - unshare - free
+ * Test huge pages, huge-page DVPP, small pages, and small-page DVPP.
+ */
+void *thread_and_process_helper(void)
+{
+ int fd, ret, group_id, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[TIMES] = {0};
+ struct sp_make_share_info u2k_info[TIMES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ fd = open_device();
+ if (fd < 0) {
+ pr_info("open fd error\n");
+ return NULL;
+ }
+
+ /* check sp group */
+ pid = getpid();
+ group_id = ioctl_find_first_group(fd, pid);
+ if (group_id != GROUP_ID) {
+ pr_info("query group id is %d, but expected group id is %d\n",
+ group_id, GROUP_ID);
+ goto error;
+ }
+
+	// huge pages
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = GROUP_ID;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// huge pages, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = GROUP_ID;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal pages, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = GROUP_ID;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal pages
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = GROUP_ID;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ for (i = 0; i < TIMES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ //pr_info("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = SPG_ID_DEFAULT;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = SPG_ID_DEFAULT;
+
+ /* k2u */
+ ret = ioctl_k2u(fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		//pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		//pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ for (i = 0; i < TIMES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ close_device(fd);
+ return 0;
+
+error:
+ close_device(fd);
+ return -1;
+}
+
+void *thread(void *arg)
+{
+ int ret;
+ //pr_info("enter thread, pid is %d, thread id is %lu\n\n", getpid(), pthread_self());
+
+ ret = thread_and_process_helper();
+ if (ret == 0) {
+ //pr_info("\nfinish running thread %lu\n", pthread_self());
+ pthread_exit((void *)0);
+ } else {
+		pr_info("\nthread %lu finished running with an error\n", pthread_self());
+ pthread_exit((void *)1);
+ }
+}
+
+static int child_process()
+{
+ int fd, ret;
+ struct sp_add_group_info ag_info = {0};
+ pid_t pid = getpid();
+
+ //pr_info("enter process %d\n\n", pid);
+
+ fd = open_device();
+ if (fd < 0) {
+ return -1;
+ }
+
+ ag_info.pid = pid;
+ ag_info.spg_id = GROUP_ID;
+ ag_info.prot = PROT_READ | PROT_WRITE;
+ ret = ioctl_add_group(fd, &ag_info);
+ if (ret < 0) {
+ close_device(fd);
+ return -1;
+ }
+
+ //pr_info("ioctl add group pid is %d, spg_id is %d\n", pid, ag_info.spg_id);
+ ret = thread_and_process_helper();
+
+ close_device(fd);
+ if (ret == 0) {
+ //pr_info("\nfinish running process %d\n", pid);
+ return 0;
+ } else {
+		pr_info("\nprocess %d finished running with an error\n", pid);
+ return -1;
+ }
+ return 0;
+}
+
+static int testcase1(void) {
+ int fd;
+ int ret = 0;
+ int status = 0;
+ int sleep_interval = 3;
+ int i, j, k;
+ struct sp_add_group_info ag_info = {0};
+ pthread_t tid[THREAD_NUM];
+ void *tret;
+ pid_t pid_child;
+ pid_t childs[PROCESS_NUM];
+
+	// Create THREAD_NUM threads running the k2u-only task; repeat REPEAT_TIMES times.
+ pr_info("\nmain process begins thread_k2u_task test\n");
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("thread k2task %dth test, %d times in total.", i + 1, REPEAT_TIMES);
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread_k2u_task, NULL);
+ if (ret != 0) {
+ pr_info("create thread %d error\n", j);
+ goto finish;
+ }
+ }
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_join(tid[j], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", j);
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+ sleep(sleep_interval);
+ }
+ if (ret) {
+ goto finish;
+ }
+ pr_info("\nthread_k2task test success!!\n");
+ sleep(3);
+
+ fd = open_device();
+ if (fd < 0) {
+ pr_info("open fd error\n");
+		return -1;
+ }
+
+ // add group
+ pid_t pid = getpid();
+ ag_info.pid = pid;
+ ag_info.spg_id = GROUP_ID;
+ ag_info.prot = PROT_READ | PROT_WRITE;
+ ret = ioctl_add_group(fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", ret);
+ close_device(fd);
+ return -1;
+ }
+ pr_info("\nioctl add group pid is %d, spg_id is %d\n", pid, ag_info.spg_id);
+
+	// Create THREAD_NUM threads running the mixed u2k+k2u task; repeat REPEAT_TIMES times.
+ pr_info("\nmain process begins thread test\n");
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("thread u2k+k2u %dth test, %d times in total.", i + 1, REPEAT_TIMES);
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ ret = -1;
+ goto finish;
+ }
+ }
+
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_join(tid[j], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", j);
+ }
+			if ((long)tret != 0) {
+				pr_info("testcase execution failed, thread %d exited unexpectedly\n", j); ret = -1;
+			}
+ }
+
+ sleep(sleep_interval);
+ }
+ if (ret) {
+ goto finish;
+ }
+ pr_info("\nthread u2k+k2u test success!!\n");
+ sleep(3);
+
+	// Fork PROCESS_NUM processes running the mixed u2k+k2u task; repeat REPEAT_TIMES times.
+ pr_info("\nmain process begins process test\n");
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("process u2k+k2u %dth test, %d times in total.", i + 1, REPEAT_TIMES);
+ for (j = 0; j < PROCESS_NUM; j++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+ ret = child_process();
+ exit(ret);
+ } else {
+ childs[j] = pid_child;
+ pr_info("fork child%d, pid: %d", j, pid_child);
+ }
+ }
+
+ for (int j = 0; j < PROCESS_NUM; j++) {
+ waitpid(childs[j], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", j, status);
+ ret = -1;
+ }
+ }
+
+ sleep(sleep_interval);
+ }
+ if (ret) {
+ goto finish;
+ }
+ pr_info("\nprocess u2k+k2u test success!!\n");
+ sleep(3);
+
+	// Fork PROCESS_NUM processes and create THREAD_NUM threads running the mixed u2k+k2u task; repeat REPEAT_TIMES times.
+ pr_info("\nmain process begins process and thread mix test\n");
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("process+thread u2k+k2u %dth test, %d times in total.", i + 1, REPEAT_TIMES);
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ ret = -1;
+ goto finish;
+ }
+ }
+
+ for (k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+ ret = child_process();
+ exit(ret);
+ } else {
+ childs[k] = pid_child;
+ pr_info("fork child%d, pid: %d", k, pid_child);
+ }
+ }
+
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_join(tid[j], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", j);
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ waitpid(childs[k], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", k, status);
+ ret = -1;
+ }
+ }
+
+ sleep(sleep_interval);
+ }
+ pr_info("\nprocess+thread u2k+k2u test success!!\n");
+ sleep(3);
+
+finish:
+ if (!ret) {
+ pr_info("testcase execution is successful\n");
+ } else {
+ pr_info("testcase execution failed\n");
+ }
+	return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Multi-process/multi-thread loop of alloc - u2k - vmalloc - k2u - unshare - vfree - unshare - free (huge pages/small pages/DVPP)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_statistics_stress.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_statistics_stress.c
new file mode 100644
index 000000000000..00da4ca55ac0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_statistics_stress.c
@@ -0,0 +1,302 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <errno.h>
+#include <fcntl.h> /* For O_* constants */
+#include <semaphore.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/stat.h> /* For mode constants */
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int semid;
+#define PROC_NUM 1023
+#define GROUP_NUM 2999
+
+// A process joins 2999 share groups and allocates DVPP memory, then checks spa_stat; expected to succeed
+static int testcase1(void)
+{
+ int ret = 0, i, spg_id;
+ struct sp_alloc_info alloc_info[GROUP_NUM + 1];
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+	// Join 2999 share groups (plus the default group) and allocate memory in each
+ for (i = 0; i < GROUP_NUM + 1; i++) {
+		alloc_info[i].flag = SP_DVPP | SP_HUGEPAGE;
+		alloc_info[i].size = 4;
+		alloc_info[i].spg_id = i;
+		/* 1. userspace process A joins the group */
+ if (i != 0) {
+ ag_info.spg_id = i;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+ }
+		/* 2. allocate DVPP shared memory */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ pr_info("Alloc DVPP memory finished.\n");
+
+	// Dump /proc/sharepool/spa_stat
+ cat_attr("/proc/sharepool/spa_stat");
+ sleep(6);
+
+	// Free all allocated memory
+ for (i = 0; i < GROUP_NUM + 1; i++) {
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+			pr_info("ioctl_free failed, errno: %d", errno);
+ return -1;
+ }
+ }
+ pr_info("Free DVPP memory finished.\n");
+
+	// Dump /proc/sharepool/spa_stat again
+ cat_attr("/proc/sharepool/spa_stat");
+ sleep(3);
+
+ return 0;
+}
+
+static int addgroup(int index)
+{
+ int ret = 0;
+ unsigned long addr;
+ unsigned long size = 4096;
+
+ for (int i = 1; i < GROUP_NUM + 1; i++) {
+ /* 1、用户态进程A加组 */
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i);
+ if (ret < 0) {
+ pr_info("process %d failed add group %d, ret: %d, errno: %d",
+ getpid(), i, ret, errno);
+ sem_inc_by_one(semid);
+ return -1;
+ }
+ }
+	pr_info("process %d added %d groups successfully", index, GROUP_NUM);
+ sem_inc_by_one(semid);
+ while (1) {
+
+ }
+
+ return 0;
+}
+
+
+//#define PROC_NUM 1023
+//#define GROUP_NUM 2999
+
+/* N processes join 2999 groups and allocate DVPP memory, then dump proc_stat; expected to succeed.
+ * Do not make N too large, or the test becomes slow. */
+static int testcase2(void)
+{
+ int ret = 0, spg_id = 1;
+ struct sp_alloc_info alloc_info[GROUP_NUM];
+ pid_t child[PROC_NUM];
+ unsigned long size = 4096;
+ int proc_num = 1000;
+
+ semid = sem_create(2345, "wait all child add group finish");
+	// N-1 child processes join the groups
+ for (int i = 0; i < proc_num - 1; i++)
+ FORK_CHILD_ARGS(child[i], addgroup(i));
+ sem_dec_by_val(semid, proc_num - 1);
+	pr_info("all child processes joined all groups.\n");
+	// The test process joins the groups, then allocates
+ // 测试进程加组后申请
+ for (int i = 1; i < GROUP_NUM + 1; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i);
+ if (ret < 0)
+ goto out;
+ }
+ ret = 0;
+	pr_info("main process joined all groups.\n");
+
+ for (int i = 1; i < GROUP_NUM + 1; i++) {
+ wrap_sp_alloc(i, size, SP_DVPP);
+ }
+	pr_info("main process allocated in all groups successfully");
+
+	// Wait a moment so every process maps the memory
+	pr_info("Letting all children map the group memory...");
+ sleep(5);
+
+	// Dump /proc/sharepool/proc_stat
+ cat_attr("/proc/sharepool/proc_stat");
+
+ for (int i = 0; i < proc_num; i++)
+ KILL_CHILD(child[i]);
+
+ sleep(5);
+	// Dump /proc/sharepool/proc_stat again
+ cat_attr("/proc/sharepool/proc_stat");
+
+out:
+ sem_close(semid);
+ return ret;
+}
+
+/* 1023 processes join one share group and allocate DVPP memory, then dump proc_overview; expected to succeed */
+static int testcase3(void)
+{
+ int ret, spg_id = 1;
+ unsigned long addr;
+ pid_t child[PROC_NUM];
+ pid_t pid;
+ unsigned long size = 4096;
+
+ semid = sem_create(1234, "1023childs");
+
+	// Fork N child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid == 0) {
+			/* the child joins the group */
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("child %d add group failed!", i);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+			/* the child allocates memory */
+ if (i == 0) {
+ addr = (unsigned long)wrap_sp_alloc(spg_id, size, SP_DVPP);
+ if (addr == -1) {
+ pr_info("child %d alloc dvpp memory failed!", i);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ }
+ sem_inc_by_one(semid);
+			pr_info("child %d created successfully!", i + 1);
+			/* the child idles */
+ while (1) {
+
+ }
+ exit(0);
+ }
+ child[i] = pid;
+ }
+
+	/* wait for all N children to join the group */
+ sem_dec_by_val(semid, PROC_NUM);
+ sleep(8);
+	/* dump the stats; expect N processes, each showing SP_RES equal to size */
+ cat_attr("/proc/sharepool/proc_overview");
+
+	/* kill the children */
+ for (int i = 0; i < PROC_NUM; i++) {
+ KILL_CHILD(child[i]);
+ pr_info("child %d killed", i + 1);
+ }
+	pr_info("All child processes killed.\n");
+
+	// Dump /proc/sharepool/proc_overview again; expect it to show nothing
+ cat_attr("/proc/sharepool/proc_overview");
+ sleep(5);
+
+ sem_close(semid);
+ return 0;
+}
+
+// A process joins 2999 share groups and allocates DVPP memory, then checks /proc/<pid>/sp_group; expected to succeed
+static int testcase4(void)
+{
+ int ret = 0, i, spg_id;
+ char attr[SIZE];
+ char cpid[SIZE];
+ struct sp_alloc_info alloc_info[GROUP_NUM + 1];
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+	// Join 2999 share groups (plus the default group) and allocate memory in each
+ for (i = 0; i < GROUP_NUM + 1; i++) {
+		alloc_info[i].flag = SP_DVPP | SP_HUGEPAGE;
+		alloc_info[i].size = 4;
+		alloc_info[i].spg_id = i;
+		/* 1. userspace process A joins the group */
+ if (i != 0) {
+ ag_info.spg_id = i;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+ }
+		/* 2. allocate DVPP shared memory */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ pr_info("Alloc DVPP memory finished.\n");
+ sleep(6);
+
+	// Build and dump /proc/<pid>/sp_group; strcpy() first so attr starts from an initialized string
+	strcpy(attr, "/proc/");
+	sprintf(cpid, "%d", getpid());
+	strcat(attr, cpid);
+	strcat(attr, "/sp_group");
+ pr_info("attribute is %s", attr);
+ cat_attr(attr);
+
+	// Free all allocated memory
+ for (i = 0; i < GROUP_NUM + 1; i++) {
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+			pr_info("ioctl_free failed, errno: %d", errno);
+ return -1;
+ }
+ }
+ pr_info("Free DVPP memory finished.\n");
+ sleep(3);
+
+	// Dump /proc/<pid>/sp_group again
+ cat_attr(attr);
+
+ return 0;
+}
+
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "One process joins 2999 share groups and allocates DVPP shared memory; cat /proc/sharepool/spa_stat to check the allocations; the dump is expected to print normally.")
+	TESTCASE_CHILD(testcase2, "Multiple processes join 2999 share groups and allocate DVPP shared memory; cat /proc/sharepool/proc_stat to check the allocations; the dump may hit a softlockup warning but completes after a while.")
+	TESTCASE_CHILD(testcase3, "1023 processes join the same share group and allocate DVPP shared memory; cat /proc/sharepool/proc_overview to check the allocations; the dump is expected to print normally.")
+	TESTCASE_CHILD(testcase4, "One process joins 2999 share groups and allocates DVPP shared memory; cat /proc/<pid>/sp_group to check the allocations; the dump is expected to print normally.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_k2u.c b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_k2u.c
new file mode 100644
index 000000000000..1d7ff1116631
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_k2u.c
@@ -0,0 +1,855 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 02:09:42 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/shm.h>
+#include <sys/sem.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include "sem_use.h"
+
+#include "sharepool_lib.h"
+
+
+
+/*
+ * After the kernel allocates shared memory, k2u shares it into a single userspace process, whose threads read it concurrently through the uva.
+ */
+#define TEST1_THREAD_NUM 100
+#define TEST1_K2U_NUM 100
+static struct sp_make_share_info testcase1_k2u_info[TEST1_K2U_NUM];
+
+static void *testcase1_thread(void *arg)
+{
+ for (int i = 0; i < TEST1_K2U_NUM; i++) {
+ char *buf = (char *)testcase1_k2u_info[i].addr;
+ for (int j = 0; j < testcase1_k2u_info[i].size; j++)
+ if (buf[j] != 'm') {
+ pr_info("area check failed, i:%d, j:%d, buf[j]:%d", i, j, buf[j]);
+ return (void *)-1;
+ }
+ }
+
+ return NULL;
+}
+
+static int testcase1(void)
+{
+ int ret, i, j;
+ void *thread_ret = NULL;
+ pthread_t threads[TEST1_THREAD_NUM];
+
+ struct vmalloc_info ka_info = {
+ .size = 3 * PAGE_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vmalloc failed");
+ return -1;
+ };
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'm',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out_free;
+ }
+
+ for (i = 0; i < TEST1_K2U_NUM; i++) {
+		testcase1_k2u_info[i].kva = ka_info.addr;
+		testcase1_k2u_info[i].size = ka_info.size;
+		testcase1_k2u_info[i].spg_id = SPG_ID_DEFAULT;
+		testcase1_k2u_info[i].sp_flags = 0;
+		testcase1_k2u_info[i].pid = getpid();
+
+ ret = ioctl_k2u(dev_fd, testcase1_k2u_info + i);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_unshare;
+ }
+ }
+
+ for (j = 0; j < TEST1_THREAD_NUM; j++) {
+ ret = pthread_create(threads + j, NULL, testcase1_thread, NULL);
+		if (ret != 0) {
+ pr_info("pthread create failed");
+ goto out_pthread_join;
+ }
+ }
+
+out_pthread_join:
+ for (j--; j >= 0; j--) {
+ pthread_join(threads[j], &thread_ret);
+ if (thread_ret != NULL) {
+			pr_info("child thread%d exited unexpectedly", j + 1);
+ ret = -1;
+ }
+ }
+out_unshare:
+ for (i--; i >= 0; i--)
+ if (ioctl_unshare(dev_fd, testcase1_k2u_info + i) < 0) {
+ pr_info("ioctl_unshare failed");
+ ret = -1;
+ }
+out_free:
+ ioctl_vfree(dev_fd, &ka_info);
+
+	return ret ? -1 : 0;
+}
+
+/*
+ * After the kernel allocates shared memory, k2u shares it into a share group containing many processes; every process in the group reads and writes it concurrently through the uva.
+ */
+#define TEST2_CHILD_NUM 100
+#define TEST2_SEM_KEY 98224
+#define TEST2_SHM_KEY 98229
+#define TEST2_K2U_NUM 100
+
+static int testcase2_shmid;
+static int testcase2_semid;
+
+static int testcase2_child(int idx)
+{
+ int ret = 0;
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(testcase2_semid, &sembuf, 1);
+
+ struct sp_make_share_info *k2u_info = shmat(testcase2_shmid, NULL, 0);
+ if (k2u_info == (void *)-1) {
+ pr_info("child%d, shmat failed, errno: %d", idx, errno);
+ ret = -1;
+ goto out;
+ }
+
+ for (int i = 0; i < TEST2_K2U_NUM; i++) {
+ char *buf = (char *)k2u_info[i].addr;
+ for (int j = 0; j < k2u_info[i].size; j++) {
+ if (buf[j] != 'e') {
+ pr_info("child%d, area check failed, i:%d, j:%d, buf[j]:%d", idx, i, j, buf[j]);
+ ret = -1;
+ goto out;
+ }
+ }
+ }
+
+out:
+ semop(testcase2_semid, &sembuf, 1);
+
+ return ret;
+}
+
+static pid_t fork_and_add_group(int idx, int group_id, int (*child)(int))
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx));
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+ } else
+ return pid;
+}
+
+static int testcase2(void)
+{
+ int group_id = 20;
+ int ret, i, j, status;
+ pid_t child[TEST2_CHILD_NUM];
+
+ struct vmalloc_info ka_info = {
+ .size = 3 * PAGE_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vmalloc failed");
+ return -1;
+	}
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out_free;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'e',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out_free;
+ }
+
+ testcase2_semid = semget(TEST2_SEM_KEY, 1, IPC_CREAT | 0644);
+ if (testcase2_semid < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ ret = -1;
+ goto out_free;
+ }
+
+ ret = semctl(testcase2_semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto sem_remove;
+ }
+
+ testcase2_shmid = shmget(TEST2_SHM_KEY, sizeof(struct sp_make_share_info) * TEST2_K2U_NUM,
+ IPC_CREAT | 0666);
+ if (testcase2_shmid < 0) {
+ pr_info("shmget failed, errno: %s", strerror(errno));
+ ret = -1;
+ goto sem_remove;
+ }
+
+ struct sp_make_share_info *k2u_info = shmat(testcase2_shmid, NULL, 0);
+ if (k2u_info == (void *)-1) {
+ pr_info("shmat failed, errno: %d", errno);
+ ret = -1;
+ goto shm_remove;
+ }
+
+ for (i = 0; i < TEST2_CHILD_NUM; i++) {
+ child[i] = fork_and_add_group(i, group_id, testcase2_child);
+ if (child[i] < 0) {
+ pr_info("fork child failed");
+ ret = -1;
+ goto kill_child;
+ }
+ }
+
+ for (j = 0; j < TEST2_K2U_NUM; j++) {
+		k2u_info[j].kva = ka_info.addr;
+		k2u_info[j].size = ka_info.size;
+		k2u_info[j].spg_id = group_id;
+		k2u_info[j].sp_flags = 0;
+		k2u_info[j].pid = getpid();
+
+ ret = ioctl_k2u(dev_fd, k2u_info + j);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_unshare;
+ }
+ }
+
+	// Notify the children to start reading the memory
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = TEST2_CHILD_NUM * 2,
+ .sem_flg = 0,
+ };
+ semop(testcase2_semid, &sembuf, 1);
+
+	// Wait for all children to finish reading
+ sembuf.sem_op = 0;
+ semop(testcase2_semid, &sembuf, 1);
+
+out_unshare:
+ for (j--; j >= 0; j--)
+ if (ioctl_unshare(dev_fd, k2u_info + j) < 0) {
+ pr_info("ioctl_unshare failed");
+ ret = -1;
+ }
+kill_child:
+	for (i--; i >= 0; i--) {
+		if (ret < 0)
+			kill(child[i], SIGKILL);
+		waitpid(child[i], &status, 0);
+ if (!ret) {
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+				pr_info("child%d exited unexpectedly", i);
+ ret = -1;
+ }
+ }
+ }
+shm_remove:
+ if (shmctl(testcase2_shmid, IPC_RMID, NULL) < 0)
+ pr_info("shm remove failed, %s", strerror(errno));
+sem_remove:
+ if (semctl(testcase2_semid, 0, IPC_RMID) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+out_free:
+ ioctl_vfree(dev_fd, &ka_info);
+
+ return ret < 0 ? -1 : 0;
+}
+
+/*
+ * The kernel concurrently allocates several shared-memory regions and calls
+ * k2u to share them all with the same user-space process, which then reads
+ * and writes each region through its own uva. (Watch for memory leaks.)
+ */
+#define TEST3_THREAD_NUM 100
+#define TEST3_K2U_NUM 80
+static struct sp_make_share_info testcase3_k2u_info[TEST3_THREAD_NUM][TEST3_K2U_NUM];
+static struct vmalloc_info testcase3_ka_info[TEST3_THREAD_NUM];
+
+#define TEST3_SEM_KEY 88224
+static int testcase3_semid;
+
+static void *testcase3_thread(void *arg)
+{
+ int ret, i;
+	int idx = (int)(long)arg;
+
+ sem_dec_by_one(testcase3_semid);
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = idx,
+ .addr = testcase3_ka_info[idx].addr,
+ .size = testcase3_ka_info[idx].size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ for (i = 0; i < TEST3_K2U_NUM; i++) {
+ testcase3_k2u_info[idx][i].kva = testcase3_ka_info[idx].addr;
+ testcase3_k2u_info[idx][i].size = testcase3_ka_info[idx].size;
+ testcase3_k2u_info[idx][i].spg_id = SPG_ID_DEFAULT;
+ testcase3_k2u_info[idx][i].sp_flags = 0;
+ testcase3_k2u_info[idx][i].pid = getpid();
+
+ ret = ioctl_k2u(dev_fd, testcase3_k2u_info[idx] + i);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_unshare;
+ }
+ }
+
+ return NULL;
+
+out_unshare:
+ for (i--; i >= 0; i--)
+ if (ioctl_unshare(dev_fd, testcase3_k2u_info[idx] + i) < 0) {
+ pr_info("ioctl_unshare failed");
+ ret = -1;
+ }
+out:
+ return (void *)-1;
+}
+
+static int tc3_vmalloc(void)
+{
+	int i;
+
+ memset(testcase3_ka_info, 0, sizeof(struct vmalloc_info) * TEST3_THREAD_NUM);
+
+ for (i = 0; i < TEST3_THREAD_NUM; i++) {
+ testcase3_ka_info[i].size = 3 * PAGE_SIZE;
+ if (ioctl_vmalloc(dev_fd, testcase3_ka_info + i) < 0)
+ goto vfree;
+ }
+ return 0;
+
+vfree:
+ for (i--; i >= 0; i--)
+ ioctl_vfree(dev_fd, testcase3_ka_info + i);
+ return -1;
+}
+
+static int testcase3(void)
+{
+	int ret = 0, thread_idx;
+ void *thread_ret = NULL;
+ pthread_t threads[TEST3_THREAD_NUM];
+
+	/* Create the semaphore */
+	testcase3_semid = sem_create(TEST3_SEM_KEY, "key");
+	if (testcase3_semid < 0) {
+		pr_info("sem_create failed, %s", strerror(errno));
+		return -1;
+	}
+
+	/* Parent process: vmalloc */
+	if (tc3_vmalloc()) {
+		pr_info("ioctl_vmalloc failed");
+		ret = -1;
+		goto out_remove_sem;
+	}
+
+	/* Kick off the worker threads */
+	for (thread_idx = 0; thread_idx < TEST3_THREAD_NUM; thread_idx++) {
+		ret = pthread_create(threads + thread_idx, NULL, testcase3_thread, (void *)(long)thread_idx);
+		if (ret) {
+			pr_info("pthread create failed");
+			ret = -1;
+			goto out_pthread_join;
+ }
+ }
+ sem_inc_by_val(testcase3_semid, TEST3_THREAD_NUM);
+
+	/* Wait for the worker threads to exit */
+out_pthread_join:
+	if (ret < 0)
+		/* thread creation failed part-way: unblock threads waiting on the semaphore */
+		sem_inc_by_val(testcase3_semid, thread_idx);
+	for (thread_idx--; thread_idx >= 0; thread_idx--) {
+		pthread_join(threads[thread_idx], &thread_ret);
+		if (thread_ret != NULL) {
+			pr_info("child thread%d exited unexpectedly", thread_idx + 1);
+			ret = -1;
+		}
+	}
+
+	/* Parent verifies the k2u contents and unshares them */
+ for (int i = 0; i < TEST3_THREAD_NUM; i++) {
+ for (int k = 0; k < TEST3_K2U_NUM; k++) {
+ char *buf = (char *)testcase3_k2u_info[i][k].addr;
+ for (int j = 0; j < testcase3_k2u_info[i][k].size; j++)
+ if (!ret && buf[j] != i) {
+ pr_info("area check failed, i:%d, j:%d, buf[j]:%d", i, j, buf[j]);
+ ret = -1;
+ }
+ if (ioctl_unshare(dev_fd, testcase3_k2u_info[i] + k) < 0) {
+ pr_info("ioctl_unshare failed, i:%d, k:%d, addr:0x%lx",
+ i, k, testcase3_k2u_info[i][k].addr);
+ ret = -1;
+ }
+ }
+ }
+
+	/* Parent process: vfree */
+ for (int i = 0; i < TEST3_THREAD_NUM; i++)
+ ioctl_vfree(dev_fd, testcase3_ka_info + i);
+
+out_remove_sem:
+ if (sem_close(testcase3_semid) < 0)
+		pr_info("sem close failed, %s", strerror(errno));
+
+ return ret < 0 ? -1 : 0;
+}
+
+/*
+ * Multiple groups, multiple processes: processes concurrently k2u into their
+ * group, read the memory, and exit.
+ * Each group has several processes; each process vmallocs several regions,
+ * and each region is mapped with several k2u calls.
+ * All processes in a group read through every mapped region; then the owner
+ * process writes, and all other processes plus the kernel read again
+ * (the check must succeed).
+ *
+ * Processes within a group share data via System V shared memory and
+ * synchronize via System V semaphores.
+ */
+#define MAX_PROC_NUM 100
+#define GROUP_NUM 2
+#define PROC_NUM 10	/* must not exceed MAX_PROC_NUM */
+#define ALLOC_NUM 11
+#define K2U_NUM 12
+#define TEST4_SHM_KEY_BASE 10240
+#define TEST4_SEM_KEY_BASE 10840
+
+static int shmids[GROUP_NUM];
+static int semids[GROUP_NUM];
+
+struct testcase4_shmbuf {
+ struct vmalloc_info ka_infos[PROC_NUM][ALLOC_NUM]; //ka_infos[10][11]
+ struct sp_make_share_info
+ k2u_infos[PROC_NUM][ALLOC_NUM][K2U_NUM]; // k2u_infos[10][11][12]
+};
+
+static void testcase4_vfree_and_unshare(struct vmalloc_info *ka_infos,
+ struct sp_make_share_info (*k2u_infos)[K2U_NUM], int group_idx, int proc_idx)
+{
+ for (int i = 0; i < ALLOC_NUM && ka_infos[i].addr; i++) {
+ for (int j = 0; j < K2U_NUM && k2u_infos[i][j].addr; j++) {
+ if (ioctl_unshare(dev_fd, k2u_infos[i] + j))
+ pr_info("ioctl unshare failed, errno: %d", errno);
+ k2u_infos[i][j].addr = 0;
+ }
+ if (ioctl_vfree(dev_fd, ka_infos + i))
+ pr_info("ioctl vfree failed, errno: %d", errno);
+ ka_infos[i].addr = 0;
+ }
+}
+
+static void testcase4_vfree_and_unshare_all(struct testcase4_shmbuf *shmbuf)
+{
+	pr_info("this is %s", __func__);
+ for (int i = 0; i < PROC_NUM; i++)
+ testcase4_vfree_and_unshare(shmbuf->ka_infos[i], shmbuf->k2u_infos[i], -1, -1);
+}
+
+static int testcase4_check_memory(struct sp_make_share_info
+ k2u_infos[ALLOC_NUM][K2U_NUM],
+ char offset, int group_idx, int proc_idx)
+{
+ for (int j = 0; j < ALLOC_NUM; j++) {
+ char expect = offset + j;
+ for (int k = 0; k < K2U_NUM; k++) {
+ char *buf = (char *)k2u_infos[j][k].addr;
+ for (int l = 0; l < k2u_infos[j][k].size; l++)
+ if (buf[l] != expect) {
+ pr_info("memory check failed");
+ return -1;
+ }
+ }
+ }
+ return 0;
+}
+
+static int testcase4_check_memory_all(struct sp_make_share_info
+ (*k2u_infos)[ALLOC_NUM][K2U_NUM],
+ char offset)
+{
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ for (int j = 0; j < ALLOC_NUM; j++) {
+ char expect = offset + j;
+ for (int k = 0; k < K2U_NUM; k++) {
+ char *buf = (char *)k2u_infos[i][j][k].addr;
+ for (int l = 0; l < k2u_infos[i][j][k].size; l++)
+ if (buf[l] != expect) {
+ pr_info("memory check failed");
+ return -1;
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
+static void testcase4_set_memory(struct sp_make_share_info
+ (*k2u_infos)[ALLOC_NUM][K2U_NUM])
+{
+ for (int i = 0; i < PROC_NUM; i++)
+ for (int j = 0; j < ALLOC_NUM; j++) {
+ char *buf = (char *)k2u_infos[i][j][0].addr;
+ for (int l = 0; l < k2u_infos[i][j][0].size; l++)
+ buf[l]++;
+ }
+}
+
+#if 0
+#define ERRPR(fun, name, idx) \
+do { \
+ int ret = fun; \
+ if (ret < 0) \
+ pr_info(#fun "failed: %s", strerror(errno)); \
+ else \
+ pr_info(name "%d, " #fun "success", idx); \
+} while (0)
+#else
+#define ERRPR(fun, name, idx) fun
+#endif
+
+static int testcase4_grandchild(int idx)
+{
+
+ int ret, i, j;
+	int proc_idx = idx % MAX_PROC_NUM;		// 0 .. PROC_NUM - 1
+	int group_idx = idx / MAX_PROC_NUM - 1;		// 0 .. GROUP_NUM - 1
+	int group_id = (group_idx + 1) * MAX_PROC_NUM;	// 100, 200, ...
+ int semid = semids[group_idx];
+ int shmid = shmids[group_idx];
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+
+ struct testcase4_shmbuf *shmbuf = shmat(shmid, NULL, 0);
+ if (shmbuf == (void *)-1) {
+ pr_info("grandchild%d, shmat failed, errno: %d", idx, errno);
+ ret = -1;
+ goto error_out;
+ }
+
+	// Allocate memory and k2u it into the group
+ struct vmalloc_info *ka_info = shmbuf->ka_infos[proc_idx];
+ struct sp_make_share_info (*k2u_info)[K2U_NUM] = shmbuf->k2u_infos[proc_idx];
+ for (i = 0; i < ALLOC_NUM; i++) {
+ ka_info[i].size = 3 * PAGE_SIZE;
+ ret = ioctl_vmalloc(dev_fd, ka_info + i);
+ if (ret < 0) {
+ pr_info("ioctl_vmalloc failed");
+ ret = -1;
+ goto out_unshare;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = i + 'A',
+ .addr = ka_info[i].addr,
+ .size = ka_info[i].size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out_unshare;
+ }
+
+ for (j = 0; j < K2U_NUM; j++) {
+ k2u_info[i][j].kva = ka_info[i].addr;
+ k2u_info[i][j].size = ka_info[i].size;
+ k2u_info[i][j].spg_id = group_id;
+ k2u_info[i][j].sp_flags = 0;
+ k2u_info[i][j].pid = getppid();
+
+ ret = ioctl_k2u(dev_fd, k2u_info[i] + j);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_unshare;
+ }
+ }
+ }
+
+	// Decrement the semaphore to signal that vmalloc and k2u are done
+	ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+	// Wait for the semaphore to reach 0 (all children finished k2u)
+	sembuf.sem_op = 0;
+	ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+	// All children and the parent read the memory and check it
+	ret = testcase4_check_memory(shmbuf->k2u_infos[proc_idx], 'A', group_idx, proc_idx);
+	// Done checking; notify the parent by decrementing the semaphore
+	sembuf.sem_op = -1;
+	ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+	// Wait for the semaphore to reach 0 (all processes done checking)
+	sembuf.sem_op = 0;
+	ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+	// Wait for the parent to finish writing the memory
+	sembuf.sem_op = -1;
+	ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+	// All children check the memory again
+	if (!ret)
+		ret = testcase4_check_memory(shmbuf->k2u_infos[proc_idx], 'A' + 1, group_idx, proc_idx);
+	if (ret < 0)
+		pr_info("child %d grandchild %d check_memory('A' + 1) failed", group_idx, proc_idx);
+	else
+		pr_info("child %d grandchild %d check_memory('A' + 1) success", group_idx, proc_idx);
+	// Notify the parent that this child finished its check
+	ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+ testcase4_vfree_and_unshare(ka_info, k2u_info, group_idx, proc_idx);
+ return ret;
+
+out_unshare:
+ testcase4_vfree_and_unshare(ka_info, k2u_info, group_idx, proc_idx);
+error_out:
+ kill(getppid(), SIGKILL);
+ return -1;
+}
+
+static int testcase4_child(int idx)
+{
+ int ret, child_num, status;
+	int group_id = (idx + 1) * MAX_PROC_NUM;	// spg_id = 100, 200, ...
+ int semid = semids[idx];
+ int shmid = shmids[idx];
+ pid_t child[PROC_NUM]; // 10
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct testcase4_shmbuf *shmbuf = shmat(shmid, NULL, 0);
+ if (shmbuf == (void *)-1) {
+		pr_info("child%d, shmat failed, errno: %d", idx, errno);
+ return -1;
+ }
+ memset(shmbuf, 0, sizeof(*shmbuf));
+
+	// Fork the grandchildren and add them to the group
+ for (child_num = 0; child_num < PROC_NUM; child_num++) {
+ child[child_num] = fork_and_add_group(group_id + child_num, group_id, testcase4_grandchild);
+ if (child[child_num] < 0) {
+ ret = -1;
+ goto kill_child;
+ }
+ }
+
+	// Tell the grandchildren to start vmalloc and k2u into the group
+	struct sembuf sembuf = {
+		.sem_num = 0,
+		.sem_op = PROC_NUM * 2,
+		.sem_flg = 0,
+	};
+	ERRPR(semop(semid, &sembuf, 1), "child", idx);
+	// Wait for the grandchildren to finish
+	sembuf.sem_op = 0;
+	ERRPR(semop(semid, &sembuf, 1), "child", idx);
+	// All grandchildren and this process read and check the memory
+	ret = testcase4_check_memory_all(shmbuf->k2u_infos, 'A');
+	if (ret < 0)
+		goto unshare;
+	// Wait for all grandchildren to finish reading
+	sleep(1);
+	sembuf.sem_op = PROC_NUM;
+	ERRPR(semop(semid, &sembuf, 1), "child", idx);
+	sembuf.sem_op = 0;
+	ERRPR(semop(semid, &sembuf, 1), "child", idx);
+	// Write the memory
+	testcase4_set_memory(shmbuf->k2u_infos);
+	// Tell the grandchildren that the write is done
+	sleep(1);
+	sembuf.sem_op = PROC_NUM * 2;
+	ERRPR(semop(semid, &sembuf, 1), "child", idx);
+	// Wait for the grandchildren to finish re-reading
+	sembuf.sem_op = 0;
+	ERRPR(semop(semid, &sembuf, 1), "child", idx);
+
+	// Wait for the grandchildren to exit
+	for (int i = 0; i < PROC_NUM; i++) {
+		waitpid(child[i], &status, 0);
+		if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("grandchild%d exited unexpectedly", i);
+			ret = -1;
+		}
+		child[i] = 0;
+	}
+
+	return ret < 0 ? -1 : 0;
+
+unshare:
+ testcase4_vfree_and_unshare_all(shmbuf);
+kill_child:
+ for (child_num--; child_num >= 0; child_num--) {
+ if (ret < 0) {
+ kill(child[child_num], SIGKILL);
+ waitpid(child[child_num], NULL, 0);
+ } else {
+ waitpid(child[child_num], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+				pr_info("grandchild%d exited unexpectedly", group_id + child_num);
+ ret = -1;
+ }
+ }
+ }
+
+ return ret < 0 ? -1 : 0;
+}
+
+static int testcase4(void)
+{
+ int ret = 0;
+
+	// init semaphores and shared memory areas
+	for (int i = 0; i < GROUP_NUM; i++)
+		shmids[i] = semids[i] = -1;
+ for (int i = 0; i < GROUP_NUM; i++) {
+ semids[i] = semget(TEST4_SEM_KEY_BASE + i, 1, IPC_CREAT | 0644);
+ if (semids[i] < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ ret = -1;
+ goto sem_remove;
+ }
+
+ ret = semctl(semids[i], 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ ret = -1;
+ goto sem_remove;
+ }
+ }
+
+ for (int i = 0; i < GROUP_NUM; i++) {
+ shmids[i] = shmget(TEST4_SHM_KEY_BASE + i, sizeof(struct testcase4_shmbuf), IPC_CREAT | 0666);
+ if (shmids[i] < 0) {
+ pr_info("shmget failed, errno: %s", strerror(errno));
+ ret = -1;
+ goto shm_remove;
+ }
+ }
+
+ pid_t child[GROUP_NUM] = {0};
+ for (int i = 0; i < GROUP_NUM; i++) {
+ int group_id = i + 20;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("testcase4, group%d fork failed", group_id);
+ continue;
+ } else if (pid == 0) {
+ exit(testcase4_child(i));
+ }
+ child[i] = pid;
+ }
+
+ for (int i = 0; i < GROUP_NUM; i++) {
+ if (!child[i])
+ continue;
+
+ int status = 0;
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("testcase4, child%d, exited unexpectedly", i);
+ ret = -1;
+ } else
+ pr_info("testcase4, child%d success!!", i);
+ }
+
+shm_remove:
+	for (int i = 0; i < GROUP_NUM && shmids[i] >= 0; i++)
+ if (shmctl(shmids[i], IPC_RMID, NULL) < 0)
+ pr_info("shm remove failed, %s", strerror(errno));
+sem_remove:
+	for (int i = 0; i < GROUP_NUM && semids[i] >= 0; i++)
+ if (semctl(semids[i], 0, IPC_RMID) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "After the kernel allocates shared memory, repeatedly k2u it to one user process, which reads/writes concurrently through multiple uvas.")
+	TESTCASE_CHILD(testcase2, "After the kernel allocates shared memory, repeatedly k2u it to a share group with many processes; all of them read/write concurrently through their uvas.")
+	TESTCASE_CHILD(testcase3, "The kernel concurrently allocates several shared regions and k2us them all to one user process, which reads/writes each through its own uva.")
+	TESTCASE_CHILD(testcase4, "The kernel concurrently allocates several shared regions and k2us them all to one share group; every process in the group reads/writes through multiple uvas.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_pass_through.c b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_pass_through.c
new file mode 100644
index 000000000000..775fc130c92d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_pass_through.c
@@ -0,0 +1,405 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Description:
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Feb 26 15:49:35 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+#include <fcntl.h> /* For O_* constants */
+#include <semaphore.h>
+
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/sem.h>
+#include <sys/stat.h> /* For mode constants */
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define pr_info(fmt, args...) \
+ printf("[file:%s, func:%s, line:%d] " fmt "\n", __FILE__, __func__, __LINE__, ##args)
+
+#define REPEAT_TIMES 10
+#define TEST1_THREAD_NUM 20
+#define TEST2_ALLOC_TIMES 10
+#define TEST2_PROCESS_NUM 20
+#define TEST4_PROCESS_NUM 10
+#define TEST4_THREAD_NUM 10
+#define TEST4_RUN_TIME 200
+
+static int dev_fd;
+
+static void *testcase1_thread(void *arg)
+{
+ struct sp_alloc_info alloc_info = {0};
+ int ret;
+
+ alloc_info.flag = 0;
+ alloc_info.spg_id = SPG_ID_DEFAULT;
+ alloc_info.size = 4 * PAGE_SIZE;
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed, errno: %d", errno);
+ goto error;
+ }
+
+ pthread_exit((void *)0);
+
+error:
+ pthread_exit((void *)1);
+}
+
+/* Multiple threads, not in any group, calling sp_alloc concurrently */
+static int testcase1(void)
+{
+	int ret = 0, i, j;
+	void *thread_ret = NULL;
+	pthread_t threads[TEST1_THREAD_NUM];
+
+	int semid = sem_create(0xbcd996, "hello");
+	if (semid < 0) {
+		pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+		ret = -1;
+		goto test1_out;
+	}
+
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("%s start times %d", __func__, i);
+ for (j = 0; j < TEST1_THREAD_NUM; j++) {
+			ret = pthread_create(threads + j, NULL, testcase1_thread, NULL);
+			if (ret) {
+				pr_info("pthread create failed");
+				ret = -1;
+				goto out_pthread_join;
+			}
+ }
+
+out_pthread_join:
+ for (j--; j >= 0; j--) {
+ pthread_join(threads[j], &thread_ret);
+ if (thread_ret != NULL) {
+				pr_info("child thread%d exited unexpectedly", j + 1);
+ ret = -1;
+ goto test1_out;
+ }
+ }
+ }
+
+ sem_close(semid);
+test1_out:
+ return ret < 0 ? -1 : 0;
+}
+
+static int semid_tc2;
+
+static pid_t fork_process(int idx, int (*child)(int))
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx));
+ }
+
+ return pid;
+}
+
+static int testcase2_child(int idx)
+{
+ int i;
+ int ret = 0;
+ struct sp_alloc_info alloc_info[TEST2_ALLOC_TIMES] = {0};
+
+ sem_dec_by_one(semid_tc2);
+
+ for (i = 0; i < TEST2_ALLOC_TIMES; i++) {
+ if (idx % 2 == 0) {
+ alloc_info[i].flag = 2;
+ } else {
+ alloc_info[i].flag = 2 | SP_DVPP;
+ }
+ alloc_info[i].spg_id = SPG_ID_DEFAULT;
+ alloc_info[i].size = (idx + 1) * PAGE_SIZE;
+ }
+
+ for (i = 0; i < TEST2_ALLOC_TIMES; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed, errno: %d", errno);
+ goto out;
+ }
+ pr_info("child %d alloc success %d time.", idx, i);
+ }
+
+ for (i = 0; i < TEST2_ALLOC_TIMES; i++) {
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl free failed, errno: %d", errno);
+ goto out;
+ }
+ pr_info("child %d free success %d time.", idx, i);
+ }
+
+out:
+ return ret;
+}
+
+/* Multiple processes, not in any group, calling sp_alloc concurrently */
+static int testcase2(void)
+{
+ int ret = 0;
+ int i, status;
+ pid_t child[TEST2_PROCESS_NUM] = {0};
+
+ semid_tc2 = sem_create(0x1234abc, "testcase2");
+ if (semid_tc2 < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ return -1;
+ }
+
+ for (i = 0; i < TEST2_PROCESS_NUM; i++) {
+ child[i] = fork_process(i, testcase2_child);
+ if (child[i] < 0) {
+ pr_info("fork child failed");
+ ret = -1;
+ goto kill_child;
+ }
+ }
+
+ sem_inc_by_val(semid_tc2, TEST2_PROCESS_NUM);
+
+kill_child:
+ for (i--; i >= 0; i--) {
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("child%d exited unexpectedly", i);
+ ret = -1;
+ }
+ }
+
+	sem_close(semid_tc2);
+
+	return ret < 0 ? -1 : 0;
+}
+
+/* The parent allocates pass-through memory; the child deliberately frees it, which is expected to fail for lack of permission */
+static int testcase3(void)
+{
+ int ret = 0;
+ int status;
+ pid_t pid;
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE_ONLY,
+ .spg_id = SPG_ID_DEFAULT,
+ .size = PMD_SIZE,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed");
+ goto error;
+ }
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ goto error;
+ } else if (pid == 0) {
+ /* sp_free deliberately */
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0 && errno == EINVAL) {
+ pr_info("sp_free return EINVAL as expected");
+ } else if (ret < 0) {
+ pr_info("sp_free return %d unexpectedly", errno);
+ exit(1);
+ } else {
+			pr_info("sp_free return success unexpectedly");
+ exit(1);
+ }
+ exit(0);
+ }
+
+	waitpid(pid, &status, 0);
+	if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+		pr_info("child process failed, status is %d", status);
+		goto error;
+	}
+
+	ioctl_free(dev_fd, &alloc_info);
+	return 0;
+
+error:
+ return -1;
+}
+
+static void *testcase4_thread(void *arg)
+{
+ struct sp_alloc_info alloc_info = {0};
+ int ret;
+	int idx = (int)(long)arg;
+
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+		pthread_exit((void *)1);
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto error;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ alloc_info.spg_id = SPG_ID_DEFAULT;
+ if (idx % 2 == 0) {
+ alloc_info.flag = SP_DVPP;
+ alloc_info.size = (idx + 1) * PAGE_SIZE;
+ } else {
+ alloc_info.flag = SP_HUGEPAGE_ONLY;
+ alloc_info.size = (idx + 1) * PMD_SIZE;
+ }
+
+ for (int i = 0; i < TEST4_RUN_TIME; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed, errno: %d", errno);
+ goto error;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl free failed, errno: %d", errno);
+ goto error;
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto error;
+ }
+
+ char *buf = (char *)k2u_info.addr;
+ for (int j = 0; j < k2u_info.size; j++) {
+ if (buf[j] != 'b') {
+ pr_info("check k2u context failed");
+ goto error;
+ }
+ }
+
+ if (ioctl_unshare(dev_fd, &k2u_info)) {
+ pr_info("unshare memory failed, errno: %d", errno);
+ goto error;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+ pthread_exit((void *)0);
+
+error:
+ ioctl_vfree(dev_fd, &ka_info);
+ pthread_exit((void *)1);
+}
+
+static int testcase4_child(int idx)
+{
+ int i, ret;
+ void *thread_ret = NULL;
+ pthread_t threads[TEST4_THREAD_NUM];
+
+ for (i = 0; i < TEST4_THREAD_NUM; i++) {
+		ret = pthread_create(threads + i, NULL, testcase4_thread, (void *)(long)idx);
+		if (ret) {
+			pr_info("pthread create failed");
+			ret = -1;
+			goto out_pthread_join;
+		}
+ }
+
+out_pthread_join:
+ for (i--; i >= 0; i--) {
+ pthread_join(threads[i], &thread_ret);
+ if (thread_ret != NULL) {
+			pr_info("child thread%d exited unexpectedly", i + 1);
+ ret = -1;
+ }
+ }
+
+ return ret;
+}
+
+/*
+ * Multiple processes, each with multiple threads, mixing pass-through
+ * allocation with k2u-to-task.
+ */
+static int testcase4(void)
+{
+ int ret = 0;
+ int i, status;
+ pid_t child[TEST4_PROCESS_NUM] = {0};
+
+ for (i = 0; i < TEST4_PROCESS_NUM; i++) {
+ child[i] = fork_process(i, testcase4_child);
+ if (child[i] < 0) {
+ pr_info("fork child failed");
+ ret = -1;
+ goto kill_child;
+ }
+ }
+
+kill_child:
+	for (i--; i >= 0; i--) {
+		if (ret != 0) {
+			/* a failure occurred: forcibly stop and reap the remaining children */
+			kill(child[i], SIGKILL);
+			waitpid(child[i], NULL, 0);
+			continue;
+		}
+		waitpid(child[i], &status, 0);
+		if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("child%d exited unexpectedly", i);
+			ret = -1;
+		}
+	}
+
+	return ret != 0 ? -1 : 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Multiple threads, not in a group, concurrently allocating pass-through memory")
+	TESTCASE_CHILD(testcase2, "Multiple processes, not in a group, concurrently allocating pass-through memory")
+	TESTCASE_CHILD(testcase3, "Parent allocates pass-through memory; child deliberately frees it, expected to fail without permission")
+	TESTCASE_CHILD(testcase4, "Multiple processes, each multi-threaded, mixing pass-through allocation with k2u to task")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_thread_k2u.c b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_thread_k2u.c
new file mode 100644
index 000000000000..01b569509ce8
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_thread_k2u.c
@@ -0,0 +1,197 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 07:53:12 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/shm.h>
+#include <sys/sem.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+
+/*
+ * Same group, multiple processes, multiple threads; each thread runs:
+ * vmalloc -> k2u -> read/write -> unshare -> vfree
+ */
+#define PROCESS_PER_GROUP 2
+#define THREAD_PER_PROCESS 128
+
+static int testcase_semid = -1;
+
+/* Thread: vmalloc -> (k2u + user-space write + unshare) x 100 -> vfree */
+static void *testcase_thread_routing(void *arg)
+{
+ int ret = 0;
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE * 2,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vmalloc failed");
+ return (void *)-1;
+	}
+
+ for (int i = 0; i < 100; i++) {
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+		// If this process has not joined a group, this is k2u-to-task; otherwise k2u-to-spg
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ret = -1;
+ goto out;
+ }
+ memset((void *)k2u_info.addr, 'a', k2u_info.size);
+		if (ioctl_unshare(dev_fd, &k2u_info) < 0) {
+			pr_info("ioctl_unshare failed, errno: %d", errno);
+			ret = -1;
+			goto out;
+		}
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+
+	return (void *)(long)ret;
+}
+
+static int testcase_child_process(int idx)
+{
+ int ret = 0;
+ pthread_t threads[THREAD_PER_PROCESS] = {0};
+
+ if (testcase_semid == -1) {
+		pr_info("unexpected semid");
+ return -1;
+ }
+
+ sem_dec_by_one(testcase_semid);
+
+ for (int i = 0; i < THREAD_PER_PROCESS; i++) {
+ ret = pthread_create(threads + i, NULL, testcase_thread_routing, NULL);
+		if (ret) {
+ pr_info("pthread create failed");
+ ret = -1;
+ goto out;
+ }
+ }
+
+ pr_info("child%d create %d threads success", idx, THREAD_PER_PROCESS);
+
+out:
+ for (int i = 0; i < THREAD_PER_PROCESS; i++)
+ if (threads[i])
+ pthread_join(threads[i], NULL);
+
+ return ret;
+}
+
+static pid_t fork_and_add_group(int idx, int group_id, int (*child)(int))
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx));
+ }
+
+	// Not joining a group
+ if (group_id == SPG_ID_DEFAULT)
+ return pid;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+ } else
+ return pid;
+}
+
+#define TEST_SEM_KEY 0xffabc
+static int testcase_routing(int group_id, int (*child_process)(int))
+{
+ int ret = 0, status;
+ pid_t child[PROCESS_PER_GROUP] = {0};
+
+	testcase_semid = sem_create(TEST_SEM_KEY, "test_mult_thread_k2u");
+ if (testcase_semid < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ return -1;
+ }
+
+ for (int i = 0; i < PROCESS_PER_GROUP; i++) {
+		pid_t pid = fork_and_add_group(i, group_id, child_process);
+		if (pid < 0) {
+			ret = -1;
+			goto out;
+		}
+		child[i] = pid;
+ }
+
+ sem_inc_by_val(testcase_semid, PROCESS_PER_GROUP);
+
+	for (int i = 0; i < PROCESS_PER_GROUP; i++) {
+		waitpid(child[i], &status, 0);
+		child[i] = 0;
+		if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("childprocess%d exited unexpectedly", i);
+			ret = -1;
+			goto out;
+		}
+	}
+ goto sem_remove;
+
+out:
+ for (int i = 0; i < PROCESS_PER_GROUP; i++)
+ if (child[i]) {
+ kill(child[i], SIGKILL);
+ waitpid(child[i], NULL, 0);
+ }
+
+sem_remove:
+ if (sem_close(testcase_semid) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return ret;
+}
+
+static int testcase1(void) { return testcase_routing(10, testcase_child_process); }
+static int testcase2(void) { return testcase_routing(SPG_ID_DEFAULT, testcase_child_process); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Multi-threaded vmalloc->k2spg->memset->unshare->vfree")
+	TESTCASE_CHILD(testcase2, "Multi-threaded vmalloc->k2task->memset->unshare->vfree")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k.c b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k.c
new file mode 100644
index 000000000000..04a5a3e5c6e0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k.c
@@ -0,0 +1,514 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Nov 24 15:40:31 2020
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_ALLOC 100000
+#define MAX_SHARE 1000
+#define MAX_READ 100000
+
+static int alloc_num = 20;
+static int share_num = 20;
+static int read_num = 20;
+
+struct __thread_info {
+ struct sp_make_share_info *u2k_info;
+ struct karea_access_info *karea_info;
+};
+
+/*
+ * Userspace process A joins a group, allocates and writes memory N, then shares it to the kernel via repeated u2k calls (share_num times); the kernel module successfully reads the same memory N through each kva repeatedly (read_num times).
+ * After process A stops sharing N, kernel reads of N fail. Process A then frees N.
+ */
+static void *grandchild1(void *arg)
+{
+ struct karea_access_info *karea_info = (struct karea_access_info*)arg;
+ int ret = 0;
+ for (int j = 0; j < read_num; j++) {
+ ret = ioctl_karea_access(dev_fd, karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+			pthread_exit((void *)(long)ret);
+ }
+ pr_info("thread read u2k area %dth time success", j);
+ }
+ pr_info("thread read u2k area %d times success", read_num);
+	pthread_exit((void *)(long)ret);
+}
+
+static int child1(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+ pr_info("want to add group_id: %d", group_id);
+
+ // add group()
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("now added into group_id: %d", alloc_info->spg_id);
+
+ // alloc()
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("alloc %0lx memory success", alloc_info->size);
+
+ // write
+ memset((void *)alloc_info->addr, 'o', alloc_info->size);
+ pr_info("memset success");
+
+ // u2k
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+ struct karea_access_info *karea_info = (struct karea_access_info*)malloc(share_num * sizeof(struct karea_access_info));
+
+	for (int i = 0; i < share_num; i++) { // the same userspace region can be shared to the kernel many times
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ karea_info[i].mod = KAREA_CHECK;
+ karea_info[i].value = 'o';
+ karea_info[i].addr = u2k_info.addr;
+ karea_info[i].size = u2k_info.size;
+ }
+ pr_info("u2k share %d times success", share_num);
+
+	// repeated kernel reads (too slow; disabled)
+ //for (int j = 0; j < read_num; j++) {
+ for (int i = 0; i < share_num; i++) {
+ ret = ioctl_karea_access(dev_fd, &karea_info[i]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+ pr_info("kernel read %dth %0lx area success", i, alloc_info->size);
+ }
+ //}
+ //pr_info("kernel read %d times success", read_num);
+
+	// concurrent kernel reads, one thread per shared kva
+ pthread_t childs[MAX_SHARE] = {0};
+ int status = 0;
+ for (int i = 0; i < share_num; i++) {
+ ret = pthread_create(&childs[i], NULL, grandchild1, (void *)&karea_info[i]);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ exit(-1);
+ }
+ }
+ pr_info("create %d threads success", share_num);
+
+ void *child_ret;
+ for (int i = 0; i < share_num; i++) {
+ pthread_join(childs[i], &child_ret);
+		if ((long)child_ret != 0) {
+			pr_info("grandchild1 %d test failed, %ld", i, (long)child_ret);
+			return (int)(long)child_ret;
+ }
+ }
+ pr_info("exit %d threads success", share_num);
+
+ for (int i = 0; i < share_num; i++) {
+ u2k_info.addr = karea_info[i].addr;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+ pr_info("unshare u2k area %d times success", share_num);
+
+ /*
+	 * Accessing the memory again after unshare crashes the kernel; enable this block only during manual testing.
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ free(karea_info);
+ return ret;
+}
+
+/*
+ * Userspace process A joins a group, allocates and writes alloc_num memory blocks, and shares them to the kernel via u2k; the kernel module successfully reads each block through its kva (read_num reads per kva).
+ * After process A stops sharing, kernel reads of the memory fail. Process A then frees the memory.
+ */
+static void* grandchild2(void *arg)
+{
+ int ret = 0;
+ struct __thread_info* thread2_info = (struct __thread_info*)arg;
+ ret = ioctl_u2k(dev_fd, thread2_info->u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+		pthread_exit((void *)(long)ret);
+ }
+ thread2_info->karea_info->mod = KAREA_CHECK;
+ thread2_info->karea_info->value = 'p';
+ thread2_info->karea_info->addr = thread2_info->u2k_info->addr;
+ thread2_info->karea_info->size = thread2_info->u2k_info->size;
+	pthread_exit((void *)(long)ret);
+}
+
+static int child2(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+	// allocate N memory blocks
+ struct sp_alloc_info *all_alloc_info = (struct sp_alloc_info*)malloc(alloc_num * sizeof(struct sp_alloc_info));
+ for (int i = 0; i < alloc_num; i++) {
+ all_alloc_info[i].flag = alloc_info->flag;
+ all_alloc_info[i].spg_id = alloc_info->spg_id;
+ all_alloc_info[i].size = alloc_info->size;
+ ret = ioctl_alloc(dev_fd, &all_alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)all_alloc_info[i].addr, 'p', all_alloc_info[i].size);
+ }
+
+ struct sp_make_share_info *all_u2k_info = (struct sp_make_share_info*)malloc(alloc_num * sizeof(struct sp_make_share_info));
+
+ struct karea_access_info *karea_info = (struct karea_access_info*)malloc(alloc_num * sizeof(struct karea_access_info));
+
+	// concurrent u2k calls
+	// create one thread per allocated block; each calls u2k and stores the returned kernel address
+ pthread_t childs[MAX_ALLOC] = {0};
+ struct __thread_info thread2_info[MAX_ALLOC];
+ int status = 0;
+ for (int i = 0; i < alloc_num; i++) {
+ all_u2k_info[i].uva = all_alloc_info[i].addr;
+ all_u2k_info[i].size = all_alloc_info[i].size;
+ all_u2k_info[i].pid = getpid();
+ thread2_info[i].u2k_info = &all_u2k_info[i];
+ thread2_info[i].karea_info = &karea_info[i];
+ ret = pthread_create(&childs[i], NULL, grandchild2, (void *)&thread2_info[i]);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ exit(-1);
+ }
+ }
+
+	// join all threads
+ void *child_ret;
+ for (int i = 0; i < alloc_num; i++) {
+ pthread_join(childs[i], &child_ret);
+		if ((long)child_ret != 0) {
+			pr_info("grandchild2 %d test failed, %ld", i, (long)child_ret);
+			return (int)(long)child_ret;
+ }
+ }
+
+	// kernel reads the memory
+ for (int j = 0; j < read_num; j++) {
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_karea_access(dev_fd, &karea_info[i]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+ }
+ }
+
+	// unshare all blocks
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_unshare(dev_fd, &all_u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ /*
+	 * Accessing the memory again after unshare crashes the kernel; enable this block only during manual testing.
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+	// free all blocks
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, &all_alloc_info[i]);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+ free(all_alloc_info);
+ free(all_u2k_info);
+ free(karea_info);
+ return ret;
+}
+
+/*
+ * Userspace process A joins a group, allocates and writes memory N, then repeats share_num times: u2k share to the kernel, successful kernel read of N, userspace unshare.
+ * After the final unshare, kernel reads of N fail. Process A then frees N.
+ */
+static int child3(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ // alloc
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info->addr, 'q', alloc_info->size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+	// repeat u2k -> kernel read -> unshare
+ for (int i = 0; i < share_num; i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'q',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ /*
+	 * Accessing the memory again after unshare crashes the kernel; enable this block only during manual testing.
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ return ret;
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_mult_u2k -n alloc_num -s share_num -r read_num\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "n:s:r:")) != -1) {
+ switch (opt) {
+		case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > MAX_ALLOC || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+		case 's': // number of u2k shares
+ share_num = atoi(optarg);
+ if (share_num > MAX_SHARE || share_num <= 0) {
+ printf("share number invalid\n");
+ return -1;
+ }
+ break;
+		case 'r': // number of kernel reads
+ read_num = atoi(optarg);
+ if (read_num > MAX_READ || read_num <= 0) {
+ printf("read number invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+#define PROC_NUM 4
+static struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .spg_id = 10,
+ .size = 100 * PAGE_SIZE, //400K
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .spg_id = 12,
+ .size = 10 * PMD_SIZE, // 20M
+ },
+ {
+ .flag = SP_DVPP,
+ .spg_id = 19,
+		.size = 100000, // ~100K
+ },
+ {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = 19,
+		.size = 10000000, // ~10M
+ },
+};
+
+int (*child_funcs[])(struct sp_alloc_info *) = {
+ child1,
+ child2,
+ child3,
+};
+
+static int testcase(int child_idx)
+{
+ int ret = 0;
+ pid_t procs[PROC_NUM];
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ ret = -1;
+ goto error_out;
+ } else if (pid == 0) {
+ exit(child_funcs[child_idx](alloc_infos + i));
+ } else {
+ procs[i] = pid;
+ }
+ }
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ int status = 0;
+ waitpid(procs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase failed!!, func: %d, alloc info: %d", child_idx, i);
+ ret = -1;
+ } else {
+ pr_info("testcase success!!, func: %d, alloc info: %d", child_idx, i);
+ }
+ }
+
+	return ret;
+error_out:
+ return -1;
+}
+
+static int testcase1(void) { return testcase(0); }
+static int testcase2(void) { return testcase(1); }
+static int testcase3(void) { return testcase(2); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "userspace shares via u2k repeatedly; kernel module reads the same memory repeatedly through each kva")
+	TESTCASE_CHILD(testcase2, "create one thread per allocated block, each calling u2k and storing the returned address; kernel reads the memory")
+	TESTCASE_CHILD(testcase3, "loop of userspace u2k -> kernel read -> unshare")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k3.c b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k3.c
new file mode 100644
index 000000000000..0e4b5728df30
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k3.c
@@ -0,0 +1,314 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 27 13:45:03 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * group_num groups with process_pre_group processes each; every process concurrently runs (sp_alloc -> u2k -> kernel read -> unshare -> sp_free) many times.
+ * Semaphores provide synchronization; each process sleeps right after creation.
+ */
+
+#define MAX_GROUP 500
+#define MAX_PROC_PER_GRP 500
+#define MAX_ALLOC 100000
+
+static sem_t *child_sync[MAX_GROUP];
+static sem_t *grandchild_sync[MAX_GROUP];
+
+static int group_num = 2;
+static int process_pre_group = 100;
+static int alloc_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+	/* wait for the parent process to add us to the group */
+ do {
+ ret = sem_wait(child_sync[arg / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[arg / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_group_by_pid failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = alloc_size,
+ };
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return ret;
+ } else {
+ if (IS_ERR_VALUE(alloc_info.addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_info.addr);
+ return -1;
+ }
+ }
+ memset((void *)alloc_info.addr, 'r', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'r',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_local_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ printf("child %d grandchild %d exit!!\n", arg / MAX_PROC_PER_GRP, getpid());
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_pre_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+				/* Adding to the group sometimes fails with -17 (EEXIST). A retry sometimes succeeds, but can also spin here forever:
+ do {
+ pr_local_info("add grandchild%d to group %d failed, retry", num, group_id);
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ } while (ret < 0);
+ */
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ kill(childs[i], SIGKILL);
+ childs[i] = 0;
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+			/* notify the grandchild that it joined the group */
+ sem_post(child_sync[arg]);
+
+			/* wait for the grandchild to pick up the group membership */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (status) {
+ pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+ ret = status;
+ }
+ }
+
+ printf("child %d exit!!\n", arg);
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_mult_u2k3 -g group_num -p proc_num -n alloc_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_pre_group = atoi(optarg);
+ if (process_pre_group > MAX_PROC_PER_GRP || process_pre_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > MAX_ALLOC || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+		case 's': // allocation size
+			alloc_size = atol(optarg);
+			if ((long)alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[MAX_GROUP];
+
+ pr_local_info("group: %d, process_pre_group: %d", group_num, process_pre_group);
+
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ pr_local_info("exit!!");
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "processes concurrently loop sp_alloc -> u2k -> kernel read -> unshare -> sp_free")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k4.c b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k4.c
new file mode 100644
index 000000000000..c856fd3a3caa
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k4.c
@@ -0,0 +1,310 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 27 13:45:03 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * group_num groups with process_pre_group processes each; every process concurrently runs (sp_alloc -> u2k -> kernel read -> unshare -> sp_free) many times.
+ * Semaphores provide synchronization; each process sleeps right after creation.
+ */
+
+#define MAX_GROUP 500
+#define MAX_PROC_PER_GRP 500
+#define MAX_ALLOC 100000
+
+static sem_t *child_sync[MAX_GROUP];
+static sem_t *grandchild_sync[MAX_GROUP];
+
+static int group_num = 2;
+static int process_pre_group = 100;
+static int alloc_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+	/* wait for the parent process to add us to the group */
+ do {
+ ret = sem_wait(child_sync[arg / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[arg / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_group_by_pid failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = alloc_size,
+ };
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return ret;
+ } else {
+ if (IS_ERR_VALUE(alloc_info.addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_info.addr);
+ return -1;
+ }
+ }
+ memset((void *)alloc_info.addr, 'r', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+		//ret = ioctl_u2k(dev_fd, &u2k_info);  /* disabled: this variant stresses only alloc/free */
+		//if (ret < 0) {
+		//	pr_local_info("ioctl_u2k failed, errno: %d", errno);
+		//	return ret;
+		//}
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'r',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+		//ret = ioctl_karea_access(dev_fd, &karea_info);  /* disabled in this variant */
+		//if (ret < 0) {
+		//	pr_local_info("karea check failed, errno %d", errno);
+		//	return ret;
+		//}
+
+		//ret = ioctl_unshare(dev_fd, &u2k_info);  /* disabled in this variant */
+		//if (ret < 0) {
+		//	pr_local_info("ioctl_unshare failed, errno: %d", errno);
+		//	return ret;
+		//}
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ printf("child %d grandchild %d exit!!\n", arg / MAX_PROC_PER_GRP, getpid());
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_pre_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+				/* Adding to the group sometimes fails with -17 (EEXIST). A retry sometimes succeeds, but can also spin here forever:
+ do {
+ pr_local_info("add grandchild%d to group %d failed, retry", num, group_id);
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ } while (ret < 0);
+ */
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ kill(childs[i], SIGKILL);
+ childs[i] = 0;
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+			/* notify the grandchild that it joined the group */
+ sem_post(child_sync[arg]);
+
+			/* wait for the grandchild to pick up the group membership */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (status) {
+ pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+ ret = status;
+ }
+ }
+
+ printf("child %d exit!!\n", arg);
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_mult_u2k4 -g group_num -p proc_num -n alloc_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_pre_group = atoi(optarg);
+ if (process_pre_group > MAX_PROC_PER_GRP || process_pre_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > MAX_ALLOC || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+		case 's': // allocation size
+			alloc_size = atol(optarg);
+			if ((long)alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[MAX_GROUP];
+
+ pr_local_info("group: %d, process_pre_group: %d", group_num, process_pre_group);
+
+	dev_fd = open_device();
+	if (dev_fd < 0)
+		return -1;
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ pr_local_info("exit!!");
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "processes concurrently run (sp_alloc -> u2k -> kernel read -> unshare -> sp_free) many times")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/stress_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/stress_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/stress_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_alloc_free_two_process.c b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_alloc_free_two_process.c
new file mode 100644
index 000000000000..263821bee137
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_alloc_free_two_process.c
@@ -0,0 +1,303 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Two groups, two processes per group, all allocating and freeing memory many times concurrently.
+ *
+ * Semaphores provide synchronization; each process sleeps right after creation.
+ */
+
+#define NR_GROUP 100
+#define MAX_PROC_PER_GRP 100
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int alloc_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg, sem_t *child_sync, sem_t *grandchild_sync)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+ struct sp_alloc_info *alloc_infos = malloc(sizeof(*alloc_infos) * alloc_num);
+ if (!alloc_infos) {
+ pr_local_info("malloc failed");
+ return -1;
+ }
+
+	/* wait for the parent process to add us to the group */
+ do {
+ ret = sem_wait(child_sync);
+ } while ((ret != 0) && errno == EINTR);
+
+ sleep(1); // it seems sem_wait doesn't work as expected
+ pr_local_info("start!!, ret is %d, errno is %d", ret, errno);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_group_by_pid failed, %d", group_id);
+ goto error_out;
+ }
+
+ for (int i = 0; i < alloc_num; i++) {
+		(alloc_infos + i)->flag = 0;
+		(alloc_infos + i)->spg_id = group_id;
+		(alloc_infos + i)->size = alloc_size;
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ goto error_out;
+ } else {
+ if (IS_ERR_VALUE(alloc_infos[i].addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_infos[i].addr);
+ goto error_out;
+ }
+ }
+
+ memset((void *)alloc_infos[i].addr, 'z', alloc_infos[i].size);
+ }
+
+ sem_post(grandchild_sync);
+ do {
+ ret = sem_wait(child_sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl free failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ pr_local_info("exit!!");
+ return 0;
+
+error_out:
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ return -1;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ sem_t *child_sync[MAX_PROC_PER_GRP] = {0};
+ sem_t *grandchild_sync[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ // create syncs for grandchilds
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ // create syncs for childs
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// fork grandchildren and add them to the group
+ for (int i = 0; i < process_per_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if (pid == 0) {
+ ret = grandchild_process(num, child_sync[i], grandchild_sync[i]);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto error_out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+ /* notify the grandchild that joining the group succeeded */
+ sem_post(child_sync[i]);
+ }
+ }
+
+ for (int i = 0; i < process_per_group; i++)
+ do {
+ ret = sem_wait(grandchild_sync[i]);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < process_per_group; i++)
+ sem_post(child_sync[i]);
+ pr_local_info("grandchild-processes start to do sp_free");
+
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+ ret = -1;
+ }
+ }
+ pr_local_info("exit!!");
+ return ret;
+
+error_out:
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ }
+ pr_local_info("exit!!");
+ return -1;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage: [-p procs_per_group] [-g group_num] [-n alloc_num] [-s alloc_size]\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+ case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+ case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+ case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > 100000 || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Two groups with two processes each allocate and free memory concurrently; basic sanity check")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_fuzz.c b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_fuzz.c
new file mode 100644
index 000000000000..88606c608217
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_fuzz.c
@@ -0,0 +1,543 @@
+#include "sharepool_lib.h"
+#include <unistd.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <time.h>
+#include <sys/queue.h>
+
+#define MAX_GROUP_NUM 1000
+#define MAX_PROC_NUM 128
+#define MAX_KILL_TIME 1000
+#define FUNCTION_NUM 7 // alloc, free, u2k, unshare, group query, k2u, proc_stat
+
+#define SLEEP_TIME 1 // not used
+#define USLEEP_TIME 2000
+
+#define ALLOC_SIZE PAGE_SIZE
+#define ALLOC_MULTIPLE 10 // alloc size range from 1~10 pages
+#define VMALLOC_SIZE (2 * PAGE_SIZE)
+
+int child[MAX_PROC_NUM];
+int group_ids[MAX_GROUP_NUM];
+
+// use a LIST to keep all alloc areas
+struct alloc_area {
+ unsigned long addr;
+ unsigned long size;
+ LIST_ENTRY(alloc_area) entries;
+ int spg_id;
+};
+struct alloc_list {
+ struct alloc_area *lh_first;
+};
+
+// use a LIST to keep all u2k/k2u areas
+enum sp_share_flag {
+ U2K = 0,
+ K2U = 1
+};
+struct sp_share_area {
+ unsigned long addr;
+ unsigned long size;
+ enum sp_share_flag flag;
+ LIST_ENTRY(sp_share_area) entries;
+};
+struct u2k_list {
+ struct sp_share_area *lh_first;
+};
+struct k2u_list {
+ struct sp_share_area *lh_first;
+};
+
+// use a LIST to keep all vmalloc areas
+struct vmalloc_area {
+ unsigned long addr;
+ unsigned long size;
+ LIST_ENTRY(vmalloc_area) entries;
+};
+struct vmalloc_list {
+ struct vmalloc_area *lh_first;
+};
+
+typedef struct list_arg_ {
+ void *list_head;
+ int *list_length; // we keep the list size manually
+} list_arg;
+
+static int spalloc(list_arg alloc_arg);
+static int spfree(list_arg alloc_arg);
+static int spu2k(list_arg alloc_arg, list_arg u2k_arg);
+static int spunshare(list_arg u2k_arg, list_arg k2u_arg, list_arg vmalloc_arg);
+static int spquery();
+static int spk2u(list_arg k2u_arg, list_arg vmalloc_arg);
+static int spreadproc();
+
+static int random_num(int mod_num);
+static int add_multi_group();
+static int check_multi_group();
+static int parse_opt(int argc, char *argv[]);
+
+static int group_num = 64;
+static int process_per_group = 32;
+static int kill_time = 1000;
+
+int fuzz()
+{
+ int ret = 0;
+ int alloc_list_length = 0, u2k_list_length = 0, vmalloc_list_length = 0, k2u_list_length = 0;
+
+ // initialize lists
+ struct alloc_list alloc_list = LIST_HEAD_INITIALIZER(alloc_list);
+ LIST_INIT(&alloc_list);
+ struct u2k_list u2k_list = LIST_HEAD_INITIALIZER(u2k_list);
+ LIST_INIT(&u2k_list);
+ struct vmalloc_list vmalloc_list = LIST_HEAD_INITIALIZER(vmalloc_list);
+ LIST_INIT(&vmalloc_list);
+ struct k2u_list k2u_list = LIST_HEAD_INITIALIZER(k2u_list);
+ LIST_INIT(&k2u_list);
+
+ list_arg alloc_arg = {
+ .list_head = &alloc_list,
+ .list_length = &alloc_list_length,
+ };
+ list_arg u2k_arg = {
+ .list_head = &u2k_list,
+ .list_length = &u2k_list_length,
+ };
+ list_arg k2u_arg = {
+ .list_head = &k2u_list,
+ .list_length = &k2u_list_length,
+ };
+ list_arg vmalloc_arg = {
+ .list_head = &vmalloc_list,
+ .list_length = &vmalloc_list_length,
+ };
+
+ int repeat_time = 0;
+ // begin to fuzz
+ while (repeat_time++ <= kill_time) {
+ switch(random_num(FUNCTION_NUM)) {
+ case 0:
+ ret = spalloc(alloc_arg);
+ break;
+ case 1:
+ ret = spfree(alloc_arg);
+ break;
+ case 2:
+ ret = spu2k(alloc_arg, u2k_arg);
+ break;
+ case 3:
+ ret = spunshare(u2k_arg, k2u_arg, vmalloc_arg);
+ break;
+ case 4:
+ ret = spquery();
+ break;
+ case 5:
+ ret = spk2u(k2u_arg, vmalloc_arg);
+ break;
+ case 6:
+ ret = spreadproc();
+ break;
+ default:
+ break;
+ }
+ if (ret < 0) {
+ pr_info("test process %d failed.", getpid());
+ return ret;
+ }
+ //sleep(SLEEP_TIME);
+ usleep(USLEEP_TIME);
+ }
+
+ return 0;
+}
+
+int main(int argc, char *argv[])
+{
+ int ret;
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ // get opt args
+ ret = parse_opt(argc, argv);
+ if (ret)
+ return -1;
+ else
+ pr_info("\ngroup: %d, process_per_group: %d, kill time: %d\n", group_num, process_per_group, kill_time);
+
+ // create groups
+ for (int i = 0; i < group_num; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (ret < 0) {
+ pr_info("main process add group %d failed.", i + 1);
+ return -1;
+ } else {
+ pr_info("main process add group %d success.", ret);
+ group_ids[i] = ret;
+ }
+ }
+
+ // start test processes
+ for (int i = 0; i < process_per_group; i++) {
+ int pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ } else if (pid == 0) {
+ // change the seed
+ srand((unsigned)(time(NULL) ^ getpid()));
+ // child add groups
+ if (add_multi_group() < 0) {
+ pr_info("child process add all groups failed.");
+ exit(-1);
+ }
+ if (check_multi_group() < 0) {
+ pr_info("child process check all groups failed.");
+ exit(-1);
+ }
+ exit(fuzz());
+ } else {
+ pr_info("fork child %d success.", pid);
+ child[i] = pid;
+ }
+ }
+
+ ret = 0;
+ // waitpid
+ for (int i = 0; i < process_per_group; i++) {
+ int status;
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ } else {
+ pr_info("child%d test success.", i);
+ }
+ }
+
+ close_device(dev_fd);
+ return ret;
+}
+
+static int spalloc(list_arg alloc_arg)
+{
+ int ret;
+ struct alloc_list *list_head = (struct alloc_list *)(alloc_arg.list_head);
+
+ struct sp_alloc_info alloc_info = {
+ .size = ALLOC_SIZE * (random_num(ALLOC_MULTIPLE) + 1),
+ .flag = 0,
+ .spg_id = group_ids[random_num(group_num)],
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("test process %d alloc in group %d failed, errno: %d.",
+ getpid(), alloc_info.spg_id, errno);
+ return -1;
+ }
+
+ // alloc areas are kept in a list
+ pr_info("test process %d alloc in group %d success.", getpid(), alloc_info.spg_id);
+ struct alloc_area *a1 = malloc(sizeof(struct alloc_area));
+ a1->addr = alloc_info.addr;
+ a1->size = alloc_info.size;
+ a1->spg_id = alloc_info.spg_id;
+ LIST_INSERT_HEAD(list_head, a1, entries);
+ ++*alloc_arg.list_length;
+
+ return 0;
+}
+
+static int spfree(list_arg alloc_arg)
+{
+ int ret = 0;
+ struct alloc_list *list_head = (struct alloc_list *)alloc_arg.list_head;
+
+ // return if no alloc areas left
+ if (*alloc_arg.list_length == 0)
+ return 0;
+
+ // free a random one
+ int free_index = random_num(*alloc_arg.list_length);
+ int index = 0;
+ struct alloc_area *alloc_area;
+ LIST_FOREACH(alloc_area, list_head, entries) {
+ if (index++ == free_index) {
+ ret = wrap_sp_free(alloc_area->addr);
+ if (ret < 0) {
+ pr_info("test process %d free %lx failed, %d areas left.",
+ getpid(), alloc_area->addr, *alloc_arg.list_length);
+ return -1;
+ } else {
+ pr_info("test process %d free %lx success, %d areas left.",
+ getpid(), alloc_area->addr, --*alloc_arg.list_length);
+ LIST_REMOVE(alloc_area, entries);
+ }
+ break;
+ }
+ }
+
+ if (--index != free_index)
+ pr_info("index error");
+
+ return 0;
+}
+
+static int spu2k(list_arg alloc_arg, list_arg u2k_arg)
+{
+ int ret;
+ struct alloc_list *alloc_list_head = (struct alloc_list *)alloc_arg.list_head;
+ struct u2k_list *u2k_list_head = (struct u2k_list *)u2k_arg.list_head;
+
+ // return if no alloc areas left
+ if (*alloc_arg.list_length == 0)
+ return 0;
+
+ // u2k a random one
+ int u2k_index = random_num(*alloc_arg.list_length);
+ int index = 0;
+ struct alloc_area *alloc_area;
+ LIST_FOREACH(alloc_area, alloc_list_head, entries) {
+ if (index++ == u2k_index) {
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_area->addr,
+ .size = alloc_area->size,
+ .pid = getpid(),
+ .spg_id = alloc_area->spg_id,
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("u2k failed.");
+ return -1;
+ }
+
+ pr_info("test process %d u2k in group %d success.", getpid(), u2k_info.spg_id);
+ struct sp_share_area *u1 = malloc(sizeof(struct sp_share_area));
+ u1->addr = u2k_info.addr;
+ u1->size = u2k_info.size;
+ u1->flag = U2K;
+ LIST_INSERT_HEAD(u2k_list_head, u1, entries);
+ (*u2k_arg.list_length)++;
+
+ break;
+ }
+ }
+
+ if (--index != u2k_index)
+ pr_info("index error");
+
+ return 0;
+}
+
+static int spunshare(list_arg u2k_arg, list_arg k2u_arg, list_arg vmalloc_arg)
+{
+ int ret = 0;
+ struct u2k_list *u2k_list_head = (struct u2k_list*)u2k_arg.list_head;
+ struct k2u_list *k2u_list_head = (struct k2u_list*)k2u_arg.list_head;
+ struct vmalloc_list *vmalloc_list_head = (struct vmalloc_list*)vmalloc_arg.list_head;
+
+ // unshare a random u2k area
+ if (*u2k_arg.list_length == 0)
+ goto k2u_unshare;
+ int u2k_unshare_index = random_num(*u2k_arg.list_length);
+ int index = 0;
+ struct sp_share_area *u2k_area;
+ LIST_FOREACH(u2k_area, u2k_list_head, entries) {
+ if (index++ == u2k_unshare_index) {
+ ret = wrap_unshare(u2k_area->addr, u2k_area->size);
+ if (ret < 0) {
+ pr_info("test process %d unshare uva %lx failed, %d areas left.",
+ getpid(), u2k_area->addr, *u2k_arg.list_length);
+ return -1;
+ } else {
+ pr_info("test process %d unshare uva %lx success, %d areas left.",
+ getpid(), u2k_area->addr, --*u2k_arg.list_length);
+ LIST_REMOVE(u2k_area, entries);
+ }
+ break;
+ }
+ }
+
+k2u_unshare:
+ if (*k2u_arg.list_length == 0)
+ return 0;
+
+ // unshare a random k2u area
+ int k2u_unshare_index = random_num(*k2u_arg.list_length);
+ index = 0;
+ struct sp_share_area *k2u_area;
+ LIST_FOREACH(k2u_area, k2u_list_head, entries) {
+ if (index++ == k2u_unshare_index) {
+ ret = wrap_unshare(k2u_area->addr, k2u_area->size);
+ if (ret < 0) {
+ pr_info("test process %d unshare kva %lx failed, %d areas left.",
+ getpid(), k2u_area->addr, *k2u_arg.list_length);
+ return -1;
+ } else {
+ pr_info("test process %d unshare kva %lx success, %d areas left.",
+ getpid(), k2u_area->addr, --*k2u_arg.list_length);
+ LIST_REMOVE(k2u_area, entries);
+ }
+ break;
+ }
+ }
+
+ // vfree the vmalloc area
+ int vfree_index = k2u_unshare_index;
+ index = 0;
+ struct vmalloc_area *vmalloc_area;
+ LIST_FOREACH(vmalloc_area, vmalloc_list_head, entries) {
+ if (index++ == vfree_index) {
+ wrap_vfree(vmalloc_area->addr);
+ pr_info("test process %d vfreed %lx, %d areas left.",
+ getpid(), vmalloc_area->addr, --*vmalloc_arg.list_length);
+ LIST_REMOVE(vmalloc_area, entries);
+ break;
+ }
+ }
+
+ return 0;
+}
+
+static int spquery()
+{
+ return check_multi_group();
+}
+
+static int spk2u(list_arg k2u_arg, list_arg vmalloc_arg)
+{
+ int ret = 0;
+ struct vmalloc_list *vmalloc_list_head = (struct vmalloc_list*)vmalloc_arg.list_head;
+ struct k2u_list *k2u_list_head = (struct k2u_list*)k2u_arg.list_head;
+
+ // vmalloc
+ struct vmalloc_info vmalloc_info = {
+ .size = VMALLOC_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("test process %d vmalloc failed, errno: %d", getpid(), errno);
+ return -1;
+ }
+ struct vmalloc_area *v1 = malloc(sizeof(struct vmalloc_area));
+ v1->addr = vmalloc_info.addr;
+ v1->size = vmalloc_info.size;
+ LIST_INSERT_HEAD(vmalloc_list_head, v1, entries);
+ (*vmalloc_arg.list_length)++;
+
+ // k2u to random group
+ struct sp_make_share_info k2u_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .pid = getpid(),
+ .spg_id = random_num(group_num + 1), // 0: k2task, 1..group_num: k2spg
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("test process %d k2u failed, errno: %d", getpid(), errno);
+ return -1;
+ }
+ pr_info("test process %d k2u in group %d success.", getpid(), k2u_info.spg_id);
+ struct sp_share_area *k1 = malloc(sizeof(struct sp_share_area));
+ k1->addr = k2u_info.addr;
+ k1->size = k2u_info.size;
+ k1->flag = K2U;
+ LIST_INSERT_HEAD(k2u_list_head, k1, entries);
+ (*k2u_arg.list_length)++;
+
+ return 0;
+}
+
+static int spreadproc()
+{
+ return sharepool_log("-----test proc_stat & spa_stat-------");
+}
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < group_num; i++) {
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[MAX_GROUP_NUM];
+ int expect_group_num = MAX_GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &expect_group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else {
+ for (int i = 0; i < expect_group_num; i++)
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ pr_info("test process %d add all groups success.", getpid());
+ }
+
+ return ret;
+}
+
+static int random_num(int mod_num)
+{
+ /* divide before multiplying so mod_num * rand() cannot overflow int */
+ return (int)(rand() / (RAND_MAX + 1.0) * mod_num);
+}
+
+static void print_help()
+{
+ printf("Usage: [-p procs_per_group] [-g group_num] [-n loop_count]\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:")) != -1) {
+ switch (opt) {
+ case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_NUM || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+ case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP_NUM || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+ case 'n': // number of iterations
+ kill_time = atoi(optarg);
+ if (kill_time > MAX_KILL_TIME || kill_time <= 0) {
+ printf("iteration count invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_mult_proc_interface.c b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_mult_proc_interface.c
new file mode 100644
index 000000000000..d5966dc1f737
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_mult_proc_interface.c
@@ -0,0 +1,701 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sun Jan 31 14:42:01 2021
+ */
+#include <pthread.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+//#define GROUP_ID 1
+#define TIMES 4
+
+#define REPEAT_TIMES 5
+#define THREAD_NUM 15
+#define PROCESS_NUM 15
+
+#define GROUP_NUM 10
+static int group_ids[GROUP_NUM];
+
+void *thread_k2u_task(void *arg)
+{
+ int ret;
+ pid_t pid = getpid();
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ struct sp_make_share_info k2u_info = {0}, k2u_huge_info = {0};
+ char *addr;
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = SP_DVPP;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = SPG_ID_DEFAULT;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = 0;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = SPG_ID_DEFAULT;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+ //pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+ //pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ //pr_info("\nfinish running thread %lu\n", pthread_self());
+ pthread_exit((void *)0);
+
+error:
+ pthread_exit((void *)1);
+}
+
+/*
+ * alloc - u2k - vmalloc - k2u - unshare - vfree - unshare - free
+ * Exercise the hugepage / hugepage-DVPP / normal-page / normal-page-DVPP variants.
+ * Multi-group: run the routine against every group (query the groups the process belongs to).
+ */
+void *thread_and_process_helper(int group_id)
+{
+ int ret, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[TIMES] = {0};
+ struct sp_make_share_info u2k_info[TIMES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+ // hugepage
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+ // hugepage DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+ // normal page DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+ // normal page
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < TIMES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ //pr_info("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+ //pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+ //pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < TIMES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ //close_device(dev_fd);
+ return 0;
+
+error:
+ //close_device(dev_fd);
+ return -1;
+}
+
+/* for each spg, do the helper test routine */
+void *thread(void *arg)
+{
+ int ret;
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = thread_and_process_helper(group_ids[i]);
+ if (ret != 0) {
+ pr_info("\nthread %lu finish running with error, spg_id: %d\n", pthread_self(), group_ids[i]);
+ pthread_exit((void *)1);
+ }
+ }
+ //pr_info("\nthread %lu finish running all groups success", pthread_self());
+ pthread_exit((void *)0);
+}
+
+static int process()
+{
+ int ret;
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ ret = thread_and_process_helper(spg_ids[i]);
+ if (ret < 0) {
+ pr_info("thread_and_process_helper failed");
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int add_multi_group_auto()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = SPG_ID_AUTO;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else
+ for (int i = 0; i < GROUP_NUM; i++)
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ return ret;
+}
+
+/* Create 15 threads, each running the single-k2task job. Repeat 5 times. */
+static int testcase1(void)
+{
+ int ret = 0;
+ pthread_t tid[THREAD_NUM];
+
+ // add groups
+ if (add_multi_group())
+ return -1;
+
+ //query groups
+ if (check_multi_group())
+ return -1;
+ pr_info("%s add to all groups success.", __FUNCTION__);
+
+ // create 15 threads running the single-k2task job; repeat 5 times
+ pr_info("\n --- thread k2task multi group test --- ");
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+
+ pr_info("thread k2task %dth test, %d times left.", i + 1, REPEAT_TIMES - i - 1);
+
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread_k2u_task, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ return -1;
+ }
+ }
+
+ for (int j = 0; j < THREAD_NUM; j++) {
+ void *tret;
+ if (pthread_join(tid[j], &tret) != 0) {
+ pr_info("can't join thread %d\n", j);
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+
+ sleep(3);
+ }
+
+ return ret;
+}
+
+/* Create 15 threads running the mixed u2k+k2u job. Repeat 5 times. */
+static int testcase2(void)
+{
+ int ret = 0;
+
+ // add groups
+ if (add_multi_group())
+ return -1;
+
+ //query groups
+ if (check_multi_group())
+ return -1;
+ pr_info("%s add to all groups success.", __FUNCTION__);
+
+ // create 15 threads running the mixed u2k+k2u job; repeat 5 times
+ pthread_t tid[THREAD_NUM];
+ void *tret;
+ pr_info("\n --- thread u2k+k2u multi group test --- \n");
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("thread u2k+k2u %dth test, %d times left.", i + 1, REPEAT_TIMES - i - 1);
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ return -1;
+ }
+ }
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_join(tid[j], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", j);
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ }
+ }
+ sleep(3);
+ }
+ return ret;
+}
+
+/* Create 15 processes that join all groups and run the mixed u2k+k2u job. Repeat 5 times. */
+static int testcase3(void)
+{
+ int ret = 0;
+ pid_t childs[PROCESS_NUM];
+
+ // create 15 processes that join all groups and run the mixed u2k+k2u job; repeat 5 times
+ pr_info("\n --- process u2k+k2u multi group test ---\n");
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("process u2k+k2u %dth test, %d times left.", i + 1, REPEAT_TIMES - i - 1);
+ for (int j = 0; j < PROCESS_NUM; j++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+ if (add_multi_group())
+ pr_info("add group failed");
+ pr_info("%s add %dth child to all groups success, %d left",
+ __FUNCTION__, j + 1, PROCESS_NUM - j - 1);
+ exit(process());
+ } else {
+ childs[j] = pid_child;
+ pr_info("fork child%d, pid: %d", j, pid_child);
+ }
+ }
+
+ for (int j = 0; j < PROCESS_NUM; j++) {
+ int status;
+ waitpid(childs[j], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", j, status);
+ ret = -1;
+ }
+ }
+ sleep(1);
+ }
+
+ return ret;
+}
+
+/* Create 15 processes and 15 threads running the mixed u2k+k2u job. Repeat 5 times. */
+static int testcase4(void)
+{
+ pr_info("\n --- process and thread u2k_k2u mix multi group test --- \n");
+
+ int ret = 0;
+
+ // add groups
+ if (add_multi_group())
+ return -1;
+
+ //query groups
+ if (check_multi_group())
+ return -1;
+ pr_info("%s add to all groups success.", __FUNCTION__);
+
+	// Create 15 processes and 15 threads running the mixed u2k+k2u job; repeat 5 times.
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+
+ pr_info("process+thread u2k+k2u %dth test, %d times left.", i + 1, REPEAT_TIMES - i - 1);
+
+ pthread_t tid[THREAD_NUM];
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ return -1;
+ }
+ }
+
+ pid_t childs[PROCESS_NUM];
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+ // add groups
+			// in the forked child a plain return would fall back
+			// into the parent's caller; exit instead
+			if (add_multi_group())
+				exit(-1);
+			// query groups
+			if (check_multi_group())
+				exit(-1);
+ pr_info("%s add %dth child to all groups success, %d left",
+ __FUNCTION__, k + 1, PROCESS_NUM - k - 1);
+ exit(process());
+ } else {
+ childs[k] = pid_child;
+ //pr_info("fork child%d, pid: %d", k, pid_child);
+ }
+ }
+
+ for (int j = 0; j < THREAD_NUM; j++) {
+ void *tret;
+ ret = pthread_join(tid[j], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", j);
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ int status;
+ waitpid(childs[k], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", k, status);
+ ret = -1;
+ }
+ }
+
+ sleep(3);
+ }
+ pr_info("\n --- process and thread u2k_k2u mix multi group test success!! --- \n");
+ sleep(3);
+	return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Create 15 threads running the single k2task job; repeat 5 times.")
+	TESTCASE_CHILD(testcase2, "Create 15 threads running the mixed u2k+k2u job; repeat 5 times.")
+	TESTCASE_CHILD(testcase3, "Create 15 processes, join all groups and run the mixed u2k+k2u job; repeat 5 times.")
+	TESTCASE_CHILD(testcase4, "Create 15 processes and 15 threads running the mixed u2k+k2u job; repeat 5 times.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/test_mult_process.sh b/tools/testing/sharepool/testcase/test_mult_process/test_mult_process.sh
new file mode 100755
index 000000000000..06af521aadb2
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/test_mult_process.sh
@@ -0,0 +1,53 @@
+#!/bin/sh
+
+set -x
+
+echo 'test_u2k_add_and_kill -g 10 -p 20
+ test_alloc_add_and_kill -n 200
+ test_add_multi_cases
+ test_max_group_per_process
+ test_mult_alloc_and_add_group
+ test_mult_process_thread_exit
+ test_mult_thread_add_group
+ test_add_group_and_print
+ test_concurrent_debug -p 10 -g 5 -n 100
+ test_proc_interface_process
+ test_mult_k2u
+ test_mult_pass_through
+ test_mult_thread_k2u
+ test_mult_u2k -n 50
+ test_mult_u2k3
+ test_mult_u2k4
+ test_alloc_free_two_process -p 10 -g 5 -n 100 -s 1000
+ test_fuzz
+ test_mult_proc_interface' | while read line
+do
+	flag=0
+ ./test_mult_process/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase test_mult_process/$line failed"
+		flag=1
+ fi
+ sleep 3
+	# dump spa_stat
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+		flag=1
+ fi
+	# dump proc_stat
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+		flag=1
+ fi
+	# if anything leaked --> exit
+ if [ $flag -eq 1 ] ;then
+ exit 1
+ fi
+ echo "testcase test_mult_process/$line success"
+ cat /proc/meminfo
+ free -m
+done
diff --git a/tools/testing/sharepool/testcase/test_mult_process/test_proc_interface.sh b/tools/testing/sharepool/testcase/test_mult_process/test_proc_interface.sh
new file mode 100755
index 000000000000..03040d13dd87
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/test_proc_interface.sh
@@ -0,0 +1,19 @@
+#!/bin/sh
+
+set -x
+
+./test_mult_process/test_proc_interface_process &
+
+for i in $(seq 1 5) # 5 processes read
+do {
+ for j in $(seq 1 40) # duration: 40x2 = 80 seconds
+ do
+ cat /proc/sharepool/proc_stat
+ cat /proc/sharepool/spa_stat
+ cat /proc/sharepool/proc_overview
+ sleep 2
+ done
+} &
+done
+wait
+
--
2.43.0
[PATCH OLK-6.6] wifi: mac80211: always free skb on ieee80211_tx_prepare_skb() failure
by Yi Yang 09 Apr '26
From: Felix Fietkau <nbd(a)nbd.name>
mainline inclusion
from mainline-v7.0-rc5
commit d5ad6ab61cbd89afdb60881f6274f74328af3ee9
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14084
CVE: CVE-2026-23444
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
ieee80211_tx_prepare_skb() has three error paths, but only two of them
free the skb. The first error path (ieee80211_tx_prepare() returning
TX_DROP) does not free it, while invoke_tx_handlers() failure and the
fragmentation check both do.
Add kfree_skb() to the first error path so all three are consistent,
and remove the now-redundant frees in callers (ath9k, mt76,
mac80211_hwsim) to avoid double-free.
Document the skb ownership guarantee in the function's kdoc.
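The ownership rule the patch establishes can be illustrated with a small userspace sketch (hypothetical names, not the mac80211 API): a prepare function that frees its input on every failure path, so callers never free it again and can neither leak nor double-free.

```c
#include <assert.h>
#include <stdlib.h>

struct buf { int len; };

/* stand-in for ieee80211_tx_prepare() returning TX_DROP */
static int step_ok(struct buf *b)     { return b->len > 0; }
/* stand-in for invoke_tx_handlers() / the fragmentation check */
static int handlers_ok(struct buf *b) { return b->len < 100; }

/* On ANY failure, prepare_buf() frees the buffer itself and returns 0;
 * on success it returns 1 and the caller keeps ownership. */
static int prepare_buf(struct buf *b)
{
	if (!step_ok(b)) {
		free(b);	/* the path the patch fixes: free here too */
		return 0;
	}
	if (!handlers_ok(b)) {
		free(b);	/* the paths that already freed */
		return 0;
	}
	return 1;
}
```

With this contract a caller is simply `if (!prepare_buf(b)) return;` with no cleanup, which is exactly why the redundant frees in ath9k, mt76 and mac80211_hwsim can be removed.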
Signed-off-by: Felix Fietkau <nbd(a)nbd.name>
Link: https://patch.msgid.link/20260314065455.2462900-1-nbd@nbd.name
Fixes: 06be6b149f7e ("mac80211: add ieee80211_tx_prepare_skb() helper function")
Signed-off-by: Johannes Berg <johannes.berg(a)intel.com>
Conflicts:
include/net/mac80211.h
[Commit 0e9824e0d59b2 ("wifi: mac80211: Add missing return value
documentation") was not merged. Context conflicts.]
Signed-off-by: Yi Yang <yiyang13(a)huawei.com>
---
drivers/net/wireless/ath/ath9k/channel.c | 6 ++----
drivers/net/wireless/virtual/mac80211_hwsim.c | 1 -
include/net/mac80211.h | 4 ++++
net/mac80211/tx.c | 4 +++-
4 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/drivers/net/wireless/ath/ath9k/channel.c b/drivers/net/wireless/ath/ath9k/channel.c
index 571062f2e82a..ba8ec5112afe 100644
--- a/drivers/net/wireless/ath/ath9k/channel.c
+++ b/drivers/net/wireless/ath/ath9k/channel.c
@@ -1011,7 +1011,7 @@ static void ath_scan_send_probe(struct ath_softc *sc,
skb_set_queue_mapping(skb, IEEE80211_AC_VO);
if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, NULL))
- goto error;
+ return;
txctl.txq = sc->tx.txq_map[IEEE80211_AC_VO];
if (ath_tx_start(sc->hw, skb, &txctl))
@@ -1124,10 +1124,8 @@ ath_chanctx_send_vif_ps_frame(struct ath_softc *sc, struct ath_vif *avp,
skb->priority = 7;
skb_set_queue_mapping(skb, IEEE80211_AC_VO);
- if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, &sta)) {
- dev_kfree_skb_any(skb);
+ if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, &sta))
return false;
- }
break;
default:
return false;
diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
index 1214e7dcc812..bf12ff0ab06a 100644
--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
+++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
@@ -2892,7 +2892,6 @@ static void hw_scan_work(struct work_struct *work)
hwsim->tmp_chan->band,
NULL)) {
rcu_read_unlock();
- kfree_skb(probe);
continue;
}
diff --git a/include/net/mac80211.h b/include/net/mac80211.h
index adaa1b2323d2..85d785060e76 100644
--- a/include/net/mac80211.h
+++ b/include/net/mac80211.h
@@ -7032,6 +7032,10 @@ void ieee80211_report_wowlan_wakeup(struct ieee80211_vif *vif,
* @band: the band to transmit on
* @sta: optional pointer to get the station to send the frame to
*
+ * Return: %true if the skb was prepared, %false otherwise.
+ * On failure, the skb is freed by this function; callers must not
+ * free it again.
+ *
* Note: must be called under RCU lock
*/
bool ieee80211_tx_prepare_skb(struct ieee80211_hw *hw,
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 7eddcb6f9645..2a708132320c 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -1911,8 +1911,10 @@ bool ieee80211_tx_prepare_skb(struct ieee80211_hw *hw,
struct ieee80211_tx_data tx;
struct sk_buff *skb2;
- if (ieee80211_tx_prepare(sdata, &tx, NULL, skb) == TX_DROP)
+ if (ieee80211_tx_prepare(sdata, &tx, NULL, skb) == TX_DROP) {
+ kfree_skb(skb);
return false;
+ }
info->band = band;
info->control.vif = vif;
--
2.25.1
[PATCH openEuler-1.0-LTS] media: dvb-core: fix wrong reinitialization of ringbuffer on reopen
by Chen Jinghuang 09 Apr '26
From: Jens Axboe <axboe(a)kernel.dk>
mainline inclusion
from mainline-v7.0-rc2
commit bfbc0b5b32a8f28ce284add619bf226716a59bc0
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13873
CVE: CVE-2026-23253
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
dvb_dvr_open() calls dvb_ringbuffer_init() when a new reader opens the
DVR device. dvb_ringbuffer_init() calls init_waitqueue_head(), which
reinitializes the waitqueue list head to empty.
Since dmxdev->dvr_buffer.queue is a shared waitqueue (all opens of the
same DVR device share it), this orphans any existing waitqueue entries
from io_uring poll or epoll, leaving them with stale prev/next pointers
while the list head is reset to {self, self}.
The waitqueue and spinlock in dvr_buffer are already properly
initialized once in dvb_dmxdev_init(). The open path only needs to
reset the buffer data pointer, size, and read/write positions.
Replace the dvb_ringbuffer_init() call in dvb_dvr_open() with direct
assignment of data/size and a call to dvb_ringbuffer_reset(), which
properly resets pread, pwrite, and error with correct memory ordering
without touching the waitqueue or spinlock.
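The hazard can be sketched in userspace (illustrative structures, not the dvb-core API): reinitializing a shared list head orphans entries other openers have queued on it, while a reset that only touches the read/write positions leaves waiters intact.

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *prev, *next; };

static void list_init(struct node *h) { h->prev = h->next = h; }
static void list_add(struct node *h, struct node *n)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

struct ring {
	struct node waiters;	/* shared across opens, like dvr_buffer.queue */
	size_t pread, pwrite;
	int error;
};

/* what dvb_dvr_open() does after the fix: reset positions only */
static void ring_reset(struct ring *r)
{
	r->pread = 0;
	r->pwrite = 0;
	r->error = 0;
}
```

Calling `list_init(&r->waiters)` on reopen (the old `dvb_ringbuffer_init()` behaviour) resets the head to point at itself while queued entries still hold stale pointers into it; `ring_reset()` never touches the head, so existing poll/epoll waiters stay linked.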
Cc: stable(a)vger.kernel.org
Fixes: 34731df288a5f ("V4L/DVB (3501): Dmxdev: use dvb_ringbuffer")
Reported-by: syzbot+ab12f0c08dd7ab8d057c(a)syzkaller.appspotmail.com
Tested-by: syzbot+ab12f0c08dd7ab8d057c(a)syzkaller.appspotmail.com
Link: https://lore.kernel.org/all/698a26d3.050a0220.3b3015.007d.GAE@google.com/
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Conflicts:
drivers/media/dvb-core/dmxdev.c
[context conflict]
Signed-off-by: Chen Jinghuang <chenjinghuang2(a)huawei.com>
---
drivers/media/dvb-core/dmxdev.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
index 12b7f698f562..c32adebd0e16 100644
--- a/drivers/media/dvb-core/dmxdev.c
+++ b/drivers/media/dvb-core/dmxdev.c
@@ -178,7 +178,9 @@ static int dvb_dvr_open(struct inode *inode, struct file *file)
mutex_unlock(&dmxdev->mutex);
return -ENOMEM;
}
- dvb_ringbuffer_init(&dmxdev->dvr_buffer, mem, DVR_BUFFER_SIZE);
+ dmxdev->dvr_buffer.data = mem;
+ dmxdev->dvr_buffer.size = DVR_BUFFER_SIZE;
+ dvb_ringbuffer_reset(&dmxdev->dvr_buffer);
if (dmxdev->may_do_mmap)
dvb_vb2_init(&dmxdev->dvr_vb2_ctx, "dvr",
file->f_flags & O_NONBLOCK);
--
2.34.1
08 Apr '26
From: John Johansen <john.johansen(a)canonical.com>
mainline inclusion
from mainline-v7.0-rc4
commit 8e135b8aee5a06c52a4347a5a6d51223c6f36ba3
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14042
CVE: CVE-2026-23411
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
AppArmor was putting the reference to i_private data right after
removing the original entry from the file system. However, the inode
can and does live beyond that point, and some of the fs callback
functions may be invoked after the reference has been put, which
results in a race between freeing the data and accessing it through
the fs.
The rawdata/loaddata is the most likely candidate to lose the race,
as it has the fewest references, but a properly crafted sequence
might trigger the race for the other types stored in i_private as
well.
Fix this by moving the put of the i_private referenced data to the
correct place, which is during inode eviction.
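The tagged-refcount scheme the patch introduces (aa_common_ref) can be sketched in userspace: a common { count, type } header embedded in each object lets one put routine recover the container via container_of() and release the right object type. Names below are illustrative, not the AppArmor API.

```c
#include <assert.h>
#include <stddef.h>

enum reftype { REF_A, REF_B };

/* common header embedded in every object exposed through the fs */
struct common_ref { int count; enum reftype type; };

struct obj_a {
	int payload;
	struct common_ref ref;	/* non-zero offset: container_of matters */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* drop one reference; returns 1 if the object was released */
static int put_common_ref(struct common_ref *ref)
{
	if (!ref)
		return 0;
	if (--ref->count == 0) {
		switch (ref->type) {
		case REF_A: {
			struct obj_a *a = container_of(ref, struct obj_a, ref);
			a->payload = -1;	/* stand-in for freeing obj_a */
			return 1;
		}
		default:
			return 0;
		}
	}
	return 0;
}
```

This is why `evict_inode` can hold a bare `struct aa_common_ref *` in i_private and still drop an ns, proxy, or loaddata reference correctly: the type tag selects the container to put.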
Fixes: c961ee5f21b20 ("apparmor: convert from securityfs to apparmorfs for policy ns files")
Reported-by: Qualys Security Advisory <qsa(a)qualys.com>
Reviewed-by: Georgia Garcia <georgia.garcia(a)canonical.com>
Reviewed-by: Maxime Bélair <maxime.belair(a)canonical.com>
Reviewed-by: Cengiz Can <cengiz.can(a)canonical.com>
Signed-off-by: John Johansen <john.johansen(a)canonical.com>
Conflicts:
security/apparmor/apparmorfs.c
[Commit 92de220a7f33 ("update policy capable checks to use a label")
and commit 482e8050aab4 ("apparmor: don't create raw_sha1 symlink if
sha1 hashing is disabled") was not merged. Context conflicts.]
security/apparmor/include/label.h
[Commit 88fec3526e84 ("apparmor: make sure unix socket labeling is
correctly updated.") was not merged. Context conflicts.]
security/apparmor/label.c
[Commit 69050f8d6d07 ("treewide: Replace kmalloc with kmalloc_obj for
non-scalar types") was not merged. Context conflicts.]
Signed-off-by: Yi Yang <yiyang13(a)huawei.com>
---
security/apparmor/apparmorfs.c | 194 +++++++++++++---------
security/apparmor/include/label.h | 16 +-
security/apparmor/include/lib.h | 12 ++
security/apparmor/include/policy.h | 8 +-
security/apparmor/include/policy_unpack.h | 6 +-
security/apparmor/label.c | 12 +-
security/apparmor/policy_unpack.c | 6 +-
7 files changed, 153 insertions(+), 101 deletions(-)
diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
index e034d3346f82..500a2b46ec50 100644
--- a/security/apparmor/apparmorfs.c
+++ b/security/apparmor/apparmorfs.c
@@ -32,6 +32,7 @@
#include "include/crypto.h"
#include "include/ipc.h"
#include "include/label.h"
+#include "include/lib.h"
#include "include/policy.h"
#include "include/policy_ns.h"
#include "include/resource.h"
@@ -61,6 +62,7 @@
* securityfs and apparmorfs filesystems.
*/
+#define IREF_POISON 101
/*
* support fns
@@ -150,6 +152,71 @@ static int aafs_show_path(struct seq_file *seq, struct dentry *dentry)
return 0;
}
+static struct aa_ns *get_ns_common_ref(struct aa_common_ref *ref)
+{
+ if (ref) {
+ struct aa_label *reflabel = container_of(ref, struct aa_label,
+ count);
+ return aa_get_ns(labels_ns(reflabel));
+ }
+
+ return NULL;
+}
+
+static struct aa_proxy *get_proxy_common_ref(struct aa_common_ref *ref)
+{
+ if (ref)
+ return aa_get_proxy(container_of(ref, struct aa_proxy, count));
+
+ return NULL;
+}
+
+static struct aa_loaddata *get_loaddata_common_ref(struct aa_common_ref *ref)
+{
+ if (ref)
+ return aa_get_i_loaddata(container_of(ref, struct aa_loaddata,
+ count));
+ return NULL;
+}
+
+static void aa_put_common_ref(struct aa_common_ref *ref)
+{
+ if (!ref)
+ return;
+
+ switch (ref->reftype) {
+ case REF_RAWDATA:
+ aa_put_i_loaddata(container_of(ref, struct aa_loaddata,
+ count));
+ break;
+ case REF_PROXY:
+ aa_put_proxy(container_of(ref, struct aa_proxy,
+ count));
+ break;
+ case REF_NS:
+ /* ns count is held on its unconfined label */
+ aa_put_ns(labels_ns(container_of(ref, struct aa_label, count)));
+ break;
+ default:
+ AA_BUG(true, "unknown refcount type");
+ break;
+ }
+}
+
+static void aa_get_common_ref(struct aa_common_ref *ref)
+{
+ kref_get(&ref->count);
+}
+
+static void aafs_evict(struct inode *inode)
+{
+ struct aa_common_ref *ref = inode->i_private;
+
+ clear_inode(inode);
+ aa_put_common_ref(ref);
+ inode->i_private = (void *) IREF_POISON;
+}
+
static void aafs_free_inode(struct inode *inode)
{
if (S_ISLNK(inode->i_mode))
@@ -159,6 +226,7 @@ static void aafs_free_inode(struct inode *inode)
static const struct super_operations aafs_super_ops = {
.statfs = simple_statfs,
+ .evict_inode = aafs_evict,
.free_inode = aafs_free_inode,
.show_path = aafs_show_path,
};
@@ -259,7 +327,8 @@ static int __aafs_setup_d_inode(struct inode *dir, struct dentry *dentry,
* aafs_remove(). Will return ERR_PTR on failure.
*/
static struct dentry *aafs_create(const char *name, umode_t mode,
- struct dentry *parent, void *data, void *link,
+ struct dentry *parent,
+ struct aa_common_ref *data, void *link,
const struct file_operations *fops,
const struct inode_operations *iops)
{
@@ -296,6 +365,9 @@ static struct dentry *aafs_create(const char *name, umode_t mode,
goto fail_dentry;
inode_unlock(dir);
+ if (data)
+ aa_get_common_ref(data);
+
return dentry;
fail_dentry:
@@ -320,7 +392,8 @@ static struct dentry *aafs_create(const char *name, umode_t mode,
* see aafs_create
*/
static struct dentry *aafs_create_file(const char *name, umode_t mode,
- struct dentry *parent, void *data,
+ struct dentry *parent,
+ struct aa_common_ref *data,
const struct file_operations *fops)
{
return aafs_create(name, mode, parent, data, NULL, fops, NULL);
@@ -445,7 +518,7 @@ static ssize_t policy_update(u32 mask, const char __user *buf, size_t size,
static ssize_t profile_load(struct file *f, const char __user *buf, size_t size,
loff_t *pos)
{
- struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);
+ struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private);
int error = policy_update(AA_MAY_LOAD_POLICY, buf, size, pos, ns,
f->f_cred);
@@ -463,7 +536,7 @@ static const struct file_operations aa_fs_profile_load = {
static ssize_t profile_replace(struct file *f, const char __user *buf,
size_t size, loff_t *pos)
{
- struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);
+ struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private);
int error = policy_update(AA_MAY_LOAD_POLICY | AA_MAY_REPLACE_POLICY,
buf, size, pos, ns, f->f_cred);
aa_put_ns(ns);
@@ -483,7 +556,7 @@ static ssize_t profile_remove(struct file *f, const char __user *buf,
struct aa_loaddata *data;
struct aa_label *label;
ssize_t error;
- struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);
+ struct aa_ns *ns = get_ns_common_ref(f->f_inode->i_private);
label = begin_current_label_crit_section();
/* high level check about policy management - fine grained in
@@ -573,7 +646,7 @@ static int ns_revision_open(struct inode *inode, struct file *file)
if (!rev)
return -ENOMEM;
- rev->ns = aa_get_ns(inode->i_private);
+ rev->ns = get_ns_common_ref(inode->i_private);
if (!rev->ns)
rev->ns = aa_get_current_ns();
file->private_data = rev;
@@ -1051,7 +1124,7 @@ static const struct file_operations seq_profile_ ##NAME ##_fops = { \
static int seq_profile_open(struct inode *inode, struct file *file,
int (*show)(struct seq_file *, void *))
{
- struct aa_proxy *proxy = aa_get_proxy(inode->i_private);
+ struct aa_proxy *proxy = get_proxy_common_ref(inode->i_private);
int error = single_open(file, show, proxy);
if (error) {
@@ -1229,7 +1302,7 @@ static const struct file_operations seq_rawdata_ ##NAME ##_fops = { \
static int seq_rawdata_open(struct inode *inode, struct file *file,
int (*show)(struct seq_file *, void *))
{
- struct aa_loaddata *data = aa_get_i_loaddata(inode->i_private);
+ struct aa_loaddata *data = get_loaddata_common_ref(inode->i_private);
int error;
if (!data)
@@ -1369,7 +1442,7 @@ static int rawdata_open(struct inode *inode, struct file *file)
if (!policy_view_capable(NULL))
return -EACCES;
- loaddata = aa_get_i_loaddata(inode->i_private);
+ loaddata = get_loaddata_common_ref(inode->i_private);
if (!loaddata)
return -ENOENT;
@@ -1414,7 +1487,6 @@ static void remove_rawdata_dents(struct aa_loaddata *rawdata)
if (!IS_ERR_OR_NULL(rawdata->dents[i])) {
aafs_remove(rawdata->dents[i]);
rawdata->dents[i] = NULL;
- aa_put_i_loaddata(rawdata);
}
}
}
@@ -1453,45 +1525,41 @@ int __aa_fs_create_rawdata(struct aa_ns *ns, struct aa_loaddata *rawdata)
if (IS_ERR(dir))
/* ->name freed when rawdata freed */
return PTR_ERR(dir);
- aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_DIR] = dir;
- dent = aafs_create_file("abi", S_IFREG | 0444, dir, rawdata,
+ dent = aafs_create_file("abi", S_IFREG | 0444, dir, &rawdata->count,
&seq_rawdata_abi_fops);
if (IS_ERR(dent))
goto fail;
- aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_ABI] = dent;
- dent = aafs_create_file("revision", S_IFREG | 0444, dir, rawdata,
- &seq_rawdata_revision_fops);
+ dent = aafs_create_file("revision", S_IFREG | 0444, dir,
+ &rawdata->count,
+ &seq_rawdata_revision_fops);
if (IS_ERR(dent))
goto fail;
- aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_REVISION] = dent;
if (aa_g_hash_policy) {
dent = aafs_create_file("sha1", S_IFREG | 0444, dir,
- rawdata, &seq_rawdata_hash_fops);
+ &rawdata->count,
+ &seq_rawdata_hash_fops);
if (IS_ERR(dent))
goto fail;
- aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_HASH] = dent;
}
dent = aafs_create_file("compressed_size", S_IFREG | 0444, dir,
- rawdata,
+ &rawdata->count,
&seq_rawdata_compressed_size_fops);
if (IS_ERR(dent))
goto fail;
- aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_COMPRESSED_SIZE] = dent;
- dent = aafs_create_file("raw_data", S_IFREG | 0444,
- dir, rawdata, &rawdata_fops);
+ dent = aafs_create_file("raw_data", S_IFREG | 0444, dir,
+ &rawdata->count, &rawdata_fops);
if (IS_ERR(dent))
goto fail;
- aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_DATA] = dent;
d_inode(dent)->i_size = rawdata->size;
@@ -1502,7 +1570,6 @@ int __aa_fs_create_rawdata(struct aa_ns *ns, struct aa_loaddata *rawdata)
fail:
remove_rawdata_dents(rawdata);
- aa_put_i_loaddata(rawdata);
return PTR_ERR(dent);
}
@@ -1524,13 +1591,10 @@ void __aafs_profile_rmdir(struct aa_profile *profile)
__aafs_profile_rmdir(child);
for (i = AAFS_PROF_SIZEOF - 1; i >= 0; --i) {
- struct aa_proxy *proxy;
if (!profile->dents[i])
continue;
- proxy = d_inode(profile->dents[i])->i_private;
aafs_remove(profile->dents[i]);
- aa_put_proxy(proxy);
profile->dents[i] = NULL;
}
}
@@ -1560,14 +1624,7 @@ static struct dentry *create_profile_file(struct dentry *dir, const char *name,
struct aa_profile *profile,
const struct file_operations *fops)
{
- struct aa_proxy *proxy = aa_get_proxy(profile->label.proxy);
- struct dentry *dent;
-
- dent = aafs_create_file(name, S_IFREG | 0444, dir, proxy, fops);
- if (IS_ERR(dent))
- aa_put_proxy(proxy);
-
- return dent;
+ return aafs_create_file(name, S_IFREG | 0444, dir, &profile->label.proxy->count, fops);
}
static int profile_depth(struct aa_profile *profile)
@@ -1617,7 +1674,8 @@ static const char *rawdata_get_link_base(struct dentry *dentry,
struct delayed_call *done,
const char *name)
{
- struct aa_proxy *proxy = inode->i_private;
+ struct aa_common_ref *ref = inode->i_private;
+ struct aa_proxy *proxy = container_of(ref, struct aa_proxy, count);
struct aa_label *label;
struct aa_profile *profile;
char *target;
@@ -1748,27 +1806,24 @@ int __aafs_profile_mkdir(struct aa_profile *profile, struct dentry *parent)
if (profile->rawdata) {
dent = aafs_create("raw_sha1", S_IFLNK | 0444, dir,
- profile->label.proxy, NULL, NULL,
- &rawdata_link_sha1_iops);
+ &profile->label.proxy->count, NULL,
+ NULL, &rawdata_link_sha1_iops);
if (IS_ERR(dent))
goto fail;
- aa_get_proxy(profile->label.proxy);
profile->dents[AAFS_PROF_RAW_HASH] = dent;
dent = aafs_create("raw_abi", S_IFLNK | 0444, dir,
- profile->label.proxy, NULL, NULL,
+ &profile->label.proxy->count, NULL, NULL,
&rawdata_link_abi_iops);
if (IS_ERR(dent))
goto fail;
- aa_get_proxy(profile->label.proxy);
profile->dents[AAFS_PROF_RAW_ABI] = dent;
dent = aafs_create("raw_data", S_IFLNK | 0444, dir,
- profile->label.proxy, NULL, NULL,
+ &profile->label.proxy->count, NULL, NULL,
&rawdata_link_data_iops);
if (IS_ERR(dent))
goto fail;
- aa_get_proxy(profile->label.proxy);
profile->dents[AAFS_PROF_RAW_DATA] = dent;
}
@@ -1803,7 +1858,7 @@ static int ns_mkdir_op(struct inode *dir, struct dentry *dentry, umode_t mode)
if (error)
return error;
- parent = aa_get_ns(dir->i_private);
+ parent = get_ns_common_ref(dir->i_private);
AA_BUG(d_inode(ns_subns_dir(parent)) != dir);
/* we have to unlock and then relock to get locking order right
@@ -1853,7 +1908,7 @@ static int ns_rmdir_op(struct inode *dir, struct dentry *dentry)
if (error)
return error;
- parent = aa_get_ns(dir->i_private);
+ parent = get_ns_common_ref(dir->i_private);
/* rmdir calls the generic securityfs functions to remove files
* from the apparmor dir. It is up to the apparmor ns locking
* to avoid races.
@@ -1923,27 +1978,6 @@ void __aafs_ns_rmdir(struct aa_ns *ns)
__aa_fs_list_remove_rawdata(ns);
- if (ns_subns_dir(ns)) {
- sub = d_inode(ns_subns_dir(ns))->i_private;
- aa_put_ns(sub);
- }
- if (ns_subload(ns)) {
- sub = d_inode(ns_subload(ns))->i_private;
- aa_put_ns(sub);
- }
- if (ns_subreplace(ns)) {
- sub = d_inode(ns_subreplace(ns))->i_private;
- aa_put_ns(sub);
- }
- if (ns_subremove(ns)) {
- sub = d_inode(ns_subremove(ns))->i_private;
- aa_put_ns(sub);
- }
- if (ns_subrevision(ns)) {
- sub = d_inode(ns_subrevision(ns))->i_private;
- aa_put_ns(sub);
- }
-
for (i = AAFS_NS_SIZEOF - 1; i >= 0; --i) {
aafs_remove(ns->dents[i]);
ns->dents[i] = NULL;
@@ -1968,40 +2002,40 @@ static int __aafs_ns_mkdir_entries(struct aa_ns *ns, struct dentry *dir)
return PTR_ERR(dent);
ns_subdata_dir(ns) = dent;
- dent = aafs_create_file("revision", 0444, dir, ns,
+ dent = aafs_create_file("revision", 0444, dir,
+ &ns->unconfined->label.count,
&aa_fs_ns_revision_fops);
if (IS_ERR(dent))
return PTR_ERR(dent);
- aa_get_ns(ns);
ns_subrevision(ns) = dent;
- dent = aafs_create_file(".load", 0640, dir, ns,
- &aa_fs_profile_load);
+ dent = aafs_create_file(".load", 0640, dir,
+ &ns->unconfined->label.count,
+ &aa_fs_profile_load);
if (IS_ERR(dent))
return PTR_ERR(dent);
- aa_get_ns(ns);
ns_subload(ns) = dent;
- dent = aafs_create_file(".replace", 0640, dir, ns,
- &aa_fs_profile_replace);
+ dent = aafs_create_file(".replace", 0640, dir,
+ &ns->unconfined->label.count,
+ &aa_fs_profile_replace);
if (IS_ERR(dent))
return PTR_ERR(dent);
- aa_get_ns(ns);
ns_subreplace(ns) = dent;
- dent = aafs_create_file(".remove", 0640, dir, ns,
- &aa_fs_profile_remove);
+ dent = aafs_create_file(".remove", 0640, dir,
+ &ns->unconfined->label.count,
+ &aa_fs_profile_remove);
if (IS_ERR(dent))
return PTR_ERR(dent);
- aa_get_ns(ns);
ns_subremove(ns) = dent;
/* use create_dentry so we can supply private data */
- dent = aafs_create("namespaces", S_IFDIR | 0755, dir, ns, NULL, NULL,
- &ns_dir_inode_operations);
+ dent = aafs_create("namespaces", S_IFDIR | 0755, dir,
+ &ns->unconfined->label.count,
+ NULL, NULL, &ns_dir_inode_operations);
if (IS_ERR(dent))
return PTR_ERR(dent);
- aa_get_ns(ns);
ns_subns_dir(ns) = dent;
return 0;
diff --git a/security/apparmor/include/label.h b/security/apparmor/include/label.h
index 1e90384b1523..55986388dfae 100644
--- a/security/apparmor/include/label.h
+++ b/security/apparmor/include/label.h
@@ -103,7 +103,7 @@ enum label_flags {
struct aa_label;
struct aa_proxy {
- struct kref count;
+ struct aa_common_ref count;
struct aa_label __rcu *label;
};
@@ -123,7 +123,7 @@ struct label_it {
* @ent: set of profiles for label, actual size determined by @size
*/
struct aa_label {
- struct kref count;
+ struct aa_common_ref count;
struct rb_node node;
struct rcu_head rcu;
struct aa_proxy *proxy;
@@ -373,7 +373,7 @@ int aa_label_match(struct aa_profile *profile, struct aa_label *label,
*/
static inline struct aa_label *__aa_get_label(struct aa_label *l)
{
- if (l && kref_get_unless_zero(&l->count))
+ if (l && kref_get_unless_zero(&l->count.count))
return l;
return NULL;
@@ -382,7 +382,7 @@ static inline struct aa_label *__aa_get_label(struct aa_label *l)
static inline struct aa_label *aa_get_label(struct aa_label *l)
{
if (l)
- kref_get(&(l->count));
+ kref_get(&(l->count.count));
return l;
}
@@ -402,7 +402,7 @@ static inline struct aa_label *aa_get_label_rcu(struct aa_label __rcu **l)
rcu_read_lock();
do {
c = rcu_dereference(*l);
- } while (c && !kref_get_unless_zero(&c->count));
+ } while (c && !kref_get_unless_zero(&c->count.count));
rcu_read_unlock();
return c;
@@ -442,7 +442,7 @@ static inline struct aa_label *aa_get_newest_label(struct aa_label *l)
static inline void aa_put_label(struct aa_label *l)
{
if (l)
- kref_put(&l->count, aa_label_kref);
+ kref_put(&l->count.count, aa_label_kref);
}
@@ -452,7 +452,7 @@ void aa_proxy_kref(struct kref *kref);
static inline struct aa_proxy *aa_get_proxy(struct aa_proxy *proxy)
{
if (proxy)
- kref_get(&(proxy->count));
+ kref_get(&(proxy->count.count));
return proxy;
}
@@ -460,7 +460,7 @@ static inline struct aa_proxy *aa_get_proxy(struct aa_proxy *proxy)
static inline void aa_put_proxy(struct aa_proxy *proxy)
{
if (proxy)
- kref_put(&proxy->count, aa_proxy_kref);
+ kref_put(&proxy->count.count, aa_proxy_kref);
}
void __aa_proxy_redirect(struct aa_label *orig, struct aa_label *new);
diff --git a/security/apparmor/include/lib.h b/security/apparmor/include/lib.h
index ac5054899f6f..624178827fd2 100644
--- a/security/apparmor/include/lib.h
+++ b/security/apparmor/include/lib.h
@@ -60,6 +60,18 @@ void aa_info_message(const char *str);
/* Security blob offsets */
extern struct lsm_blob_sizes apparmor_blob_sizes;
+enum reftype {
+ REF_NS,
+ REF_PROXY,
+ REF_RAWDATA,
+};
+
+/* common reference count used by data that shows up in aafs */
+struct aa_common_ref {
+ struct kref count;
+ enum reftype reftype;
+};
+
/**
* aa_strneq - compare null terminated @str to a non null terminated substring
* @str: a null terminated string
diff --git a/security/apparmor/include/policy.h b/security/apparmor/include/policy.h
index f6682a31df23..ea91591659b5 100644
--- a/security/apparmor/include/policy.h
+++ b/security/apparmor/include/policy.h
@@ -243,7 +243,7 @@ static inline unsigned int PROFILE_MEDIATES_AF(struct aa_profile *profile,
static inline struct aa_profile *aa_get_profile(struct aa_profile *p)
{
if (p)
- kref_get(&(p->label.count));
+ kref_get(&(p->label.count.count));
return p;
}
@@ -257,7 +257,7 @@ static inline struct aa_profile *aa_get_profile(struct aa_profile *p)
*/
static inline struct aa_profile *aa_get_profile_not0(struct aa_profile *p)
{
- if (p && kref_get_unless_zero(&p->label.count))
+ if (p && kref_get_unless_zero(&p->label.count.count))
return p;
return NULL;
@@ -277,7 +277,7 @@ static inline struct aa_profile *aa_get_profile_rcu(struct aa_profile __rcu **p)
rcu_read_lock();
do {
c = rcu_dereference(*p);
- } while (c && !kref_get_unless_zero(&c->label.count));
+ } while (c && !kref_get_unless_zero(&c->label.count.count));
rcu_read_unlock();
return c;
@@ -290,7 +290,7 @@ static inline struct aa_profile *aa_get_profile_rcu(struct aa_profile __rcu **p)
static inline void aa_put_profile(struct aa_profile *p)
{
if (p)
- kref_put(&p->label.count, aa_label_kref);
+ kref_put(&p->label.count.count, aa_label_kref);
}
static inline int AUDIT_MODE(struct aa_profile *profile)
diff --git a/security/apparmor/include/policy_unpack.h b/security/apparmor/include/policy_unpack.h
index ff122fd156de..05baab73b578 100644
--- a/security/apparmor/include/policy_unpack.h
+++ b/security/apparmor/include/policy_unpack.h
@@ -67,7 +67,7 @@ enum {
* fs entries and drops the associated @count ref.
*/
struct aa_loaddata {
- struct kref count;
+ struct aa_common_ref count;
struct kref pcount;
struct list_head list;
struct work_struct work;
@@ -102,7 +102,7 @@ aa_get_i_loaddata(struct aa_loaddata *data)
{
if (data)
- kref_get(&(data->count));
+ kref_get(&(data->count.count));
return data;
}
@@ -129,7 +129,7 @@ struct aa_loaddata *aa_loaddata_alloc(size_t size);
static inline void aa_put_i_loaddata(struct aa_loaddata *data)
{
if (data)
- kref_put(&data->count, aa_loaddata_kref);
+ kref_put(&data->count.count, aa_loaddata_kref);
}
static inline void aa_put_profile_loaddata(struct aa_loaddata *data)
diff --git a/security/apparmor/label.c b/security/apparmor/label.c
index 66bc4704f804..7cae71daa0f9 100644
--- a/security/apparmor/label.c
+++ b/security/apparmor/label.c
@@ -52,7 +52,8 @@ static void free_proxy(struct aa_proxy *proxy)
void aa_proxy_kref(struct kref *kref)
{
- struct aa_proxy *proxy = container_of(kref, struct aa_proxy, count);
+ struct aa_proxy *proxy = container_of(kref, struct aa_proxy,
+ count.count);
free_proxy(proxy);
}
@@ -63,7 +64,8 @@ struct aa_proxy *aa_alloc_proxy(struct aa_label *label, gfp_t gfp)
new = kzalloc(sizeof(struct aa_proxy), gfp);
if (new) {
- kref_init(&new->count);
+ kref_init(&new->count.count);
+ new->count.reftype = REF_PROXY;
rcu_assign_pointer(new->label, aa_get_label(label));
}
return new;
@@ -366,7 +368,8 @@ static void label_free_rcu(struct rcu_head *head)
void aa_label_kref(struct kref *kref)
{
- struct aa_label *label = container_of(kref, struct aa_label, count);
+ struct aa_label *label = container_of(kref, struct aa_label,
+ count.count);
struct aa_ns *ns = labels_ns(label);
if (!ns) {
@@ -403,7 +406,8 @@ bool aa_label_init(struct aa_label *label, int size, gfp_t gfp)
label->size = size; /* doesn't include null */
label->vec[size] = NULL; /* null terminate */
- kref_init(&label->count);
+ kref_init(&label->count.count);
+ label->count.reftype = REF_NS; /* for aafs purposes */
RB_CLEAR_NODE(&label->node);
return true;
diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
index 33f5eabd47f1..33fac6489077 100644
--- a/security/apparmor/policy_unpack.c
+++ b/security/apparmor/policy_unpack.c
@@ -157,7 +157,8 @@ static void do_loaddata_free(struct aa_loaddata *d)
void aa_loaddata_kref(struct kref *kref)
{
- struct aa_loaddata *d = container_of(kref, struct aa_loaddata, count);
+ struct aa_loaddata *d = container_of(kref, struct aa_loaddata,
+ count.count);
do_loaddata_free(d);
}
@@ -204,7 +205,8 @@ struct aa_loaddata *aa_loaddata_alloc(size_t size)
kfree(d);
return ERR_PTR(-ENOMEM);
}
- kref_init(&d->count);
+ kref_init(&d->count.count);
+ d->count.reftype = REF_RAWDATA;
kref_init(&d->pcount);
INIT_LIST_HEAD(&d->list);
--
2.25.1
08 Apr '26
Reuse SUBSYS for xcu and freezer to preserve KABI
Liu Kai (5):
xSched/cgroup: reuse SUBSYS for xcu and freezer to preserve KABI
xSched/cgroup: make xcu.stat invisible at root cgroup
cgroup: sync CGROUP_SUBSYS_COUNT limit with upstream to 16
xSched: enable CONFIG_CGROUP_XCU and CONFIG_XCU_SCHED_CFS in arm64/x86
defconfig
xSched: update xSched manual for xcu cmdline enable option
Documentation/scheduler/xsched.md | 6 +-
arch/arm64/configs/openeuler_defconfig | 3 +-
arch/x86/configs/openeuler_defconfig | 3 +-
include/linux/cgroup_subsys.h | 8 +-
include/linux/freezer.h | 24 ++++
kernel/cgroup/cgroup.c | 2 +-
kernel/cgroup/legacy_freezer.c | 33 ++++-
kernel/xsched/cgroup.c | 187 +++++++++++++++++++++++--
8 files changed, 238 insertions(+), 28 deletions(-)
--
2.34.1
From: John Johansen <john.johansen(a)canonical.com>
mainline inclusion
from mainline-v7.0-rc4
commit a0b7091c4de45a7325c8780e6934a894f92ac86b
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14041
CVE: CVE-2026-23410
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
There is a race condition that leads to a use-after-free situation:
because the rawdata inodes are not refcounted, an attacker can start
open()ing one of the rawdata files, and at the same time remove the
last reference to this rawdata (by removing the corresponding profile,
for example), which frees its struct aa_loaddata; as a result, when
seq_rawdata_open() is reached, i_private is a dangling pointer and
freed memory is accessed.
The rawdata inodes weren't refcounted, to avoid a circular refcount; they
were supposed to be held alive by the profile's rawdata reference. However,
during profile removal there is a window where vfs and profile destruction
race, resulting in the use-after-free.
Fix this by moving to a double refcount scheme, where the profile refcount
on rawdata is used to break the circular dependency, allowing the rawdata
to be freed once all inode references to it have been put.
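The two-counter lifetime described above can be sketched with plain integers (a toy model; the struct layout, flags, and helper names below are illustrative, not the kernel's API):

```c
#include <assert.h>

/* Toy model of the double refcount: pcount holds profile references and
 * anchors one ref on count; count holds inode/fs references. The object
 * is freed only when count drains, which cannot happen while any inode
 * still holds a ref - closing the race described in the changelog. */
struct loaddata {
    int count;      /* inode/fs refs, starts at 1 (anchor held via pcount) */
    int pcount;     /* profile refs */
    int fs_removed;
    int freed;
};

static void put_i_loaddata(struct loaddata *d)
{
    if (--d->count == 0)
        d->freed = 1;           /* final free */
}

static void put_profile_loaddata(struct loaddata *d)
{
    if (--d->pcount == 0) {
        d->fs_removed = 1;      /* remove aafs entries ... */
        put_i_loaddata(d);      /* ... then drop the anchored inode ref */
    }
}
```

Even if the last profile reference is dropped while an open file holds an inode reference, the data now survives until that reference is put, instead of leaving i_private dangling.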
Fixes: 5d5182cae401 ("apparmor: move to per loaddata files, instead of replicating in profiles")
Reported-by: Qualys Security Advisory <qsa(a)qualys.com>
Reviewed-by: Georgia Garcia <georgia.garcia(a)canonical.com>
Reviewed-by: Maxime Bélair <maxime.belair(a)canonical.com>
Reviewed-by: Cengiz Can <cengiz.can(a)canonical.com>
Tested-by: Salvatore Bonaccorso <carnil(a)debian.org>
Signed-off-by: John Johansen <john.johansen(a)canonical.com>
Conflicts:
security/apparmor/apparmorfs.c
[Commit d61c57fde819 ("apparmor: make export of raw binary profile to
userspace optional") was not merged. Context conflicts.]
security/apparmor/include/policy_unpack.h
[Commit b11e51dd7094 ("apparmor: test: make static symbols visible during
kunit testing") was not merged. Context conflicts.]
security/apparmor/policy.c
[Commit d61c57fde819 ("apparmor: make export of raw binary profile to
userspace optional") was not merged. Context conflicts.]
Signed-off-by: Yi Yang <yiyang13(a)huawei.com>
---
security/apparmor/apparmorfs.c | 35 +++++++-----
security/apparmor/include/policy_unpack.h | 70 ++++++++++++++---------
security/apparmor/policy.c | 12 ++--
security/apparmor/policy_unpack.c | 32 ++++++++---
4 files changed, 92 insertions(+), 57 deletions(-)
diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
index 0900fd07def7..e034d3346f82 100644
--- a/security/apparmor/apparmorfs.c
+++ b/security/apparmor/apparmorfs.c
@@ -77,7 +77,7 @@ static void rawdata_f_data_free(struct rawdata_f_data *private)
if (!private)
return;
- aa_put_loaddata(private->loaddata);
+ aa_put_i_loaddata(private->loaddata);
kvfree(private);
}
@@ -401,7 +401,8 @@ static struct aa_loaddata *aa_simple_write_to_buffer(const char __user *userbuf,
data->size = copy_size;
if (copy_from_user(data->data, userbuf, copy_size)) {
- aa_put_loaddata(data);
+ /* trigger free - don't need to put pcount */
+ aa_put_i_loaddata(data);
return ERR_PTR(-EFAULT);
}
@@ -429,7 +430,10 @@ static ssize_t policy_update(u32 mask, const char __user *buf, size_t size,
error = PTR_ERR(data);
if (!IS_ERR(data)) {
error = aa_replace_profiles(ns, label, mask, data);
- aa_put_loaddata(data);
+ /* put pcount, which will put count and free if no
+ * profiles referencing it.
+ */
+ aa_put_profile_loaddata(data);
}
end_section:
end_current_label_crit_section(label);
@@ -500,7 +504,7 @@ static ssize_t profile_remove(struct file *f, const char __user *buf,
if (!IS_ERR(data)) {
data->data[size] = 0;
error = aa_remove_profiles(ns, label, data->data, size);
- aa_put_loaddata(data);
+ aa_put_profile_loaddata(data);
}
out:
end_current_label_crit_section(label);
@@ -1225,18 +1229,17 @@ static const struct file_operations seq_rawdata_ ##NAME ##_fops = { \
static int seq_rawdata_open(struct inode *inode, struct file *file,
int (*show)(struct seq_file *, void *))
{
- struct aa_loaddata *data = __aa_get_loaddata(inode->i_private);
+ struct aa_loaddata *data = aa_get_i_loaddata(inode->i_private);
int error;
if (!data)
- /* lost race this ent is being reaped */
return -ENOENT;
error = single_open(file, show, data);
if (error) {
AA_BUG(file->private_data &&
((struct seq_file *)file->private_data)->private);
- aa_put_loaddata(data);
+ aa_put_i_loaddata(data);
}
return error;
@@ -1247,7 +1250,7 @@ static int seq_rawdata_release(struct inode *inode, struct file *file)
struct seq_file *seq = (struct seq_file *) file->private_data;
if (seq)
- aa_put_loaddata(seq->private);
+ aa_put_i_loaddata(seq->private);
return single_release(inode, file);
}
@@ -1366,9 +1369,8 @@ static int rawdata_open(struct inode *inode, struct file *file)
if (!policy_view_capable(NULL))
return -EACCES;
- loaddata = __aa_get_loaddata(inode->i_private);
+ loaddata = aa_get_i_loaddata(inode->i_private);
if (!loaddata)
- /* lost race: this entry is being reaped */
return -ENOENT;
private = rawdata_f_data_alloc(loaddata->size);
@@ -1393,7 +1395,7 @@ static int rawdata_open(struct inode *inode, struct file *file)
return error;
fail_private_alloc:
- aa_put_loaddata(loaddata);
+ aa_put_i_loaddata(loaddata);
return error;
}
@@ -1410,9 +1412,9 @@ static void remove_rawdata_dents(struct aa_loaddata *rawdata)
for (i = 0; i < AAFS_LOADDATA_NDENTS; i++) {
if (!IS_ERR_OR_NULL(rawdata->dents[i])) {
- /* no refcounts on i_private */
aafs_remove(rawdata->dents[i]);
rawdata->dents[i] = NULL;
+ aa_put_i_loaddata(rawdata);
}
}
}
@@ -1451,18 +1453,21 @@ int __aa_fs_create_rawdata(struct aa_ns *ns, struct aa_loaddata *rawdata)
if (IS_ERR(dir))
/* ->name freed when rawdata freed */
return PTR_ERR(dir);
+ aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_DIR] = dir;
dent = aafs_create_file("abi", S_IFREG | 0444, dir, rawdata,
&seq_rawdata_abi_fops);
if (IS_ERR(dent))
goto fail;
+ aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_ABI] = dent;
dent = aafs_create_file("revision", S_IFREG | 0444, dir, rawdata,
&seq_rawdata_revision_fops);
if (IS_ERR(dent))
goto fail;
+ aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_REVISION] = dent;
if (aa_g_hash_policy) {
@@ -1470,6 +1475,7 @@ int __aa_fs_create_rawdata(struct aa_ns *ns, struct aa_loaddata *rawdata)
rawdata, &seq_rawdata_hash_fops);
if (IS_ERR(dent))
goto fail;
+ aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_HASH] = dent;
}
@@ -1478,24 +1484,25 @@ int __aa_fs_create_rawdata(struct aa_ns *ns, struct aa_loaddata *rawdata)
&seq_rawdata_compressed_size_fops);
if (IS_ERR(dent))
goto fail;
+ aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_COMPRESSED_SIZE] = dent;
dent = aafs_create_file("raw_data", S_IFREG | 0444,
dir, rawdata, &rawdata_fops);
if (IS_ERR(dent))
goto fail;
+ aa_get_i_loaddata(rawdata);
rawdata->dents[AAFS_LOADDATA_DATA] = dent;
d_inode(dent)->i_size = rawdata->size;
rawdata->ns = aa_get_ns(ns);
list_add(&rawdata->list, &ns->rawdata_list);
- /* no refcount on inode rawdata */
return 0;
fail:
remove_rawdata_dents(rawdata);
-
+ aa_put_i_loaddata(rawdata);
return PTR_ERR(dent);
}
diff --git a/security/apparmor/include/policy_unpack.h b/security/apparmor/include/policy_unpack.h
index e0e1ca7ebc38..ff122fd156de 100644
--- a/security/apparmor/include/policy_unpack.h
+++ b/security/apparmor/include/policy_unpack.h
@@ -46,17 +46,29 @@ enum {
AAFS_LOADDATA_NDENTS /* count of entries */
};
-/*
- * struct aa_loaddata - buffer of policy raw_data set
+/* struct aa_loaddata - buffer of policy raw_data set
+ * @count: inode/filesystem refcount - use aa_get_i_loaddata()
+ * @pcount: profile refcount - use aa_get_profile_loaddata()
+ * @list: list the loaddata is on
+ * @work: used to do a delayed cleanup
+ * @dents: refs to dents created in aafs
+ * @ns: the namespace this loaddata was loaded into
+ * @name:
+ * @size: the size of the data that was loaded
+ * @compressed_size: the size of the data when it is compressed
+ * @revision: unique revision count that this data was loaded as
+ * @abi: the abi number the loaddata uses
+ * @hash: a hash of the loaddata, used to help dedup data
*
- * there is no loaddata ref for being on ns list, nor a ref from
- * d_inode(@dentry) when grab a ref from these, @ns->lock must be held
- * && __aa_get_loaddata() needs to be used, and the return value
- * checked, if NULL the loaddata is already being reaped and should be
- * considered dead.
+ * There is no loaddata ref for being on ns->rawdata_list, so
+ * @ns->lock must be held when walking the list. Dentries and
+ * inode opens hold refs on @count; profiles hold refs on @pcount.
+ * When the last @pcount drops, do_ploaddata_rmfs() removes the
+ * fs entries and drops the associated @count ref.
*/
struct aa_loaddata {
struct kref count;
+ struct kref pcount;
struct list_head list;
struct work_struct work;
struct dentry *dents[AAFS_LOADDATA_NDENTS];
@@ -78,50 +90,52 @@ struct aa_loaddata {
int aa_unpack(struct aa_loaddata *udata, struct list_head *lh, const char **ns);
/**
- * __aa_get_loaddata - get a reference count to uncounted data reference
+ * aa_get_loaddata - get a reference count from a counted data reference
* @data: reference to get a count on
*
- * Returns: pointer to reference OR NULL if race is lost and reference is
- * being repeated.
- * Requires: @data->ns->lock held, and the return code MUST be checked
- *
- * Use only from inode->i_private and @data->list found references
+ * Returns: pointer to reference
+ * Requires: @data to have a valid reference count on it. It is a bug
+ * if the race to reap can be encountered when it is used.
*/
static inline struct aa_loaddata *
-__aa_get_loaddata(struct aa_loaddata *data)
+aa_get_i_loaddata(struct aa_loaddata *data)
{
- if (data && kref_get_unless_zero(&(data->count)))
- return data;
- return NULL;
+ if (data)
+ kref_get(&(data->count));
+ return data;
}
/**
- * aa_get_loaddata - get a reference count from a counted data reference
+ * aa_get_profile_loaddata - get a profile reference count on loaddata
* @data: reference to get a count on
*
- * Returns: point to reference
- * Requires: @data to have a valid reference count on it. It is a bug
- * if the race to reap can be encountered when it is used.
+ * Returns: pointer to reference
+ * Requires: @data to have a valid reference count on it.
*/
static inline struct aa_loaddata *
-aa_get_loaddata(struct aa_loaddata *data)
+aa_get_profile_loaddata(struct aa_loaddata *data)
{
- struct aa_loaddata *tmp = __aa_get_loaddata(data);
-
- AA_BUG(data && !tmp);
-
- return tmp;
+ if (data)
+ kref_get(&(data->pcount));
+ return data;
}
void __aa_loaddata_update(struct aa_loaddata *data, long revision);
bool aa_rawdata_eq(struct aa_loaddata *l, struct aa_loaddata *r);
void aa_loaddata_kref(struct kref *kref);
+void aa_ploaddata_kref(struct kref *kref);
struct aa_loaddata *aa_loaddata_alloc(size_t size);
-static inline void aa_put_loaddata(struct aa_loaddata *data)
+static inline void aa_put_i_loaddata(struct aa_loaddata *data)
{
if (data)
kref_put(&data->count, aa_loaddata_kref);
}
+static inline void aa_put_profile_loaddata(struct aa_loaddata *data)
+{
+ if (data)
+ kref_put(&data->pcount, aa_ploaddata_kref);
+}
+
#endif /* __POLICY_INTERFACE_H */
diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
index e5f501f89803..3175094aa42b 100644
--- a/security/apparmor/policy.c
+++ b/security/apparmor/policy.c
@@ -241,7 +241,7 @@ void aa_free_profile(struct aa_profile *profile)
}
kfree_sensitive(profile->hash);
- aa_put_loaddata(profile->rawdata);
+ aa_put_profile_loaddata(profile->rawdata);
aa_label_destroy(&profile->label);
kfree_sensitive(profile);
@@ -899,7 +899,7 @@ ssize_t aa_replace_profiles(struct aa_ns *policy_ns, struct aa_label *label,
LIST_HEAD(lh);
op = mask & AA_MAY_REPLACE_POLICY ? OP_PROF_REPL : OP_PROF_LOAD;
- aa_get_loaddata(udata);
+ aa_get_profile_loaddata(udata);
/* released below */
error = aa_unpack(udata, &lh, &ns_name);
if (error)
@@ -949,10 +949,10 @@ ssize_t aa_replace_profiles(struct aa_ns *policy_ns, struct aa_label *label,
if (aa_rawdata_eq(rawdata_ent, udata)) {
struct aa_loaddata *tmp;
- tmp = __aa_get_loaddata(rawdata_ent);
+ tmp = aa_get_profile_loaddata(rawdata_ent);
/* check we didn't fail the race */
if (tmp) {
- aa_put_loaddata(udata);
+ aa_put_profile_loaddata(udata);
udata = tmp;
break;
}
@@ -962,7 +962,7 @@ ssize_t aa_replace_profiles(struct aa_ns *policy_ns, struct aa_label *label,
list_for_each_entry(ent, &lh, list) {
struct aa_policy *policy;
- ent->new->rawdata = aa_get_loaddata(udata);
+ ent->new->rawdata = aa_get_profile_loaddata(udata);
error = __lookup_replace(ns, ent->new->base.hname,
!(mask & AA_MAY_REPLACE_POLICY),
&ent->old, &info);
@@ -1076,7 +1076,7 @@ ssize_t aa_replace_profiles(struct aa_ns *policy_ns, struct aa_label *label,
out:
aa_put_ns(ns);
- aa_put_loaddata(udata);
+ aa_put_profile_loaddata(udata);
kfree(ns_name);
if (error)
diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
index 6c2a536173b5..33f5eabd47f1 100644
--- a/security/apparmor/policy_unpack.c
+++ b/security/apparmor/policy_unpack.c
@@ -147,34 +147,47 @@ bool aa_rawdata_eq(struct aa_loaddata *l, struct aa_loaddata *r)
return memcmp(l->data, r->data, r->compressed_size ?: r->size) == 0;
}
+static void do_loaddata_free(struct aa_loaddata *d)
+{
+ kfree_sensitive(d->hash);
+ kfree_sensitive(d->name);
+ kvfree(d->data);
+ kfree_sensitive(d);
+}
+
+void aa_loaddata_kref(struct kref *kref)
+{
+ struct aa_loaddata *d = container_of(kref, struct aa_loaddata, count);
+
+ do_loaddata_free(d);
+}
+
/*
* need to take the ns mutex lock which is NOT safe most places that
* put_loaddata is called, so we have to delay freeing it
*/
-static void do_loaddata_free(struct work_struct *work)
+static void do_ploaddata_rmfs(struct work_struct *work)
{
struct aa_loaddata *d = container_of(work, struct aa_loaddata, work);
struct aa_ns *ns = aa_get_ns(d->ns);
if (ns) {
mutex_lock_nested(&ns->lock, ns->level);
+ /* remove fs ref to loaddata */
__aa_fs_remove_rawdata(d);
mutex_unlock(&ns->lock);
aa_put_ns(ns);
}
-
- kfree_sensitive(d->hash);
- kfree_sensitive(d->name);
- kvfree(d->data);
- kfree_sensitive(d);
+ /* called by dropping last pcount, so drop its associated icount */
+ aa_put_i_loaddata(d);
}
-void aa_loaddata_kref(struct kref *kref)
+void aa_ploaddata_kref(struct kref *kref)
{
- struct aa_loaddata *d = container_of(kref, struct aa_loaddata, count);
+ struct aa_loaddata *d = container_of(kref, struct aa_loaddata, pcount);
if (d) {
- INIT_WORK(&d->work, do_loaddata_free);
+ INIT_WORK(&d->work, do_ploaddata_rmfs);
schedule_work(&d->work);
}
}
@@ -192,6 +205,7 @@ struct aa_loaddata *aa_loaddata_alloc(size_t size)
return ERR_PTR(-ENOMEM);
}
kref_init(&d->count);
+ kref_init(&d->pcount);
INIT_LIST_HEAD(&d->list);
return d;
--
2.25.1
[PATCH OLK-5.10] PM: runtime: Fix a race condition related to device removal
by Lin Ruifeng 08 Apr '26
From: Bart Van Assche <bvanassche(a)acm.org>
stable inclusion
from stable-v6.6.130
commit 39f2d86f2ddde8d1beda05732f30c7cd945e0b5a
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14092
CVE: CVE-2026-23452
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 29ab768277617452d88c0607c9299cdc63b6e9ff ]
The following code in pm_runtime_work() may dereference the dev->parent
pointer after the parent device has been freed:
/* Maybe the parent is now able to suspend. */
if (parent && !parent->power.ignore_children) {
spin_unlock(&dev->power.lock);
spin_lock(&parent->power.lock);
rpm_idle(parent, RPM_ASYNC);
spin_unlock(&parent->power.lock);
spin_lock(&dev->power.lock);
}
Fix this by inserting a flush_work() call in pm_runtime_remove().
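The removal-vs-work race can be modeled in a few lines (a sketch with invented flags, not the driver-core API): the queued runtime-PM work may still dereference the parent after it is freed unless remove flushes it first.

```c
#include <assert.h>

/* Toy model: work_pending stands for the queued pm_runtime_work() item,
 * parent_freed marks the parent device's memory being released. */
struct model {
    int work_pending;
    int parent_freed;
    int uaf;    /* set if the work touches the freed parent */
};

static void run_pending_work(struct model *m)
{
    if (!m->work_pending)
        return;
    if (m->parent_freed)
        m->uaf = 1;     /* rpm_idle(parent, ...) on freed memory */
    m->work_pending = 0;
}

/* pm_runtime_remove() with the fix: drain the work before teardown,
 * standing in for flush_work(&dev->power.work) */
static void pm_runtime_remove_fixed(struct model *m)
{
    run_pending_work(m);
}
```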
Without this patch blktest block/001 triggers the following complaint
sporadically:
BUG: KASAN: slab-use-after-free in lock_acquire+0x70/0x160
Read of size 1 at addr ffff88812bef7198 by task kworker/u553:1/3081
Workqueue: pm pm_runtime_work
Call Trace:
<TASK>
dump_stack_lvl+0x61/0x80
print_address_description.constprop.0+0x8b/0x310
print_report+0xfd/0x1d7
kasan_report+0xd8/0x1d0
__kasan_check_byte+0x42/0x60
lock_acquire.part.0+0x38/0x230
lock_acquire+0x70/0x160
_raw_spin_lock+0x36/0x50
rpm_suspend+0xc6a/0xfe0
rpm_idle+0x578/0x770
pm_runtime_work+0xee/0x120
process_one_work+0xde3/0x1410
worker_thread+0x5eb/0xfe0
kthread+0x37b/0x480
ret_from_fork+0x6cb/0x920
ret_from_fork_asm+0x11/0x20
</TASK>
Allocated by task 4314:
kasan_save_stack+0x2a/0x50
kasan_save_track+0x18/0x40
kasan_save_alloc_info+0x3d/0x50
__kasan_kmalloc+0xa0/0xb0
__kmalloc_noprof+0x311/0x990
scsi_alloc_target+0x122/0xb60 [scsi_mod]
__scsi_scan_target+0x101/0x460 [scsi_mod]
scsi_scan_channel+0x179/0x1c0 [scsi_mod]
scsi_scan_host_selected+0x259/0x2d0 [scsi_mod]
store_scan+0x2d2/0x390 [scsi_mod]
dev_attr_store+0x43/0x80
sysfs_kf_write+0xde/0x140
kernfs_fop_write_iter+0x3ef/0x670
vfs_write+0x506/0x1470
ksys_write+0xfd/0x230
__x64_sys_write+0x76/0xc0
x64_sys_call+0x213/0x1810
do_syscall_64+0xee/0xfc0
entry_SYSCALL_64_after_hwframe+0x4b/0x53
Freed by task 4314:
kasan_save_stack+0x2a/0x50
kasan_save_track+0x18/0x40
kasan_save_free_info+0x3f/0x50
__kasan_slab_free+0x67/0x80
kfree+0x225/0x6c0
scsi_target_dev_release+0x3d/0x60 [scsi_mod]
device_release+0xa3/0x220
kobject_cleanup+0x105/0x3a0
kobject_put+0x72/0xd0
put_device+0x17/0x20
scsi_device_dev_release+0xacf/0x12c0 [scsi_mod]
device_release+0xa3/0x220
kobject_cleanup+0x105/0x3a0
kobject_put+0x72/0xd0
put_device+0x17/0x20
scsi_device_put+0x7f/0xc0 [scsi_mod]
sdev_store_delete+0xa5/0x120 [scsi_mod]
dev_attr_store+0x43/0x80
sysfs_kf_write+0xde/0x140
kernfs_fop_write_iter+0x3ef/0x670
vfs_write+0x506/0x1470
ksys_write+0xfd/0x230
__x64_sys_write+0x76/0xc0
x64_sys_call+0x213/0x1810
Reported-by: Ming Lei <ming.lei(a)redhat.com>
Closes: https://lore.kernel.org/all/ZxdNvLNI8QaOfD2d@fedora/
Reported-by: syzbot+6c905ab800f20cf4086c(a)syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68c13942.050a0220.2ff435.000b.GAE@google.com/
Fixes: 5e928f77a09a ("PM: Introduce core framework for run-time PM of I/O devices (rev. 17)")
Signed-off-by: Bart Van Assche <bvanassche(a)acm.org>
Link: https://patch.msgid.link/20260312182720.2776083-1-bvanassche@acm.org
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Lin Ruifeng <linruifeng4(a)huawei.com>
---
drivers/base/power/runtime.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index f5c9e6629f0c..a174ca364680 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1725,6 +1725,7 @@ void pm_runtime_reinit(struct device *dev)
void pm_runtime_remove(struct device *dev)
{
__pm_runtime_disable(dev, false);
+ flush_work(&dev->power.work);
pm_runtime_reinit(dev);
}
--
2.43.0
[PATCH OLK-6.6] PM: runtime: Fix a race condition related to device removal
by Lin Ruifeng 08 Apr '26
From: Bart Van Assche <bvanassche(a)acm.org>
stable inclusion
from stable-v6.6.130
commit 39f2d86f2ddde8d1beda05732f30c7cd945e0b5a
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14092
CVE: CVE-2026-23452
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 29ab768277617452d88c0607c9299cdc63b6e9ff ]
The following code in pm_runtime_work() may dereference the dev->parent
pointer after the parent device has been freed:
/* Maybe the parent is now able to suspend. */
if (parent && !parent->power.ignore_children) {
spin_unlock(&dev->power.lock);
spin_lock(&parent->power.lock);
rpm_idle(parent, RPM_ASYNC);
spin_unlock(&parent->power.lock);
spin_lock(&dev->power.lock);
}
Fix this by inserting a flush_work() call in pm_runtime_remove().
Without this patch blktest block/001 triggers the following complaint
sporadically:
BUG: KASAN: slab-use-after-free in lock_acquire+0x70/0x160
Read of size 1 at addr ffff88812bef7198 by task kworker/u553:1/3081
Workqueue: pm pm_runtime_work
Call Trace:
<TASK>
dump_stack_lvl+0x61/0x80
print_address_description.constprop.0+0x8b/0x310
print_report+0xfd/0x1d7
kasan_report+0xd8/0x1d0
__kasan_check_byte+0x42/0x60
lock_acquire.part.0+0x38/0x230
lock_acquire+0x70/0x160
_raw_spin_lock+0x36/0x50
rpm_suspend+0xc6a/0xfe0
rpm_idle+0x578/0x770
pm_runtime_work+0xee/0x120
process_one_work+0xde3/0x1410
worker_thread+0x5eb/0xfe0
kthread+0x37b/0x480
ret_from_fork+0x6cb/0x920
ret_from_fork_asm+0x11/0x20
</TASK>
Allocated by task 4314:
kasan_save_stack+0x2a/0x50
kasan_save_track+0x18/0x40
kasan_save_alloc_info+0x3d/0x50
__kasan_kmalloc+0xa0/0xb0
__kmalloc_noprof+0x311/0x990
scsi_alloc_target+0x122/0xb60 [scsi_mod]
__scsi_scan_target+0x101/0x460 [scsi_mod]
scsi_scan_channel+0x179/0x1c0 [scsi_mod]
scsi_scan_host_selected+0x259/0x2d0 [scsi_mod]
store_scan+0x2d2/0x390 [scsi_mod]
dev_attr_store+0x43/0x80
sysfs_kf_write+0xde/0x140
kernfs_fop_write_iter+0x3ef/0x670
vfs_write+0x506/0x1470
ksys_write+0xfd/0x230
__x64_sys_write+0x76/0xc0
x64_sys_call+0x213/0x1810
do_syscall_64+0xee/0xfc0
entry_SYSCALL_64_after_hwframe+0x4b/0x53
Freed by task 4314:
kasan_save_stack+0x2a/0x50
kasan_save_track+0x18/0x40
kasan_save_free_info+0x3f/0x50
__kasan_slab_free+0x67/0x80
kfree+0x225/0x6c0
scsi_target_dev_release+0x3d/0x60 [scsi_mod]
device_release+0xa3/0x220
kobject_cleanup+0x105/0x3a0
kobject_put+0x72/0xd0
put_device+0x17/0x20
scsi_device_dev_release+0xacf/0x12c0 [scsi_mod]
device_release+0xa3/0x220
kobject_cleanup+0x105/0x3a0
kobject_put+0x72/0xd0
put_device+0x17/0x20
scsi_device_put+0x7f/0xc0 [scsi_mod]
sdev_store_delete+0xa5/0x120 [scsi_mod]
dev_attr_store+0x43/0x80
sysfs_kf_write+0xde/0x140
kernfs_fop_write_iter+0x3ef/0x670
vfs_write+0x506/0x1470
ksys_write+0xfd/0x230
__x64_sys_write+0x76/0xc0
x64_sys_call+0x213/0x1810
Reported-by: Ming Lei <ming.lei(a)redhat.com>
Closes: https://lore.kernel.org/all/ZxdNvLNI8QaOfD2d@fedora/
Reported-by: syzbot+6c905ab800f20cf4086c(a)syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68c13942.050a0220.2ff435.000b.GAE@google.com/
Fixes: 5e928f77a09a ("PM: Introduce core framework for run-time PM of I/O devices (rev. 17)")
Signed-off-by: Bart Van Assche <bvanassche(a)acm.org>
Link: https://patch.msgid.link/20260312182720.2776083-1-bvanassche@acm.org
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Lin Ruifeng <linruifeng4(a)huawei.com>
---
drivers/base/power/runtime.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index b28fb11cd6db..2766bdc9158a 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1854,6 +1854,7 @@ void pm_runtime_reinit(struct device *dev)
void pm_runtime_remove(struct device *dev)
{
__pm_runtime_disable(dev, false);
+ flush_work(&dev->power.work);
pm_runtime_reinit(dev);
}
--
2.43.0
[PATCH OLK-6.6] spi: fix use-after-free on controller registration failure
by Lin Ruifeng 08 Apr '26
From: Johan Hovold <johan(a)kernel.org>
stable inclusion
from stable-v6.6.130
commit 6bbd385b30c7fb6c7ee0669e9ada91490938c051
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14115
CVE: CVE-2026-31389
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 8634e05b08ead636e926022f4a98416e13440df9 upstream.
Make sure to deregister from driver core also in the unlikely event that
per-cpu statistics allocation fails during controller registration to
avoid use-after-free (of driver resources) and unclocked register
accesses.
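The patch centralizes the unwind so every failure after the device has been added falls through device_del() before the bus ID is released. A minimal sketch of the shared-label idiom (stand-in names and flags; the real function is spi_register_controller()):

```c
#include <assert.h>

/* Error unwind must mirror setup order: anything that fails after the
 * device is added has to go through device_del() before freeing IDs. */
static int register_controller(int queue_ok, int stats_ok,
                               int *deleted, int *id_freed)
{
    /* device_add() already succeeded at this point */
    if (!queue_ok)
        goto del_ctlr;
    if (!stats_ok)      /* previously returned without device_del() */
        goto del_ctlr;
    return 0;

del_ctlr:
    *deleted = 1;       /* device_del(&ctlr->dev) */
    *id_freed = 1;      /* falls through to idr_remove(...) */
    return -12;         /* -ENOMEM */
}
```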
Fixes: 6598b91b5ac3 ("spi: spi.c: Convert statistics to per-cpu u64_stats_t")
Cc: stable(a)vger.kernel.org # 6.0
Cc: David Jander <david(a)protonic.nl>
Signed-off-by: Johan Hovold <johan(a)kernel.org>
Link: https://patch.msgid.link/20260312151817.32100-2-johan@kernel.org
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Lin Ruifeng <linruifeng4(a)huawei.com>
---
drivers/spi/spi.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index cd6edfa92101..66f694457a8b 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -3209,10 +3209,8 @@ int spi_register_controller(struct spi_controller *ctlr)
dev_info(dev, "controller is unqueued, this is deprecated\n");
} else if (ctlr->transfer_one || ctlr->transfer_one_message) {
status = spi_controller_initialize_queue(ctlr);
- if (status) {
- device_del(&ctlr->dev);
- goto free_bus_id;
- }
+ if (status)
+ goto del_ctrl;
}
/* Add statistics */
ctlr->pcpu_statistics = spi_alloc_pcpu_stats(dev);
@@ -3235,6 +3233,8 @@ int spi_register_controller(struct spi_controller *ctlr)
destroy_queue:
spi_destroy_queue(ctlr);
+del_ctrl:
+ device_del(&ctlr->dev);
free_bus_id:
mutex_lock(&board_lock);
idr_remove(&spi_master_idr, ctlr->bus_num);
--
2.43.0
Kuniyuki Iwashima (2):
tcp: Clear tcp_sk(sk)->fastopen_rsk in tcp_disconnect().
tcp: Don't call reqsk_fastopen_remove() in tcp_conn_request().
net/ipv4/tcp.c | 5 +++++
net/ipv4/tcp_input.c | 1 -
2 files changed, 5 insertions(+), 1 deletion(-)
--
2.9.5
[PATCH OLK-6.6] icmp: fix NULL pointer dereference in icmp_tag_validation()
by Zhang Changzhong 08 Apr '26
From: Weiming Shi <bestswngs(a)gmail.com>
mainline inclusion
from mainline-v7.0-rc5
commit 614aefe56af8e13331e50220c936fc0689cf5675
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14029
CVE: CVE-2026-23398
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
icmp_tag_validation() unconditionally dereferences the result of
rcu_dereference(inet_protos[proto]) without checking for NULL.
The inet_protos[] array is sparse -- only about 15 of 256 protocol
numbers have registered handlers. When ip_no_pmtu_disc is set to 3
(hardened PMTU mode) and the kernel receives an ICMP Fragmentation
Needed error with a quoted inner IP header containing an unregistered
protocol number, the NULL dereference causes a kernel panic in
softirq context.
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000002: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000010-0x0000000000000017]
RIP: 0010:icmp_unreach (net/ipv4/icmp.c:1085 net/ipv4/icmp.c:1143)
Call Trace:
<IRQ>
icmp_rcv (net/ipv4/icmp.c:1527)
ip_protocol_deliver_rcu (net/ipv4/ip_input.c:207)
ip_local_deliver_finish (net/ipv4/ip_input.c:242)
ip_local_deliver (net/ipv4/ip_input.c:262)
ip_rcv (net/ipv4/ip_input.c:573)
__netif_receive_skb_one_core (net/core/dev.c:6164)
process_backlog (net/core/dev.c:6628)
handle_softirqs (kernel/softirq.c:561)
</IRQ>
Add a NULL check before accessing icmp_strict_tag_validation. If the
protocol has no registered handler, return false since it cannot
perform strict tag validation.
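The sparse-table hazard and the fix are easy to reproduce in miniature (a simplified sketch: the struct name matches the kernel's net_protocol, but RCU and the rest of the lookup are elided):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct net_protocol {
    bool icmp_strict_tag_validation;
};

/* Like inet_protos[]: 256 slots, only a handful non-NULL */
static const struct net_protocol *protos[256];

static bool tag_validation(int proto)
{
    const struct net_protocol *ipprot = protos[proto & 0xff];

    /* unregistered protocol: cannot do strict validation */
    return ipprot ? ipprot->icmp_strict_tag_validation : false;
}
```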
Fixes: 8ed1dc44d3e9 ("ipv4: introduce hardened ip_no_pmtu_disc mode")
Reported-by: Xiang Mei <xmei5(a)asu.edu>
Signed-off-by: Weiming Shi <bestswngs(a)gmail.com>
Link: https://patch.msgid.link/20260318130558.1050247-4-bestswngs@gmail.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Conflicts:
net/ipv4/icmp.c
[context conflicts]
Signed-off-by: Zhang Changzhong <zhangchangzhong(a)huawei.com>
---
net/ipv4/icmp.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
index aed99f8..ca4e926 100644
--- a/net/ipv4/icmp.c
+++ b/net/ipv4/icmp.c
@@ -855,10 +855,12 @@ static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
static bool icmp_tag_validation(int proto)
{
+ const struct net_protocol *ipprot;
bool ok;
rcu_read_lock();
- ok = rcu_dereference(inet_protos[proto])->icmp_strict_tag_validation;
+ ipprot = rcu_dereference(inet_protos[proto]);
+ ok = ipprot ? ipprot->icmp_strict_tag_validation : false;
rcu_read_unlock();
return ok;
}
--
2.9.5
From: Alexei Starovoitov <ast(a)kernel.org>
mainline inclusion
from mainline-v7.0-rc7
commit a8502a79e832b861e99218cbd2d8f4312d62e225
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8900
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
In case rold->range == BEYOND_PKT_END && rcur->range == N, regsafe()
may return true, which can leave a current state with a valid packet
range unexplored. Fix the bug.
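The verifier encodes AT_PKT_END and BEYOND_PKT_END as negative sentinels in the same field that normally holds a byte count, so a plain `>` comparison mixes the two encodings. A sketch of the corrected predicate (returns true when pruning is safe; sentinel values follow the verifier's definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Sentinels used in bpf_reg_state::range (negative on purpose) */
#define AT_PKT_END      (-1)
#define BEYOND_PKT_END  (-2)

/* true if the old verified range also covers the current one */
static bool range_within(int rold, int rcur)
{
    if (rold < 0 || rcur < 0)
        return rold == rcur;    /* special markers must match exactly */
    return rold <= rcur;        /* old state proved at most rold bytes */
}
```

Before the fix, BEYOND_PKT_END (-2) compared as "not greater than" any ordinary range, so the state was wrongly considered safe.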
Fixes: 6d94e741a8ff ("bpf: Support for pointers beyond pkt_end.")
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Signed-off-by: Andrii Nakryiko <andrii(a)kernel.org>
Reviewed-by: Daniel Borkmann <daniel(a)iogearbox.net>
Reviewed-by: Amery Hung <ameryhung(a)gmail.com>
Acked-by: Eduard Zingerman <eddyz87(a)gmail.com>
Link: https://lore.kernel.org/bpf/20260331204228.26726-1-alexei.starovoitov@gmail…
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
kernel/bpf/verifier.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9ff0704cef3d..e63d63ac8a36 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -16307,8 +16307,13 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
* since someone could have accessed through (ptr - k), or
* even done ptr -= k in a register, to get a safe access.
*/
- if (rold->range > rcur->range)
+ if (rold->range < 0 || rcur->range < 0) {
+ /* special case for [BEYOND|AT]_PKT_END */
+ if (rold->range != rcur->range)
+ return false;
+ } else if (rold->range > rcur->range) {
return false;
+ }
/* If the offsets don't match, we can't trust our alignment;
* nor can we be sure that we won't fall out of range.
*/
--
2.34.1
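The compare above can be modeled with a small standalone sketch (an editor's illustration, not verifier code: the sentinel values and function name are assumed; only the shape of the check mirrors the patch):

```c
/* Sketch of the range-safety compare. Negative ranges stand in for the
 * special [BEYOND|AT]_PKT_END markers; the concrete sentinel values
 * here are illustrative, not the verifier's actual encoding. */
enum { BEYOND_PKT_END = -1, AT_PKT_END = -2 };

/* Returns 1 when the old (already verified) range covers the current
 * one; special markers must now match exactly instead of being
 * compared numerically. */
int range_safe(int rold_range, int rcur_range)
{
    if (rold_range < 0 || rcur_range < 0) {
        /* special case for [BEYOND|AT]_PKT_END */
        return rold_range == rcur_range;
    }
    return rold_range <= rcur_range;
}
```

Before the fix, a numeric compare could treat a BEYOND_PKT_END marker as covering an ordinary range; the exact-match branch closes that hole.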
[PATCH OLK-6.6 v7 0/2] kvm: arm64: Transition from CPU Type to MIDR Register for Virtualization Feature Detection
by liqiqi 08 Apr '26
Currently, there are two methods for determining whether a chip supports
specific virtualization features:
1. Reading the chip's CPU type from BIOS
2. Reading the value of the MIDR register
The issue with the first method is that each time a new chip is introduced,
the new CPU type must be defined, which leads to poor code portability and
maintainability.
Therefore, the second method has been adopted to replace the first. This
approach eliminates the dependency on CPU type by using the MIDR register.
liqiqi (2):
kvm: arm64: Add MIDR definitions and use MIDR to determine whether
features are supported
kvm: arm64: Remove cpu_type definition and its related interfaces
arch/arm64/include/asm/cache.h | 2 +-
arch/arm64/include/asm/cputype.h | 8 +-
arch/arm64/kernel/cpu_errata.c | 4 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/proton-pack.c | 4 +-
arch/arm64/kvm/arm.c | 1 -
arch/arm64/kvm/hisilicon/hisi_virt.c | 110 +++--------------------
arch/arm64/kvm/hisilicon/hisi_virt.h | 12 ---
drivers/perf/hisilicon/hisi_uncore_pmu.c | 2 +-
tools/arch/arm64/include/asm/cputype.h | 4 +-
10 files changed, 25 insertions(+), 124 deletions(-)
--
2.43.0
Fix CVE-2026-23378
Eric Dumazet (1):
net/sched: act_ife: avoid possible NULL deref
Jamal Hadi Salim (1):
net/sched: act_ife: Fix metalist update behavior
include/net/tc_act/tc_ife.h | 4 +-
net/sched/act_ife.c | 99 ++++++++++++++++++-------------------
2 files changed, 49 insertions(+), 54 deletions(-)
--
2.25.1
07 Apr '26
From: Ariel Silver <arielsilver77(a)gmail.com>
stable inclusion
from stable-v6.6.130
commit 1a6da3dbb9985d00743073a1cc1f96e59f5abc30
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14132
CVE: CVE-2026-31405
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 24d87712727a5017ad142d63940589a36cd25647 upstream.
The ule_mandatory_ext_handlers[] and ule_optional_ext_handlers[] tables
in handle_one_ule_extension() are declared with 255 elements (valid
indices 0-254), but the index htype is derived from network-controlled
data as (ule_sndu_type & 0x00FF), giving a range of 0-255. When
htype equals 255, an out-of-bounds read occurs on the function pointer
table, and the OOB value may be called as a function pointer.
Add a bounds check on htype against the array size before either table
is accessed. Out-of-range values now cause the SNDU to be discarded.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Reported-by: Ariel Silver <arielsilver77(a)gmail.com>
Signed-off-by: Ariel Silver <arielsilver77(a)gmail.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Liu Kai <liukai284(a)huawei.com>
---
drivers/media/dvb-core/dvb_net.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/media/dvb-core/dvb_net.c b/drivers/media/dvb-core/dvb_net.c
index 8bb8dd34c223..a2159b2bc176 100644
--- a/drivers/media/dvb-core/dvb_net.c
+++ b/drivers/media/dvb-core/dvb_net.c
@@ -228,6 +228,9 @@ static int handle_one_ule_extension( struct dvb_net_priv *p )
unsigned char hlen = (p->ule_sndu_type & 0x0700) >> 8;
unsigned char htype = p->ule_sndu_type & 0x00FF;
+ if (htype >= ARRAY_SIZE(ule_mandatory_ext_handlers))
+ return -1;
+
/* Discriminate mandatory and optional extension headers. */
if (hlen == 0) {
/* Mandatory extension header */
--
2.34.1
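The off-by-one described above can be reproduced in a userspace sketch (an editor's illustration, not the kernel code: table size and mask mirror the commit message, names are simplified, and the range initializer is a GCC extension as in kernel style):

```c
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Mirrors the kernel tables: 255 entries, so valid indices are 0-254. */
typedef int (*ext_handler)(void);
static int dummy_handler(void) { return 0; }
static ext_handler handlers[255] = { [0 ... 254] = dummy_handler };

/* htype comes from a 16-bit, network-controlled field masked with
 * 0x00FF, so it ranges over 0-255 -- one past the last valid index.
 * The added bounds check turns the out-of-range case into a discard. */
int handle_extension(unsigned short ule_sndu_type)
{
    unsigned char htype = ule_sndu_type & 0x00FF;

    if (htype >= ARRAY_SIZE(handlers))
        return -1;              /* discard the SNDU */

    return handlers[htype]();
}
```

Without the check, `htype == 255` reads one function pointer past the table and may call whatever value is found there.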
07 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8882
--------------------------------
wake_wide() uses sd_llc_size as the spreading threshold to detect wide
waker/wakee relationships and to disable wake_affine() for those cases.
On SMT systems, sd_llc_size counts logical CPUs rather than physical
cores. This inflates the wake_wide() threshold, allowing wake_affine()
to pack more tasks into one LLC domain than the actual compute capacity
of its physical cores can sustain. The resulting SMT interference may
cost more than the cache-locality benefit wake_affine() intends to gain.
Scale the factor by the SMT width of the current CPU so that it
approximates the number of independent physical cores in the LLC domain,
making wake_wide() more likely to kick in before SMT interference
becomes significant. On non-SMT systems the SMT width is 1 and behaviour
is unchanged.
Fixes: 63b0e9edceec ("sched/fair: Beef up wake_wide()")
Signed-off-by: Zhang Qiao <zhangqiao22(a)huawei.com>
---
kernel/sched/fair.c | 5 +++++
kernel/sched/features.h | 2 ++
2 files changed, 7 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ad30bb800961..4100998e18cd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7850,6 +7850,11 @@ static int wake_wide(struct task_struct *p)
unsigned int slave = p->wakee_flips;
int factor = __this_cpu_read(sd_llc_size);
+ /* Scale factor to physical-core count to account for SMT interference. */
+ if (sched_feat(WA_SMT))
+ factor = DIV_ROUND_UP(factor,
+ cpumask_weight(cpu_smt_mask(smp_processor_id())));
+
if (master < slave)
swap(master, slave);
if (slave < factor || master < slave * factor)
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 24a0c853a8a0..c9ad8e72ecd0 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -128,3 +128,5 @@ SCHED_FEAT(SOFT_DOMAIN, false)
#ifdef CONFIG_SCHED_SOFT_QUOTA
SCHED_FEAT(SOFT_QUOTA, false)
#endif
+
+SCHED_FEAT(WA_SMT, false)
--
2.18.0
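The scaling can be illustrated with a small standalone model (an editor's sketch with illustrative CPU counts; returning 1 means the relationship is considered wide and the affine wakeup is skipped):

```c
/* DIV_ROUND_UP as in the kernel. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Model of the wake_wide() heuristic. master/slave are the waker's and
 * wakee's flip counts; llc_cpus is sd_llc_size (logical CPUs in the
 * LLC) and smt_width the number of SMT siblings per core. */
int wake_wide_sketch(unsigned int master, unsigned int slave,
                     int llc_cpus, int smt_width)
{
    /* Scale the LLC size down to an approximate physical-core count. */
    int factor = DIV_ROUND_UP(llc_cpus, smt_width);

    if (master < slave) {
        unsigned int tmp = master;
        master = slave;
        slave = tmp;
    }
    if (slave < factor || master < slave * factor)
        return 0;   /* not wide: allow wake_affine() */
    return 1;       /* wide: spread instead of packing one LLC */
}
```

With 64 logical CPUs and SMT-2, the threshold drops from 64 to 32, so wake_wide() fires earlier and fewer tasks get packed onto SMT siblings of the same cores.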
Fix riscv check-build warning.
Björn Töpel (1):
riscv: Replace function-like macro by static inline function
Tengda Wu (1):
Revert "riscv: stacktrace: Disable KASAN checks for non-current tasks"
arch/riscv/include/asm/cacheflush.h | 15 ++++++++++-----
arch/riscv/kernel/stacktrace.c | 16 ----------------
2 files changed, 10 insertions(+), 21 deletions(-)
--
2.34.1
[PATCH OLK-5.10 0/3] Xen privcmd driver: restrict to target domain at boot when not in dom0 to fix secure boot issue
by Zhang Yuwei 07 Apr '26
Patch 1: Restrict the privcmd driver in unprivileged domU
to only allow hypercalls targeting a specific domain obtained
from Xenstore, preventing secure boot bypass.
Patch 2: Unregister the xenstore notifier on module exit
to clean up resources added by patch 1.
Patch 3: Add an unrestricted boot parameter to optionally allow
all hypercalls when secure boot is not active, guarded by a new
lockdown reason.
GuoHan Zhao (1):
xen/privcmd: unregister xenstore notifier on module exit
Juergen Gross (2):
xen/privcmd: restrict usage in unprivileged domU
xen/privcmd: add boot control for restricted usage in domU
drivers/xen/privcmd.c | 78 +++++++++++++++++++++++++++++++++++++---
include/linux/security.h | 1 +
security/security.c | 1 +
3 files changed, 75 insertions(+), 5 deletions(-)
--
2.22.0
[PATCH OLK-6.6 0/3] Xen privcmd driver: restrict to target domain at boot when not in dom0 to fix secure boot issue
by Zhang Yuwei 07 Apr '26
Patch 1: Restrict the privcmd driver in unprivileged domU
to only allow hypercalls targeting a specific domain obtained
from Xenstore, preventing secure boot bypass.
Patch 2: Unregister the xenstore notifier on module exit
to clean up resources added by patch 1.
Patch 3: Add an unrestricted boot parameter to optionally allow
all hypercalls when secure boot is not active, guarded by a new
lockdown reason.
GuoHan Zhao (1):
xen/privcmd: unregister xenstore notifier on module exit
Juergen Gross (2):
xen/privcmd: restrict usage in unprivileged domU
xen/privcmd: add boot control for restricted usage in domU
drivers/xen/privcmd.c | 78 +++++++++++++++++++++++++++++++++++++---
include/linux/security.h | 1 +
security/security.c | 1 +
3 files changed, 75 insertions(+), 5 deletions(-)
--
2.22.0
[PATCH OLK-6.6] apparmor: replace recursive profile removal with iterative approach
by Cui GaoSheng 07 Apr '26
From: Massimiliano Pellizzer <massimiliano.pellizzer(a)canonical.com>
stable inclusion
from stable-v6.6.130
commit 33959a491e9fd557abfa5fce5ae4637d400915d3
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14038
CVE: CVE-2026-23404
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit ab09264660f9de5d05d1ef4e225aa447c63a8747 upstream.
The profile removal code uses recursion when removing nested profiles,
which can lead to kernel stack exhaustion and system crashes.
Reproducer:
$ pf='a'; for ((i=0; i<1024; i++)); do
echo -e "profile $pf { \n }" | apparmor_parser -K -a;
pf="$pf//x";
done
$ echo -n a > /sys/kernel/security/apparmor/.remove
Replace the recursive __aa_profile_list_release() approach with an
iterative approach in __remove_profile(). The function repeatedly
finds and removes leaf profiles until the entire subtree is removed,
maintaining the same removal semantic without recursion.
Fixes: c88d4c7b049e ("AppArmor: core policy routines")
Reported-by: Qualys Security Advisory <qsa(a)qualys.com>
Tested-by: Salvatore Bonaccorso <carnil(a)debian.org>
Reviewed-by: Georgia Garcia <georgia.garcia(a)canonical.com>
Reviewed-by: Cengiz Can <cengiz.can(a)canonical.com>
Signed-off-by: Massimiliano Pellizzer <massimiliano.pellizzer(a)canonical.com>
Signed-off-by: John Johansen <john.johansen(a)canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1(a)huawei.com>
---
security/apparmor/policy.c | 30 +++++++++++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)
diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
index 83693b8d18d4..2cbe89e135fb 100644
--- a/security/apparmor/policy.c
+++ b/security/apparmor/policy.c
@@ -147,19 +147,43 @@ static void __list_remove_profile(struct aa_profile *profile)
}
/**
- * __remove_profile - remove old profile, and children
- * @profile: profile to be replaced (NOT NULL)
+ * __remove_profile - remove profile, and children
+ * @profile: profile to be removed (NOT NULL)
*
* Requires: namespace list lock be held, or list not be shared
*/
static void __remove_profile(struct aa_profile *profile)
{
+ struct aa_profile *curr, *to_remove;
+
AA_BUG(!profile);
AA_BUG(!profile->ns);
AA_BUG(!mutex_is_locked(&profile->ns->lock));
/* release any children lists first */
- __aa_profile_list_release(&profile->base.profiles);
+ if (!list_empty(&profile->base.profiles)) {
+ curr = list_first_entry(&profile->base.profiles, struct aa_profile, base.list);
+
+ while (curr != profile) {
+
+ while (!list_empty(&curr->base.profiles))
+ curr = list_first_entry(&curr->base.profiles,
+ struct aa_profile, base.list);
+
+ to_remove = curr;
+ if (!list_is_last(&to_remove->base.list,
+ &aa_deref_parent(curr)->base.profiles))
+ curr = list_next_entry(to_remove, base.list);
+ else
+ curr = aa_deref_parent(curr);
+
+ /* released by free_profile */
+ aa_label_remove(&to_remove->label);
+ __aafs_profile_rmdir(to_remove);
+ __list_remove_profile(to_remove);
+ }
+ }
+
/* released by free_profile */
aa_label_remove(&profile->label);
__aafs_profile_rmdir(profile);
--
2.34.1
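The iterative leaf-first removal can be sketched on a toy tree (an editor's model with an assumed node layout; the kernel version walks `base.profiles` lists, but the loop has the same shape):

```c
#include <stdlib.h>

/* Toy n-ary tree standing in for nested profiles. */
struct node {
    struct node *parent;
    struct node *first_child;
    struct node *next_sibling;
};

/* Iterative leaf-first removal: descend to a leaf, free it, then step
 * to its next sibling (or back up to the parent). Stack usage stays
 * constant no matter how deeply the tree is nested. */
int free_subtree(struct node *root)          /* returns nodes freed */
{
    int freed = 0;
    struct node *curr = root->first_child;

    while (curr && curr != root) {
        struct node *victim;

        while (curr->first_child)
            curr = curr->first_child;

        victim = curr;
        curr = victim->next_sibling ? victim->next_sibling
                                    : victim->parent;

        /* detach before freeing */
        if (victim->parent->first_child == victim)
            victim->parent->first_child = victim->next_sibling;
        free(victim);
        freed++;
    }
    free(root);
    return freed + 1;
}

/* Build a linear chain of the given depth under a root, then free it.
 * With the recursive approach, a large depth exhausts the stack; the
 * iterative loop handles it in constant stack space. */
int demo_deep_chain(int depth)
{
    struct node *root = calloc(1, sizeof(*root));
    struct node *p = root;

    for (int i = 0; i < depth; i++) {
        struct node *c = calloc(1, sizeof(*c));
        c->parent = p;
        p->first_child = c;
        p = c;
    }
    return free_subtree(root);
}
```

A chain of 100000 nodes, comparable to the reproducer's 1024 nested profiles scaled up, frees cleanly without recursion.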
[PATCH OLK-6.6] apparmor: fix: limit the number of levels of policy namespaces
by Cui GaoSheng 07 Apr '26
From: John Johansen <john.johansen(a)canonical.com>
stable inclusion
from stable-v6.6.130
commit 3f8699b3ee0c04b4b9bc27b82cd89a40e81e1d2e
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14047
CVE: CVE-2026-23405
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 306039414932c80f8420695a24d4fe10c84ccfb2 upstream.
Currently the number of policy namespaces is not bounded, relying on
the user namespace limit. However, policy namespaces aren't strictly
tied to user namespaces, and it is possible to create and nest them
arbitrarily deep, which can be used to exhaust system resources.
Hard-cap policy namespaces to the same depth as user namespaces.
Fixes: c88d4c7b049e8 ("AppArmor: core policy routines")
Reported-by: Qualys Security Advisory <qsa(a)qualys.com>
Reviewed-by: Ryan Lee <ryan.lee(a)canonical.com>
Reviewed-by: Cengiz Can <cengiz.can(a)canonical.com>
Signed-off-by: John Johansen <john.johansen(a)canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1(a)huawei.com>
---
security/apparmor/include/policy_ns.h | 2 ++
security/apparmor/policy_ns.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/security/apparmor/include/policy_ns.h b/security/apparmor/include/policy_ns.h
index 33d665516fc1..dabb69bc87e0 100644
--- a/security/apparmor/include/policy_ns.h
+++ b/security/apparmor/include/policy_ns.h
@@ -18,6 +18,8 @@
#include "label.h"
#include "policy.h"
+/* Match max depth of user namespaces */
+#define MAX_NS_DEPTH 32
/* struct aa_ns_acct - accounting of profiles in namespace
* @max_size: maximum space allowed for all profiles in namespace
diff --git a/security/apparmor/policy_ns.c b/security/apparmor/policy_ns.c
index fd5b7afbcb48..c56ef36baef4 100644
--- a/security/apparmor/policy_ns.c
+++ b/security/apparmor/policy_ns.c
@@ -260,6 +260,8 @@ static struct aa_ns *__aa_create_ns(struct aa_ns *parent, const char *name,
AA_BUG(!name);
AA_BUG(!mutex_is_locked(&parent->lock));
+ if (parent->level > MAX_NS_DEPTH)
+ return ERR_PTR(-ENOSPC);
ns = alloc_ns(parent->base.hname, name);
if (!ns)
return ERR_PTR(-ENOMEM);
--
2.34.1
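The depth cap amounts to a single check at namespace creation, sketched here standalone (an editor's illustration; the error constant stands in for -ENOSPC and the struct is reduced to the one field the check needs):

```c
#define MAX_NS_DEPTH 32   /* matches the user-namespace depth cap */
#define ENOSPC_ERR (-28)  /* stand-in for -ENOSPC */

struct ns { int level; };

/* Simplified from __aa_create_ns(): refuse a child once the parent's
 * level exceeds the cap, bounding the total nesting depth. */
int ns_create_child(const struct ns *parent, struct ns *child)
{
    if (parent->level > MAX_NS_DEPTH)
        return ENOSPC_ERR;
    child->level = parent->level + 1;
    return 0;
}
```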
[PATCH OLK-5.10] apparmor: fix: limit the number of levels of policy namespaces
by Cui GaoSheng 07 Apr '26
From: John Johansen <john.johansen(a)canonical.com>
stable inclusion
from stable-v6.6.130
commit 3f8699b3ee0c04b4b9bc27b82cd89a40e81e1d2e
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14047
CVE: CVE-2026-23405
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 306039414932c80f8420695a24d4fe10c84ccfb2 upstream.
Currently the number of policy namespaces is not bounded, relying on
the user namespace limit. However, policy namespaces aren't strictly
tied to user namespaces, and it is possible to create and nest them
arbitrarily deep, which can be used to exhaust system resources.
Hard-cap policy namespaces to the same depth as user namespaces.
Fixes: c88d4c7b049e8 ("AppArmor: core policy routines")
Reported-by: Qualys Security Advisory <qsa(a)qualys.com>
Reviewed-by: Ryan Lee <ryan.lee(a)canonical.com>
Reviewed-by: Cengiz Can <cengiz.can(a)canonical.com>
Signed-off-by: John Johansen <john.johansen(a)canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1(a)huawei.com>
---
security/apparmor/include/policy_ns.h | 2 ++
security/apparmor/policy_ns.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/security/apparmor/include/policy_ns.h b/security/apparmor/include/policy_ns.h
index 3df6f804922d..e5704947e86e 100644
--- a/security/apparmor/include/policy_ns.h
+++ b/security/apparmor/include/policy_ns.h
@@ -18,6 +18,8 @@
#include "label.h"
#include "policy.h"
+/* Match max depth of user namespaces */
+#define MAX_NS_DEPTH 32
/* struct aa_ns_acct - accounting of profiles in namespace
* @max_size: maximum space allowed for all profiles in namespace
diff --git a/security/apparmor/policy_ns.c b/security/apparmor/policy_ns.c
index 53d24cf63893..5d342ef078e9 100644
--- a/security/apparmor/policy_ns.c
+++ b/security/apparmor/policy_ns.c
@@ -249,6 +249,8 @@ static struct aa_ns *__aa_create_ns(struct aa_ns *parent, const char *name,
AA_BUG(!name);
AA_BUG(!mutex_is_locked(&parent->lock));
+ if (parent->level > MAX_NS_DEPTH)
+ return ERR_PTR(-ENOSPC);
ns = alloc_ns(parent->base.hname, name);
if (!ns)
return ERR_PTR(-ENOMEM);
--
2.34.1
[PATCH OLK-5.10] apparmor: replace recursive profile removal with iterative approach
by Cui GaoSheng 07 Apr '26
From: Massimiliano Pellizzer <massimiliano.pellizzer(a)canonical.com>
stable inclusion
from stable-v6.6.130
commit 33959a491e9fd557abfa5fce5ae4637d400915d3
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14038
CVE: CVE-2026-23404
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit ab09264660f9de5d05d1ef4e225aa447c63a8747 upstream.
The profile removal code uses recursion when removing nested profiles,
which can lead to kernel stack exhaustion and system crashes.
Reproducer:
$ pf='a'; for ((i=0; i<1024; i++)); do
echo -e "profile $pf { \n }" | apparmor_parser -K -a;
pf="$pf//x";
done
$ echo -n a > /sys/kernel/security/apparmor/.remove
Replace the recursive __aa_profile_list_release() approach with an
iterative approach in __remove_profile(). The function repeatedly
finds and removes leaf profiles until the entire subtree is removed,
maintaining the same removal semantic without recursion.
Fixes: c88d4c7b049e ("AppArmor: core policy routines")
Reported-by: Qualys Security Advisory <qsa(a)qualys.com>
Tested-by: Salvatore Bonaccorso <carnil(a)debian.org>
Reviewed-by: Georgia Garcia <georgia.garcia(a)canonical.com>
Reviewed-by: Cengiz Can <cengiz.can(a)canonical.com>
Signed-off-by: Massimiliano Pellizzer <massimiliano.pellizzer(a)canonical.com>
Signed-off-by: John Johansen <john.johansen(a)canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1(a)huawei.com>
---
security/apparmor/policy.c | 30 +++++++++++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)
diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
index e5f501f89803..57be581980b3 100644
--- a/security/apparmor/policy.c
+++ b/security/apparmor/policy.c
@@ -146,19 +146,43 @@ static void __list_remove_profile(struct aa_profile *profile)
}
/**
- * __remove_profile - remove old profile, and children
- * @profile: profile to be replaced (NOT NULL)
+ * __remove_profile - remove profile, and children
+ * @profile: profile to be removed (NOT NULL)
*
* Requires: namespace list lock be held, or list not be shared
*/
static void __remove_profile(struct aa_profile *profile)
{
+ struct aa_profile *curr, *to_remove;
+
AA_BUG(!profile);
AA_BUG(!profile->ns);
AA_BUG(!mutex_is_locked(&profile->ns->lock));
/* release any children lists first */
- __aa_profile_list_release(&profile->base.profiles);
+ if (!list_empty(&profile->base.profiles)) {
+ curr = list_first_entry(&profile->base.profiles, struct aa_profile, base.list);
+
+ while (curr != profile) {
+
+ while (!list_empty(&curr->base.profiles))
+ curr = list_first_entry(&curr->base.profiles,
+ struct aa_profile, base.list);
+
+ to_remove = curr;
+ if (!list_is_last(&to_remove->base.list,
+ &aa_deref_parent(curr)->base.profiles))
+ curr = list_next_entry(to_remove, base.list);
+ else
+ curr = aa_deref_parent(curr);
+
+ /* released by free_profile */
+ aa_label_remove(&to_remove->label);
+ __aafs_profile_rmdir(to_remove);
+ __list_remove_profile(to_remove);
+ }
+ }
+
/* released by free_profile */
aa_label_remove(&profile->label);
__aafs_profile_rmdir(profile);
--
2.34.1
[PATCH OLK-6.6 v6 0/2] kvm: arm64: Transition from CPU Type to MIDR Register for Virtualization Feature Detection
by liqiqi 07 Apr '26
Currently, there are two methods for determining whether a chip supports
specific virtualization features:
1. Reading the chip's CPU type from BIOS
2. Reading the value of the MIDR register
The issue with the first method is that each time a new chip is introduced,
the new CPU type must be defined, which leads to poor code portability and
maintainability.
Therefore, the second method has been adopted to replace the first. This
approach eliminates the dependency on CPU type by using the MIDR register.
liqiqi (2):
kvm: arm64: Add MIDR definitions and use MIDR to determine whether
features are supported
kvm: arm64: Remove cpu_type definition and its related interfaces
arch/arm64/include/asm/cache.h | 2 +-
arch/arm64/include/asm/cputype.h | 8 +-
arch/arm64/kernel/cpu_errata.c | 4 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/proton-pack.c | 4 +-
arch/arm64/kvm/arm.c | 1 -
arch/arm64/kvm/hisilicon/hisi_virt.c | 110 +++--------------------
arch/arm64/kvm/hisilicon/hisi_virt.h | 12 ---
drivers/perf/hisilicon/hisi_uncore_pmu.c | 2 +-
tools/arch/arm64/include/asm/cputype.h | 4 +-
10 files changed, 25 insertions(+), 124 deletions(-)
--
2.43.0
[PATCH OLK-6.6] nf_tables: nft_dynset: fix possible stateful expression memleak in error path
by Dong Chenchen 07 Apr '26
From: Pablo Neira Ayuso <pablo(a)netfilter.org>
mainline inclusion
from mainline-v7.0-rc3
commit 0548a13b5a145b16e4da0628b5936baf35f51b43
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14034
CVE: CVE-2026-23399
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
If cloning the second stateful expression in the element via GFP_ATOMIC
fails, then the first stateful expression remains in place without being
released.
unreferenced object (percpu) 0x607b97e9cab8 (size 16):
comm "softirq", pid 0, jiffies 4294931867
hex dump (first 16 bytes on cpu 3):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
backtrace (crc 0):
pcpu_alloc_noprof+0x453/0xd80
nft_counter_clone+0x9c/0x190 [nf_tables]
nft_expr_clone+0x8f/0x1b0 [nf_tables]
nft_dynset_new+0x2cb/0x5f0 [nf_tables]
nft_rhash_update+0x236/0x11c0 [nf_tables]
nft_dynset_eval+0x11f/0x670 [nf_tables]
nft_do_chain+0x253/0x1700 [nf_tables]
nft_do_chain_ipv4+0x18d/0x270 [nf_tables]
nf_hook_slow+0xaa/0x1e0
ip_local_deliver+0x209/0x330
Fixes: 563125a73ac3 ("netfilter: nftables: generalize set extension to support for several expressions")
Reported-by: Gurpreet Shergill <giki.shergill(a)proton.me>
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Signed-off-by: Florian Westphal <fw(a)strlen.de>
Conflicts:
include/net/netfilter/nf_tables.h
net/netfilter/nft_dynset.c
[commit 9dad402b89e8 is not backported]
Signed-off-by: Dong Chenchen <dongchenchen2(a)huawei.com>
---
include/net/netfilter/nf_tables.h | 2 ++
net/netfilter/nf_tables_api.c | 4 ++--
net/netfilter/nft_dynset.c | 10 +++++++++-
3 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
index e1c4d903f39b..0f890ff1769b 100644
--- a/include/net/netfilter/nf_tables.h
+++ b/include/net/netfilter/nf_tables.h
@@ -859,6 +859,8 @@ void *nft_set_elem_init(const struct nft_set *set,
u64 timeout, u64 expiration, gfp_t gfp);
int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_expr *expr_array[]);
+void nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
+ struct nft_set_elem_expr *elem_expr);
void nft_set_elem_destroy(const struct nft_set *set, void *elem,
bool destroy_expr);
void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index 98d3156179ee..9d5f7bafa4d6 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -6471,8 +6471,8 @@ static void __nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
}
}
-static void nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
- struct nft_set_elem_expr *elem_expr)
+void nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
+ struct nft_set_elem_expr *elem_expr)
{
struct nft_expr *expr;
u32 size;
diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
index a81bd69b059b..8ea45d61ed88 100644
--- a/net/netfilter/nft_dynset.c
+++ b/net/netfilter/nft_dynset.c
@@ -30,18 +30,26 @@ static int nft_dynset_expr_setup(const struct nft_dynset *priv,
const struct nft_set_ext *ext)
{
struct nft_set_elem_expr *elem_expr = nft_set_ext_expr(ext);
+ struct nft_ctx ctx = {
+ .net = read_pnet(&priv->set->net),
+ .family = priv->set->table->family,
+ };
struct nft_expr *expr;
int i;
for (i = 0; i < priv->num_exprs; i++) {
expr = nft_setelem_expr_at(elem_expr, elem_expr->size);
if (nft_expr_clone(expr, priv->expr_array[i], GFP_ATOMIC) < 0)
- return -1;
+ goto err_out;
elem_expr->size += priv->expr_array[i]->ops->size;
}
return 0;
+err_out:
+ nft_set_elem_expr_destroy(&ctx, elem_expr);
+
+ return -1;
}
static void *nft_dynset_new(struct nft_set *set, const struct nft_expr *expr,
--
2.25.1
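The error-path pattern the fix follows (tear down every clone made before the failing one) can be sketched generically. This is an editor's illustration with invented names: malloc()/free() stand in for nft_expr_clone() and the destroy path, and a failure-injection parameter makes the error path testable without exhausting memory:

```c
#include <stdlib.h>

/* Clone n expressions into dst. If cloning entry i fails, every clone
 * made for indices < i is destroyed before returning, so nothing
 * leaks. fail_at injects a failure at that index (-1 = never fail). */
int clone_all(void **dst, int n, int fail_at)
{
    int i;

    for (i = 0; i < n; i++) {
        dst[i] = (i == fail_at) ? NULL : malloc(16);
        if (!dst[i])
            goto err_out;
    }
    return 0;

err_out:
    while (--i >= 0) {          /* tear down the partial set of clones */
        free(dst[i]);
        dst[i] = NULL;
    }
    return -1;
}
```

The original code returned -1 from inside the loop, leaving the earlier clones allocated; the leak in the backtrace above is exactly that first counter clone.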
Fix CVE-2026-23343
Larysa Zaremba (2):
xdp: use modulo operation to calculate XDP frag tailroom
xdp: produce a warning when calculated tailroom is negative
net/core/filter.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--
2.25.1
07 Apr '26
From: Juergen Gross <jgross(a)suse.com>
mainline inclusion
from mainline-v7.0-rc6
commit 453b8fb68f3641fea970db88b7d9a153ed2a37e8
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14027
CVE: CVE-2026-31788
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The Xen privcmd driver allows user space processes to issue arbitrary
hypercalls. This is normally no problem, as access is usually limited
to root and the hypervisor will deny any hypercalls affecting other
domains.
In case the guest is booted using secure boot, however, the privcmd
driver would enable a root user process to modify e.g. kernel memory
contents, thus breaking the secure boot feature.
The only known case where an unprivileged domU really needs to use the
privcmd driver is when it is acting as the device model for another
guest. In this case all hypercalls issued via the privcmd driver will
target that other guest.
Fortunately the privcmd driver can already be locked down to allow
only hypercalls targeting a specific domain, but today this mode can
be activated only from user land.
The target domain can be obtained from Xenstore, so when not running
in dom0, restrict the privcmd driver to that target domain from the
beginning, resolving the potential problem of breaking secure boot.
This is XSA-482
Reported-by: Teddy Astie <teddy.astie(a)vates.tech>
Fixes: 1c5de1939c20 ("xen: add privcmd driver")
Signed-off-by: Juergen Gross <jgross(a)suse.com>
Conflicts:
drivers/xen/privcmd.c
[commit bf4afc53b77ae not merged]
Signed-off-by: Zhang Yuwei <zhangyuwei20(a)huawei.com>
---
drivers/xen/privcmd.c | 62 +++++++++++++++++++++++++++++++++++++++----
1 file changed, 57 insertions(+), 5 deletions(-)
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 28537a1a0e0b..cb50038c0c9c 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -10,6 +10,7 @@
#define pr_fmt(fmt) "xen:" KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
+#include <linux/kstrtox.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/slab.h>
@@ -24,7 +25,8 @@
#include <linux/seq_file.h>
#include <linux/miscdevice.h>
#include <linux/moduleparam.h>
-
+#include <linux/notifier.h>
+#include <linux/wait.h>
#include <asm/xen/hypervisor.h>
#include <asm/xen/hypercall.h>
@@ -37,7 +39,7 @@
#include <xen/page.h>
#include <xen/xen-ops.h>
#include <xen/balloon.h>
-
+#include <xen/xenbus.h>
#include "privcmd.h"
MODULE_LICENSE("GPL");
@@ -59,6 +61,11 @@ struct privcmd_data {
domid_t domid;
};
+/* DOMID_INVALID implies no restriction */
+static domid_t target_domain = DOMID_INVALID;
+static bool restrict_wait;
+static DECLARE_WAIT_QUEUE_HEAD(restrict_wait_wq);
+
static int privcmd_vma_range_is_mapped(
struct vm_area_struct *vma,
unsigned long addr,
@@ -878,13 +885,16 @@ static long privcmd_ioctl(struct file *file,
static int privcmd_open(struct inode *ino, struct file *file)
{
- struct privcmd_data *data = kzalloc(sizeof(*data), GFP_KERNEL);
+ struct privcmd_data *data;
+ if (wait_event_interruptible(restrict_wait_wq, !restrict_wait) < 0)
+ return -EINTR;
+
+ data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
- /* DOMID_INVALID implies no restriction */
- data->domid = DOMID_INVALID;
+ data->domid = target_domain;
file->private_data = data;
return 0;
@@ -977,6 +987,45 @@ static struct miscdevice privcmd_dev = {
.fops = &xen_privcmd_fops,
};
+static int init_restrict(struct notifier_block *notifier,
+ unsigned long event,
+ void *data)
+{
+ char *target;
+ unsigned int domid;
+
+ /* Default to a guaranteed unused domain-id. */
+ target_domain = DOMID_IDLE;
+
+ target = xenbus_read(XBT_NIL, "target", "", NULL);
+ if (IS_ERR(target) || kstrtouint(target, 10, &domid)) {
+ pr_err("No target domain found, blocking all hypercalls\n");
+ goto out;
+ }
+
+ target_domain = domid;
+
+ out:
+ if (!IS_ERR(target))
+ kfree(target);
+
+ restrict_wait = false;
+ wake_up_all(&restrict_wait_wq);
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block xenstore_notifier = {
+ .notifier_call = init_restrict,
+};
+
+static void __init restrict_driver(void)
+{
+ restrict_wait = true;
+
+ register_xenstore_notifier(&xenstore_notifier);
+}
+
static int __init privcmd_init(void)
{
int err;
@@ -984,6 +1033,9 @@ static int __init privcmd_init(void)
if (!xen_domain())
return -ENODEV;
+ if (!xen_initial_domain())
+ restrict_driver();
+
err = misc_register(&privcmd_dev);
if (err != 0) {
pr_err("Could not register Xen privcmd device\n");
--
2.22.0
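The boot-time target lookup added above can be modeled in userspace (an editor's sketch: the DOMID_IDLE value and the 0x7FF0 reserved-id boundary are assumptions, and strtoul stands in for the kernel's kstrtouint):

```c
#include <errno.h>
#include <stdlib.h>

/* Assumed fallback: a domain id guaranteed to match no real domain,
 * so every hypercall is refused when no valid target is found. */
#define DOMID_IDLE 0x7FFF

/* Parse the Xenstore "target" value into a domain id. Any parse
 * failure, empty value, or out-of-range id falls back to DOMID_IDLE,
 * mirroring the "blocking all hypercalls" behaviour of the patch. */
unsigned int parse_target_domain(const char *target)
{
    char *end;
    unsigned long val;

    if (!target || !*target)
        return DOMID_IDLE;

    errno = 0;
    val = strtoul(target, &end, 10);
    if (errno || *end != '\0' || val >= 0x7FF0)
        return DOMID_IDLE;

    return (unsigned int)val;
}
```

The fail-closed default is the important property: an unreadable or malformed Xenstore node must restrict the driver rather than leave it unrestricted.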
07 Apr '26
From: Juergen Gross <jgross(a)suse.com>
mainline inclusion
from mainline-v7.0-rc6
commit 453b8fb68f3641fea970db88b7d9a153ed2a37e8
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14027
CVE: CVE-2026-31788
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The Xen privcmd driver allows user space processes to issue arbitrary
hypercalls. This is normally no problem, as access is usually limited
to root and the hypervisor will deny any hypercalls affecting other
domains.
In case the guest is booted using secure boot, however, the privcmd
driver would enable a root user process to modify e.g. kernel memory
contents, thus breaking the secure boot feature.
The only known case where an unprivileged domU really needs to use the
privcmd driver is when it is acting as the device model for another
guest. In this case all hypercalls issued via the privcmd driver will
target that other guest.
Fortunately the privcmd driver can already be locked down to allow
only hypercalls targeting a specific domain, but today this mode can
be activated only from user land.
The target domain can be obtained from Xenstore, so when not running
in dom0, restrict the privcmd driver to that target domain from the
beginning, resolving the potential problem of breaking secure boot.
This is XSA-482
Reported-by: Teddy Astie <teddy.astie(a)vates.tech>
Fixes: 1c5de1939c20 ("xen: add privcmd driver")
Signed-off-by: Juergen Gross <jgross(a)suse.com>
Conflicts:
drivers/xen/privcmd.c
[commit bf4afc53b77ae not merged]
Signed-off-by: Zhang Yuwei <zhangyuwei20(a)huawei.com>
---
drivers/xen/privcmd.c | 62 +++++++++++++++++++++++++++++++++++++++----
1 file changed, 57 insertions(+), 5 deletions(-)
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 61aaded483e1..3c97710fc094 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -12,6 +12,7 @@
#include <linux/eventfd.h>
#include <linux/file.h>
#include <linux/kernel.h>
+#include <linux/kstrtox.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/poll.h>
@@ -30,7 +31,8 @@
#include <linux/seq_file.h>
#include <linux/miscdevice.h>
#include <linux/moduleparam.h>
-
+#include <linux/notifier.h>
+#include <linux/wait.h>
#include <asm/xen/hypervisor.h>
#include <asm/xen/hypercall.h>
@@ -43,7 +45,7 @@
#include <xen/page.h>
#include <xen/xen-ops.h>
#include <xen/balloon.h>
-
+#include <xen/xenbus.h>
#include "privcmd.h"
MODULE_LICENSE("GPL");
@@ -65,6 +67,11 @@ struct privcmd_data {
domid_t domid;
};
+/* DOMID_INVALID implies no restriction */
+static domid_t target_domain = DOMID_INVALID;
+static bool restrict_wait;
+static DECLARE_WAIT_QUEUE_HEAD(restrict_wait_wq);
+
static int privcmd_vma_range_is_mapped(
struct vm_area_struct *vma,
unsigned long addr,
@@ -1156,13 +1163,16 @@ static long privcmd_ioctl(struct file *file,
static int privcmd_open(struct inode *ino, struct file *file)
{
- struct privcmd_data *data = kzalloc(sizeof(*data), GFP_KERNEL);
+ struct privcmd_data *data;
+ if (wait_event_interruptible(restrict_wait_wq, !restrict_wait) < 0)
+ return -EINTR;
+
+ data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
- /* DOMID_INVALID implies no restriction */
- data->domid = DOMID_INVALID;
+ data->domid = target_domain;
file->private_data = data;
return 0;
@@ -1255,6 +1265,45 @@ static struct miscdevice privcmd_dev = {
.fops = &xen_privcmd_fops,
};
+static int init_restrict(struct notifier_block *notifier,
+ unsigned long event,
+ void *data)
+{
+ char *target;
+ unsigned int domid;
+
+ /* Default to a guaranteed unused domain-id. */
+ target_domain = DOMID_IDLE;
+
+ target = xenbus_read(XBT_NIL, "target", "", NULL);
+ if (IS_ERR(target) || kstrtouint(target, 10, &domid)) {
+ pr_err("No target domain found, blocking all hypercalls\n");
+ goto out;
+ }
+
+ target_domain = domid;
+
+ out:
+ if (!IS_ERR(target))
+ kfree(target);
+
+ restrict_wait = false;
+ wake_up_all(&restrict_wait_wq);
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block xenstore_notifier = {
+ .notifier_call = init_restrict,
+};
+
+static void __init restrict_driver(void)
+{
+ restrict_wait = true;
+
+ register_xenstore_notifier(&xenstore_notifier);
+}
+
static int __init privcmd_init(void)
{
int err;
@@ -1262,6 +1311,9 @@ static int __init privcmd_init(void)
if (!xen_domain())
return -ENODEV;
+ if (!xen_initial_domain())
+ restrict_driver();
+
err = misc_register(&privcmd_dev);
if (err != 0) {
pr_err("Could not register Xen privcmd device\n");
--
2.22.0
07 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8882
--------------------------------
wake_wide() uses sd_llc_size as the spreading threshold to detect wide
waker/wakee relationships and to disable wake_affine() for those cases.
On SMT systems, sd_llc_size counts logical CPUs rather than physical
cores. This inflates the wake_wide() threshold, allowing wake_affine()
to pack more tasks into one LLC domain than the actual compute capacity
of its physical cores can sustain. The resulting SMT interference may
cost more than the cache-locality benefit wake_affine() intends to gain.
Scale the factor by the SMT width of the current CPU so that it
approximates the number of independent physical cores in the LLC domain,
making wake_wide() more likely to kick in before SMT interference
becomes significant. On non-SMT systems the SMT width is 1 and behaviour
is unchanged.
Signed-off-by: Zhang Qiao <zhangqiao22(a)huawei.com>
---
kernel/sched/fair.c | 5 +++++
kernel/sched/features.h | 2 ++
2 files changed, 7 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ad30bb800961..4100998e18cd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7850,6 +7850,11 @@ static int wake_wide(struct task_struct *p)
unsigned int slave = p->wakee_flips;
int factor = __this_cpu_read(sd_llc_size);
+ /* Scale factor to physical-core count to account for SMT interference. */
+ if (sched_feat(WA_SMT))
+ factor = DIV_ROUND_UP(factor,
+ cpumask_weight(cpu_smt_mask(smp_processor_id())));
+
if (master < slave)
swap(master, slave);
if (slave < factor || master < slave * factor)
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 24a0c853a8a0..c9ad8e72ecd0 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -128,3 +128,5 @@ SCHED_FEAT(SOFT_DOMAIN, false)
#ifdef CONFIG_SCHED_SOFT_QUOTA
SCHED_FEAT(SOFT_QUOTA, false)
#endif
+
+SCHED_FEAT(WA_SMT, false)
--
2.18.0
07 Apr '26
hulk inclusion
category: performance
bugzilla: https://atomgit.com/openeuler/kernel/issues/8882
--------------------------------
wake_wide() uses sd_llc_size as the spreading threshold to detect wide
waker/wakee relationships and to disable wake_affine() for those cases.
On SMT systems, sd_llc_size counts logical CPUs rather than physical
cores. This inflates the wake_wide() threshold, allowing wake_affine()
to pack more tasks into one LLC domain than the actual compute capacity
of its physical cores can sustain. The resulting SMT interference may
cost more than the cache-locality benefit wake_affine() intends to gain.
Scale the factor by the SMT width of the current CPU so that it
approximates the number of independent physical cores in the LLC domain,
making wake_wide() more likely to kick in before SMT interference
becomes significant. On non-SMT systems the SMT width is 1 and behaviour
is unchanged.
Signed-off-by: Zhang Qiao <zhangqiao22(a)huawei.com>
---
kernel/sched/fair.c | 5 +++++
kernel/sched/features.h | 2 ++
2 files changed, 7 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ad30bb800961..4100998e18cd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7850,6 +7850,11 @@ static int wake_wide(struct task_struct *p)
unsigned int slave = p->wakee_flips;
int factor = __this_cpu_read(sd_llc_size);
+ /* Scale factor to physical-core count to account for SMT interference. */
+ if (sched_feat(WA_SMT))
+ factor = DIV_ROUND_UP(factor,
+ cpumask_weight(cpu_smt_mask(smp_processor_id())));
+
if (master < slave)
swap(master, slave);
if (slave < factor || master < slave * factor)
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 24a0c853a8a0..c9ad8e72ecd0 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -128,3 +128,5 @@ SCHED_FEAT(SOFT_DOMAIN, false)
#ifdef CONFIG_SCHED_SOFT_QUOTA
SCHED_FEAT(SOFT_QUOTA, false)
#endif
+
+SCHED_FEAT(WA_SMT, false)
--
2.18.0
Fix riscv check-build warning.
Björn Töpel (1):
riscv: Replace function-like macro by static inline function
Tengda Wu (1):
Revert "riscv: stacktrace: Disable KASAN checks for non-current tasks"
arch/riscv/include/asm/cacheflush.h | 15 ++++++++++-----
arch/riscv/kernel/stacktrace.c | 16 ----------------
2 files changed, 10 insertions(+), 21 deletions(-)
--
2.34.1
From: Zilin Guan <zilin(a)seu.edu.cn>
mainline inclusion
from mainline-v7.0-rc3
commit fe868b499d16f55bbeea89992edb98043c9de416
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14020/
CVE: CVE-2026-23389
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
In ice_set_ringparam, tx_rings and xdp_rings are allocated before
rx_rings. If the allocation of rx_rings fails, the code jumps to
the done label leaking both tx_rings and xdp_rings. Furthermore, if
the setup of an individual Rx ring fails during the loop, the code jumps
to the free_tx label which releases tx_rings but leaks xdp_rings.
Fix this by introducing a free_xdp label and updating the error paths to
ensure both xdp_rings and tx_rings are properly freed if rx_rings
allocation or setup fails.
Compile tested only. Issue found using a prototype static analysis tool
and code review.
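The fix follows the usual staged-goto cleanup idiom: each allocation
gets its own unwind label, and a failure jumps to the label that frees
everything allocated so far. A self-contained sketch of the pattern
(names and the frees counter are hypothetical, for illustration only):

```c
#include <assert.h>
#include <stdlib.h>
#include <stdbool.h>

/* Staged cleanup: later error paths must unwind every earlier
 * allocation. frees counts how many buffers were released. */
static int setup_rings(bool fail_rx, int *frees)
{
	void *tx, *xdp, *rx;
	int err = 0;

	tx = malloc(16);
	if (!tx) { err = -1; goto done; }
	xdp = malloc(16);
	if (!xdp) { err = -1; goto free_tx; }
	rx = fail_rx ? NULL : malloc(16);
	if (!rx) { err = -1; goto free_xdp; }

	/* Success: caller would swap rings in; here just release all. */
	free(rx); free(xdp); free(tx);
	*frees = 3;
	return 0;

free_xdp:
	free(xdp); (*frees)++;
free_tx:
	free(tx); (*frees)++;
done:
	return err;
}
```

The bug fixed here was exactly a missing label of this kind: the Rx
failure path jumped past the xdp unwind step, leaking xdp_rings.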
Fixes: fcea6f3da546 ("ice: Add stats and ethtool support")
Fixes: efc2214b6047 ("ice: Add support for XDP")
Signed-off-by: Zilin Guan <zilin(a)seu.edu.cn>
Reviewed-by: Paul Menzel <pmenzel(a)molgen.mpg.de>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov(a)intel.com>
Tested-by: Rinitha S <sx.rinitha(a)intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen(a)intel.com>
Conflicts:
drivers/net/ethernet/intel/ice/ice_ethtool.c
[Context conflicts due to different memory allocation function used]
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
drivers/net/ethernet/intel/ice/ice_ethtool.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index dd58b2372dc0..bab857a6dc18 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -2839,7 +2839,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
rx_rings = kcalloc(vsi->num_rxq, sizeof(*rx_rings), GFP_KERNEL);
if (!rx_rings) {
err = -ENOMEM;
- goto done;
+ goto free_xdp;
}
ice_for_each_rxq(vsi, i) {
@@ -2869,7 +2869,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
}
kfree(rx_rings);
err = -ENOMEM;
- goto free_tx;
+ goto free_xdp;
}
}
@@ -2920,6 +2920,13 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
}
goto done;
+free_xdp:
+ if (xdp_rings) {
+ ice_for_each_xdp_txq(vsi, i)
+ ice_free_tx_ring(&xdp_rings[i]);
+ kfree(xdp_rings);
+ }
+
free_tx:
/* error cleanup if the Rx allocations failed after getting Tx */
if (tx_rings) {
--
2.34.1
From: Zilin Guan <zilin(a)seu.edu.cn>
mainline inclusion
from mainline-v7.0-rc3
commit fe868b499d16f55bbeea89992edb98043c9de416
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14020/
CVE: CVE-2026-23389
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
In ice_set_ringparam, tx_rings and xdp_rings are allocated before
rx_rings. If the allocation of rx_rings fails, the code jumps to
the done label leaking both tx_rings and xdp_rings. Furthermore, if
the setup of an individual Rx ring fails during the loop, the code jumps
to the free_tx label which releases tx_rings but leaks xdp_rings.
Fix this by introducing a free_xdp label and updating the error paths to
ensure both xdp_rings and tx_rings are properly freed if rx_rings
allocation or setup fails.
Compile tested only. Issue found using a prototype static analysis tool
and code review.
Fixes: fcea6f3da546 ("ice: Add stats and ethtool support")
Fixes: efc2214b6047 ("ice: Add support for XDP")
Signed-off-by: Zilin Guan <zilin(a)seu.edu.cn>
Reviewed-by: Paul Menzel <pmenzel(a)molgen.mpg.de>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov(a)intel.com>
Tested-by: Rinitha S <sx.rinitha(a)intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen(a)intel.com>
Conflicts:
drivers/net/ethernet/intel/ice/ice_ethtool.c
[ Context conflicts due to different memory allocation function used.
Also, ice_for_each_xdp_txq is introduced in unmerged commit
2faf63b650bb ("ice: make use of ice_for_each_* macros"), use normal
for loops here instead. ]
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
drivers/net/ethernet/intel/ice/ice_ethtool.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 9659668279dc..585ae5e92558 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -2865,7 +2865,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
rx_rings = kcalloc(vsi->num_rxq, sizeof(*rx_rings), GFP_KERNEL);
if (!rx_rings) {
err = -ENOMEM;
- goto done;
+ goto free_xdp;
}
ice_for_each_rxq(vsi, i) {
@@ -2894,7 +2894,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
}
kfree(rx_rings);
err = -ENOMEM;
- goto free_tx;
+ goto free_xdp;
}
}
@@ -2945,6 +2945,13 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
}
goto done;
+free_xdp:
+ if (xdp_rings) {
+ for (i = 0; i < vsi->num_xdp_txq; i++)
+ ice_free_tx_ring(&xdp_rings[i]);
+ kfree(xdp_rings);
+ }
+
free_tx:
/* error cleanup if the Rx allocations failed after getting Tx */
if (tx_rings) {
--
2.34.1
[PATCH OLK-6.6] pstore/ram: fix buffer overflow in persistent_ram_save_old()
by Pan Taixi 07 Apr '26
From: Sai Ritvik Tanksalkar <stanksal(a)purdue.edu>
stable inclusion
from stable-v6.6.128
commit cff0ef043e16feb5a02307c8f9d0117a96c5587c
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8792
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 5669645c052f235726a85f443769b6fc02f66762 ]
persistent_ram_save_old() can be called multiple times for the same
persistent_ram_zone (e.g., via ramoops_pstore_read -> ramoops_get_next_prz
for PSTORE_TYPE_DMESG records).
Currently, the function only allocates prz->old_log when it is NULL,
but it unconditionally updates prz->old_log_size to the current buffer
size and then performs memcpy_fromio() using this new size. If the
buffer size has grown since the first allocation (which can happen
across different kernel boot cycles), this leads to:
1. A heap buffer overflow (OOB write) in the memcpy_fromio() calls
2. A subsequent OOB read when ramoops_pstore_read() accesses the buffer
using the incorrect (larger) old_log_size
The KASAN splat would look similar to:
BUG: KASAN: slab-out-of-bounds in ramoops_pstore_read+0x...
Read of size N at addr ... by task ...
The conditions are likely extremely hard to hit:
0. Crash with a ramoops write of less-than-record-max-size bytes.
1. Reboot: ramoops registers, pstore_get_records(0) reads old crash,
allocates old_log with size X
2. Crash handler registered, timer started (if pstore_update_ms >= 0)
3. Oops happens (non-fatal, system continues)
4. pstore_dump() writes oops via ramoops_pstore_write() size Y (>X)
5. pstore_new_entry = 1, pstore_timer_kick() called
6. System continues running (not a panic oops)
7. Timer fires after pstore_update_ms milliseconds
8. pstore_timefunc() → schedule_work() → pstore_dowork() → pstore_get_records(1)
9. ramoops_get_next_prz() → persistent_ram_save_old()
10. buffer_size() returns Y, but old_log is X bytes
11. Y > X: memcpy_fromio() overflows heap
Requirements:
- a prior crash record exists that did not fill the record size
(almost impossible since the crash handler writes as much as it
can possibly fit into the record, capped by max record size and
the kmsg buffer almost always exceeds the max record size)
- pstore_update_ms >= 0 (disabled by default)
- Non-fatal oops (system survives)
Free and reallocate the buffer when the new size differs from the
previously allocated size. This ensures old_log always has sufficient
space for the data being copied.
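The free-and-reallocate logic can be reduced to a small stand-alone
model: cache a buffer with its size, and whenever the incoming size
differs, release the old buffer before allocating a fresh one, so a
later, larger copy can never overflow a buffer sized for an earlier,
smaller one. All names here are illustrative, not the pstore code:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static char *old_log;
static size_t old_log_size;

/* Mirror of the fixed pattern: size mismatch -> free and reallocate. */
static int save_old(const char *src, size_t size)
{
	if (old_log && old_log_size != size) {
		free(old_log);
		old_log = NULL;
	}
	if (!old_log) {
		old_log = calloc(1, size);
		if (!old_log)
			return -1;
		old_log_size = size;
	}
	memcpy(old_log, src, size); /* always fits: buffer matches size */
	return 0;
}
```

Without the mismatch check, the second call with a larger size would
memcpy past the end of the first, smaller allocation, which is the OOB
write the patch closes.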
Fixes: 201e4aca5aa1 ("pstore/ram: Should update old dmesg buffer before reading")
Signed-off-by: Sai Ritvik Tanksalkar <stanksal(a)purdue.edu>
Link: https://patch.msgid.link/20260201132240.2948732-1-stanksal@purdue.edu
Signed-off-by: Kees Cook <kees(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yuan can <yuancan(a)huawei.com>
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
fs/pstore/ram_core.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
index f1848cdd6d34..c9eaacdec37e 100644
--- a/fs/pstore/ram_core.c
+++ b/fs/pstore/ram_core.c
@@ -298,6 +298,17 @@ void persistent_ram_save_old(struct persistent_ram_zone *prz)
if (!size)
return;
+ /*
+ * If the existing buffer is differently sized, free it so a new
+ * one is allocated. This can happen when persistent_ram_save_old()
+ * is called early in boot and later for a timer-triggered
+ * survivable crash when the crash dumps don't match in size
+ * (which would be extremely unlikely given kmsg buffers usually
+ * exceed prz buffer sizes).
+ */
+ if (prz->old_log && prz->old_log_size != size)
+ persistent_ram_free_old(prz);
+
if (!prz->old_log) {
persistent_ram_ecc_old(prz);
prz->old_log = kvzalloc(size, GFP_KERNEL);
--
2.34.1
[PATCH OLK-5.10] pstore/ram: fix buffer overflow in persistent_ram_save_old()
by Pan Taixi 07 Apr '26
From: Sai Ritvik Tanksalkar <stanksal(a)purdue.edu>
stable inclusion
from stable-v5.10.252
commit 58bda5a1d1ee98254383ef34f76b2c35140513ea
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14048/
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 5669645c052f235726a85f443769b6fc02f66762 ]
persistent_ram_save_old() can be called multiple times for the same
persistent_ram_zone (e.g., via ramoops_pstore_read -> ramoops_get_next_prz
for PSTORE_TYPE_DMESG records).
Currently, the function only allocates prz->old_log when it is NULL,
but it unconditionally updates prz->old_log_size to the current buffer
size and then performs memcpy_fromio() using this new size. If the
buffer size has grown since the first allocation (which can happen
across different kernel boot cycles), this leads to:
1. A heap buffer overflow (OOB write) in the memcpy_fromio() calls
2. A subsequent OOB read when ramoops_pstore_read() accesses the buffer
using the incorrect (larger) old_log_size
The KASAN splat would look similar to:
BUG: KASAN: slab-out-of-bounds in ramoops_pstore_read+0x...
Read of size N at addr ... by task ...
The conditions are likely extremely hard to hit:
0. Crash with a ramoops write of less-than-record-max-size bytes.
1. Reboot: ramoops registers, pstore_get_records(0) reads old crash,
allocates old_log with size X
2. Crash handler registered, timer started (if pstore_update_ms >= 0)
3. Oops happens (non-fatal, system continues)
4. pstore_dump() writes oops via ramoops_pstore_write() size Y (>X)
5. pstore_new_entry = 1, pstore_timer_kick() called
6. System continues running (not a panic oops)
7. Timer fires after pstore_update_ms milliseconds
8. pstore_timefunc() → schedule_work() → pstore_dowork() → pstore_get_records(1)
9. ramoops_get_next_prz() → persistent_ram_save_old()
10. buffer_size() returns Y, but old_log is X bytes
11. Y > X: memcpy_fromio() overflows heap
Requirements:
- a prior crash record exists that did not fill the record size
(almost impossible since the crash handler writes as much as it
can possibly fit into the record, capped by max record size and
the kmsg buffer almost always exceeds the max record size)
- pstore_update_ms >= 0 (disabled by default)
- Non-fatal oops (system survives)
Free and reallocate the buffer when the new size differs from the
previously allocated size. This ensures old_log always has sufficient
space for the data being copied.
Fixes: 201e4aca5aa1 ("pstore/ram: Should update old dmesg buffer before reading")
Signed-off-by: Sai Ritvik Tanksalkar <stanksal(a)purdue.edu>
Link: https://patch.msgid.link/20260201132240.2948732-1-stanksal@purdue.edu
Signed-off-by: Kees Cook <kees(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Lin Yujun <linyujun809(a)h-partners.com>
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
fs/pstore/ram_core.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
index 5ac9b1f155a8..97ec9041b9b9 100644
--- a/fs/pstore/ram_core.c
+++ b/fs/pstore/ram_core.c
@@ -298,6 +298,17 @@ void persistent_ram_save_old(struct persistent_ram_zone *prz)
if (!size)
return;
+ /*
+ * If the existing buffer is differently sized, free it so a new
+ * one is allocated. This can happen when persistent_ram_save_old()
+ * is called early in boot and later for a timer-triggered
+ * survivable crash when the crash dumps don't match in size
+ * (which would be extremely unlikely given kmsg buffers usually
+ * exceed prz buffer sizes).
+ */
+ if (prz->old_log && prz->old_log_size != size)
+ persistent_ram_free_old(prz);
+
if (!prz->old_log) {
persistent_ram_ecc_old(prz);
prz->old_log = kmalloc(size, GFP_KERNEL);
--
2.34.1
07 Apr '26
From: Oleg Nesterov <oleg(a)redhat.com>
stable inclusion
from stable-v6.6.130
commit 9c05cd8f42325a53474093f372f6c08a56ae18d1
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14049/
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit d55c571e4333fac71826e8db3b9753fadfbead6a ]
This script
#!/usr/bin/bash
echo 0 > /proc/sys/kernel/randomize_va_space
echo 'void main(void) {}' > TEST.c
# -fcf-protection to ensure that the 1st endbr32 insn can't be emulated
gcc -m32 -fcf-protection=branch TEST.c -o test
bpftrace -e 'uprobe:./test:main {}' -c ./test
"hangs", the probed ./test task enters an endless loop.
The problem is that with randomize_va_space == 0
get_unmapped_area(TASK_SIZE - PAGE_SIZE) called by xol_add_vma() can not
just return the "addr == TASK_SIZE - PAGE_SIZE" hint, this addr is used
by the stack vma.
arch_get_unmapped_area_topdown() doesn't take TIF_ADDR32 into account and
in_32bit_syscall() is false, this leads to info.high_limit > TASK_SIZE.
vm_unmapped_area() happily returns the high address > TASK_SIZE and then
get_unmapped_area() returns -ENOMEM after the "if (addr > TASK_SIZE - len)"
check.
handle_swbp() doesn't report this failure (probably it should) and silently
restarts the probed insn. Endless loop.
I think that the right fix should change the x86 get_unmapped_area() paths
to rely on TIF_ADDR32 rather than in_32bit_syscall(). Note also that if
CONFIG_X86_X32_ABI=y, in_x32_syscall() falsely returns true in this case
because ->orig_ax = -1.
But we need a simple fix for -stable, so this patch just sets TS_COMPAT if
the probed task is 32-bit to make in_ia32_syscall() true.
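The workaround is a set-flag-around-call pattern: temporarily mark the
task as compat so the generic unmapped-area path computes a 32-bit-safe
high limit, then clear the flag. A miniature model with illustrative
flag and limit values (not the real x86 definitions):

```c
#include <assert.h>
#include <stdbool.h>

#define TS_COMPAT 0x2 /* illustrative flag value */

static unsigned int status;

/* Stand-in for the in_32bit_syscall()-driven high_limit selection. */
static unsigned long high_limit(void)
{
	return (status & TS_COMPAT) ? 0xFFFFF000UL : 0x7FFFFFFFF000UL;
}

/* The patch's pattern: set the flag around the lookup so the generic
 * path picks a limit valid for a 32-bit task, then clear it again. */
static unsigned long limit_for_task(bool task_is_32bit)
{
	unsigned long limit;

	if (task_is_32bit)
		status |= TS_COMPAT;
	limit = high_limit();
	status &= ~TS_COMPAT;
	return limit;
}
```

This keeps the hint from resolving above TASK_SIZE for the 32-bit
task, avoiding the -ENOMEM / silent-restart loop described above.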
Fixes: 1b028f784e8c ("x86/mm: Introduce mmap_compat_base() for 32-bit mmap()")
Reported-by: Paulo Andrade <pandrade(a)redhat.com>
Signed-off-by: Oleg Nesterov <oleg(a)redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Link: https://lore.kernel.org/all/aV5uldEvV7pb4RA8@redhat.com/
Cc: stable(a)vger.kernel.org
Link: https://patch.msgid.link/aWO7Fdxn39piQnxu@redhat.com
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
arch/x86/kernel/uprobes.c | 24 ++++++++++++++++++++++++
include/linux/uprobes.h | 1 +
kernel/events/uprobes.c | 10 +++++++---
3 files changed, 32 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index 6402fb3089d2..aac2a2c5c6c5 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -1102,3 +1102,27 @@ bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx,
else
return regs->sp <= ret->stack;
}
+
+#ifdef CONFIG_IA32_EMULATION
+unsigned long arch_uprobe_get_xol_area(void)
+{
+ struct thread_info *ti = current_thread_info();
+ unsigned long vaddr;
+
+ /*
+ * HACK: we are not in a syscall, but x86 get_unmapped_area() paths
+ * ignore TIF_ADDR32 and rely on in_32bit_syscall() to calculate
+ * vm_unmapped_area_info.high_limit.
+ *
+ * The #ifdef above doesn't cover the CONFIG_X86_X32_ABI=y case,
+ * but in this case in_32bit_syscall() -> in_x32_syscall() always
+ * (falsely) returns true because ->orig_ax == -1.
+ */
+ if (test_thread_flag(TIF_ADDR32))
+ ti->status |= TS_COMPAT;
+ vaddr = get_unmapped_area(NULL, TASK_SIZE - PAGE_SIZE, PAGE_SIZE, 0, 0);
+ ti->status &= ~TS_COMPAT;
+
+ return vaddr;
+}
+#endif
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index b0c15a04adcc..7c63f47a497d 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -147,6 +147,7 @@ extern bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check c
extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
void *src, unsigned long len);
+extern unsigned long arch_uprobe_get_xol_area(void);
#else /* !CONFIG_UPROBES */
struct uprobes_state {
};
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 9d2d68d171e9..8f73ab934c38 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1478,6 +1478,12 @@ void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned lon
set_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags);
}
+unsigned long __weak arch_uprobe_get_xol_area(void)
+{
+ /* Try to map as high as possible, this is only a hint. */
+ return get_unmapped_area(NULL, TASK_SIZE - PAGE_SIZE, PAGE_SIZE, 0, 0);
+}
+
/* Slot allocation for XOL */
static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
{
@@ -1493,9 +1499,7 @@ static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
}
if (!area->vaddr) {
- /* Try to map as high as possible, this is only a hint. */
- area->vaddr = get_unmapped_area(NULL, TASK_SIZE - PAGE_SIZE,
- PAGE_SIZE, 0, 0);
+ area->vaddr = arch_uprobe_get_xol_area();
if (IS_ERR_VALUE(area->vaddr)) {
ret = area->vaddr;
goto fail;
--
2.34.1
07 Apr '26
From: Oleg Nesterov <oleg(a)redhat.com>
mainline inclusion
from mainline-v7.0-rc1
commit d55c571e4333fac71826e8db3b9753fadfbead6a
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14049/
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
This script
#!/usr/bin/bash
echo 0 > /proc/sys/kernel/randomize_va_space
echo 'void main(void) {}' > TEST.c
# -fcf-protection to ensure that the 1st endbr32 insn can't be emulated
gcc -m32 -fcf-protection=branch TEST.c -o test
bpftrace -e 'uprobe:./test:main {}' -c ./test
"hangs", the probed ./test task enters an endless loop.
The problem is that with randomize_va_space == 0
get_unmapped_area(TASK_SIZE - PAGE_SIZE) called by xol_add_vma() can not
just return the "addr == TASK_SIZE - PAGE_SIZE" hint, this addr is used
by the stack vma.
arch_get_unmapped_area_topdown() doesn't take TIF_ADDR32 into account and
in_32bit_syscall() is false, this leads to info.high_limit > TASK_SIZE.
vm_unmapped_area() happily returns the high address > TASK_SIZE and then
get_unmapped_area() returns -ENOMEM after the "if (addr > TASK_SIZE - len)"
check.
handle_swbp() doesn't report this failure (probably it should) and silently
restarts the probed insn. Endless loop.
I think that the right fix should change the x86 get_unmapped_area() paths
to rely on TIF_ADDR32 rather than in_32bit_syscall(). Note also that if
CONFIG_X86_X32_ABI=y, in_x32_syscall() falsely returns true in this case
because ->orig_ax = -1.
But we need a simple fix for -stable, so this patch just sets TS_COMPAT if
the probed task is 32-bit to make in_ia32_syscall() true.
Fixes: 1b028f784e8c ("x86/mm: Introduce mmap_compat_base() for 32-bit mmap()")
Reported-by: Paulo Andrade <pandrade(a)redhat.com>
Signed-off-by: Oleg Nesterov <oleg(a)redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Link: https://lore.kernel.org/all/aV5uldEvV7pb4RA8@redhat.com/
Cc: stable(a)vger.kernel.org
Link: https://patch.msgid.link/aWO7Fdxn39piQnxu@redhat.com
Conflicts:
arch/x86/kernel/uprobes.c
include/linux/uprobes.h
kernel/events/uprobes.c
[Context conflicts only]
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
arch/x86/kernel/uprobes.c | 24 ++++++++++++++++++++++++
include/linux/uprobes.h | 1 +
kernel/events/uprobes.c | 10 +++++++---
3 files changed, 32 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index 9f948b2d26f6..099ca674e3de 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -1095,3 +1095,27 @@ bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx,
else
return regs->sp <= ret->stack;
}
+
+#ifdef CONFIG_IA32_EMULATION
+unsigned long arch_uprobe_get_xol_area(void)
+{
+ struct thread_info *ti = current_thread_info();
+ unsigned long vaddr;
+
+ /*
+ * HACK: we are not in a syscall, but x86 get_unmapped_area() paths
+ * ignore TIF_ADDR32 and rely on in_32bit_syscall() to calculate
+ * vm_unmapped_area_info.high_limit.
+ *
+ * The #ifdef above doesn't cover the CONFIG_X86_X32_ABI=y case,
+ * but in this case in_32bit_syscall() -> in_x32_syscall() always
+ * (falsely) returns true because ->orig_ax == -1.
+ */
+ if (test_thread_flag(TIF_ADDR32))
+ ti->status |= TS_COMPAT;
+ vaddr = get_unmapped_area(NULL, TASK_SIZE - PAGE_SIZE, PAGE_SIZE, 0, 0);
+ ti->status &= ~TS_COMPAT;
+
+ return vaddr;
+}
+#endif
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 2c693d6eb9cb..568c211b617e 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -144,6 +144,7 @@ extern bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check c
extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
void *src, unsigned long len);
+extern unsigned long arch_uprobe_get_xol_area(void);
#else /* !CONFIG_UPROBES */
struct uprobes_state {
};
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index e51dbb0e8a56..fe518be0615f 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1443,6 +1443,12 @@ void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned lon
set_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags);
}
+unsigned long __weak arch_uprobe_get_xol_area(void)
+{
+ /* Try to map as high as possible, this is only a hint. */
+ return get_unmapped_area(NULL, TASK_SIZE - PAGE_SIZE, PAGE_SIZE, 0, 0);
+}
+
/* Slot allocation for XOL */
static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
{
@@ -1458,9 +1464,7 @@ static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
}
if (!area->vaddr) {
- /* Try to map as high as possible, this is only a hint. */
- area->vaddr = get_unmapped_area(NULL, TASK_SIZE - PAGE_SIZE,
- PAGE_SIZE, 0, 0);
+ area->vaddr = arch_uprobe_get_xol_area();
if (IS_ERR_VALUE(area->vaddr)) {
ret = area->vaddr;
goto fail;
--
2.34.1
Fix riscv compiling error.
Björn Töpel (1):
riscv: Replace function-like macro by static inline function
Tengda Wu (1):
Revert "riscv: stacktrace: Disable KASAN checks for non-current tasks"
arch/riscv/include/asm/cacheflush.h | 15 ++++++++++-----
arch/riscv/kernel/stacktrace.c | 19 ++-----------------
2 files changed, 12 insertions(+), 22 deletions(-)
--
2.34.1
yt6801 inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/7229
--------------------------------------------------------------------
Fix the link info not being updated during OS installation.
Fixes: b9f5c0893d16 ("net: yt6801: add link info for yt6801")
Signed-off-by: Frank_Sae <Frank.Sae(a)motor-comm.com>
---
.../net/ethernet/motorcomm/yt6801/yt6801_main.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_main.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_main.c
index 01eed3ace..3fcb6f853 100644
--- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_main.c
+++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_main.c
@@ -27,6 +27,18 @@
const struct net_device_ops *fxgmac_get_netdev_ops(void);
static void fxgmac_napi_enable(struct fxgmac_pdata *priv);
+const struct ethtool_ops *fxgmac_get_ethtool_ops(void);
+
+static const struct ethtool_ops fxgmac_ethtool_ops = {
+ .get_link = ethtool_op_get_link,
+ .get_link_ksettings = phy_ethtool_get_link_ksettings,
+ .set_link_ksettings = phy_ethtool_set_link_ksettings
+};
+
+const struct ethtool_ops *fxgmac_get_ethtool_ops(void)
+{
+ return &fxgmac_ethtool_ops;
+}
#define PHY_WR_CONFIG(reg_offset) (0x8000205 + ((reg_offset) * 0x10000))
static int fxgmac_phy_write_reg(struct fxgmac_pdata *priv, u32 reg_id, u32 data)
@@ -1898,7 +1910,9 @@ static int fxgmac_init(struct fxgmac_pdata *priv, bool save_private_reg)
ndev->max_mtu =
FXGMAC_JUMBO_PACKET_MTU + (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN);
+
ndev->netdev_ops = fxgmac_get_netdev_ops();/* Set device operations */
+ ndev->ethtool_ops = fxgmac_get_ethtool_ops();/* Set device operations */
/* Set device features */
if (priv->hw_feat.tso) {
--
2.34.1
[PATCH OLK-6.6] riscv: Replace function-like macro by static inline function
by Tengda Wu 07 Apr '26
From: Björn Töpel <bjorn(a)rivosinc.com>
stable inclusion
from stable-v6.6.133
commit 0b1ac9743f3d9cfced2ac3cb9f274c0675bd4189
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8883
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 121f34341d396b666d8a90b24768b40e08ca0d61 upstream.
The flush_icache_range() function is implemented as a "function-like
macro with unused parameters", which can result in "unused variables"
warnings.
Replace the macro with a static inline function, as advised by
Documentation/process/coding-style.rst.
Fixes: 08f051eda33b ("RISC-V: Flush I$ when making a dirty page executable")
Signed-off-by: Björn Töpel <bjorn(a)rivosinc.com>
Link: https://lore.kernel.org/r/20250419111402.1660267-1-bjorn@kernel.org
Signed-off-by: Palmer Dabbelt <palmer(a)rivosinc.com>
Signed-off-by: Ron Economos <re(a)w6rz.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Tengda Wu <wutengda2(a)huawei.com>
---
arch/riscv/include/asm/cacheflush.h | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
index 3f65acd0ef75..2b7f5da96c50 100644
--- a/arch/riscv/include/asm/cacheflush.h
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -34,11 +34,6 @@ static inline void flush_dcache_page(struct page *page)
flush_dcache_folio(page_folio(page));
}
-/*
- * RISC-V doesn't have an instruction to flush parts of the instruction cache,
- * so instead we just flush the whole thing.
- */
-#define flush_icache_range(start, end) flush_icache_all()
#define flush_icache_user_page(vma, pg, addr, len) \
flush_icache_mm(vma->vm_mm, 0)
@@ -59,6 +54,16 @@ void flush_icache_mm(struct mm_struct *mm, bool local);
#endif /* CONFIG_SMP */
+/*
+ * RISC-V doesn't have an instruction to flush parts of the instruction cache,
+ * so instead we just flush the whole thing.
+ */
+#define flush_icache_range flush_icache_range
+static inline void flush_icache_range(unsigned long start, unsigned long end)
+{
+ flush_icache_all();
+}
+
extern unsigned int riscv_cbom_block_size;
extern unsigned int riscv_cboz_block_size;
void riscv_init_cbo_blocksizes(void);
--
2.34.1
[PATCH OLK-6.6 v5 0/2] kvm: arm64: Transition from CPU Type to MIDR Register for Virtualization Feature Detection
by liqiqi 07 Apr '26
Currently, there are two methods for determining whether a chip supports
specific virtualization features:
1. Reading the chip's CPU type from BIOS
2. Reading the value of the MIDR register
The issue with the first method is that each time a new chip is introduced,
the new CPU type must be defined, which leads to poor code portability and
maintainability.
Therefore, the second method has been adopted to replace the first. This
approach eliminates the dependency on CPU type by using the MIDR register.
liqiqi (2):
kvm: arm64: Add MIDR definitions and use MIDR to determine whether
features are supported
kvm: arm64: Remove cpu_type definition and its related interfaces
arch/arm64/include/asm/cache.h | 2 +-
arch/arm64/include/asm/cputype.h | 8 +-
arch/arm64/kernel/cpu_errata.c | 4 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/proton-pack.c | 4 +-
arch/arm64/kvm/arm.c | 1 -
arch/arm64/kvm/hisilicon/hisi_virt.c | 110 +++--------------------
arch/arm64/kvm/hisilicon/hisi_virt.h | 12 ---
drivers/perf/hisilicon/hisi_uncore_pmu.c | 2 +-
tools/arch/arm64/include/asm/cputype.h | 4 +-
10 files changed, 25 insertions(+), 124 deletions(-)
--
2.43.0
[PATCH OLK-5.10] apparmor: fix unprivileged local user can do privileged policy management
by Yi Yang 07 Apr '26
From: John Johansen <john.johansen(a)canonical.com>
mainline inclusion
from mainline-v7.0-rc4
commit 6601e13e82841879406bf9f369032656f441a425
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13879
CVE: CVE-2026-23268
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
An unprivileged local user can load, replace, and remove profiles by
opening the apparmorfs interfaces, via a confused deputy attack, by
passing the opened fd to a privileged process, and getting the
privileged process to write to the interface.
This does require a privileged target that can be manipulated to do
the write for the unprivileged process, but once such access is
achieved full policy management is possible and all the possible
implications that implies: removing confinement, DoS of system or
target applications by denying all execution, by-passing the
unprivileged user namespace restriction, to exploiting kernel bugs for
a local privilege escalation.
The policy management interface can not have its permissions simply
changed from 0666 to 0600 because non-root processes need to be able
to load policy to different policy namespaces.
Instead ensure the task writing the interface has privileges that
are a subset of the task that opened the interface. This is already
done via policy for confined processes, but unconfined can delegate
access to the opened fd, by-passing the usual policy check.
Fixes: b7fd2c0340eac ("apparmor: add per policy ns .load, .replace, .remove interface files")
Reported-by: Qualys Security Advisory <qsa(a)qualys.com>
Tested-by: Salvatore Bonaccorso <carnil(a)debian.org>
Reviewed-by: Georgia Garcia <georgia.garcia(a)canonical.com>
Reviewed-by: Cengiz Can <cengiz.can(a)canonical.com>
Signed-off-by: John Johansen <john.johansen(a)canonical.com>
Conflicts:
security/apparmor/apparmorfs.c
security/apparmor/include/policy.h
security/apparmor/policy.c
[Commit 90c436a64a6e ("apparmor: pass cred through to audit info.") was
not merged. The aa_may_manage_policy function is different.]
Signed-off-by: Yi Yang <yiyang13(a)huawei.com>
---
security/apparmor/apparmorfs.c | 19 +++++++++------
security/apparmor/include/policy.h | 2 +-
security/apparmor/policy.c | 37 ++++++++++++++++++++++++++++--
3 files changed, 48 insertions(+), 10 deletions(-)
diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
index 06eac2266565..0900fd07def7 100644
--- a/security/apparmor/apparmorfs.c
+++ b/security/apparmor/apparmorfs.c
@@ -409,7 +409,8 @@ static struct aa_loaddata *aa_simple_write_to_buffer(const char __user *userbuf,
}
static ssize_t policy_update(u32 mask, const char __user *buf, size_t size,
- loff_t *pos, struct aa_ns *ns)
+ loff_t *pos, struct aa_ns *ns,
+ const struct cred *ocred)
{
struct aa_loaddata *data;
struct aa_label *label;
@@ -420,7 +421,7 @@ static ssize_t policy_update(u32 mask, const char __user *buf, size_t size,
/* high level check about policy management - fine grained in
* below after unpack
*/
- error = aa_may_manage_policy(label, ns, mask);
+ error = aa_may_manage_policy(label, ns, ocred, mask);
if (error)
goto end_section;
@@ -441,7 +442,8 @@ static ssize_t profile_load(struct file *f, const char __user *buf, size_t size,
loff_t *pos)
{
struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);
- int error = policy_update(AA_MAY_LOAD_POLICY, buf, size, pos, ns);
+ int error = policy_update(AA_MAY_LOAD_POLICY, buf, size, pos, ns,
+ f->f_cred);
aa_put_ns(ns);
@@ -459,7 +461,7 @@ static ssize_t profile_replace(struct file *f, const char __user *buf,
{
struct aa_ns *ns = aa_get_ns(f->f_inode->i_private);
int error = policy_update(AA_MAY_LOAD_POLICY | AA_MAY_REPLACE_POLICY,
- buf, size, pos, ns);
+ buf, size, pos, ns, f->f_cred);
aa_put_ns(ns);
return error;
@@ -483,7 +485,8 @@ static ssize_t profile_remove(struct file *f, const char __user *buf,
/* high level check about policy management - fine grained in
* below after unpack
*/
- error = aa_may_manage_policy(label, ns, AA_MAY_REMOVE_POLICY);
+ error = aa_may_manage_policy(label, ns, f->f_cred,
+ AA_MAY_REMOVE_POLICY);
if (error)
goto out;
@@ -1787,7 +1790,8 @@ static int ns_mkdir_op(struct inode *dir, struct dentry *dentry, umode_t mode)
int error;
label = begin_current_label_crit_section();
- error = aa_may_manage_policy(label, NULL, AA_MAY_LOAD_POLICY);
+ error = aa_may_manage_policy(label, NULL, NULL,
+ AA_MAY_LOAD_POLICY);
end_current_label_crit_section(label);
if (error)
return error;
@@ -1836,7 +1840,8 @@ static int ns_rmdir_op(struct inode *dir, struct dentry *dentry)
int error;
label = begin_current_label_crit_section();
- error = aa_may_manage_policy(label, NULL, AA_MAY_LOAD_POLICY);
+ error = aa_may_manage_policy(label, NULL, NULL,
+ AA_MAY_LOAD_POLICY);
end_current_label_crit_section(label);
if (error)
return error;
diff --git a/security/apparmor/include/policy.h b/security/apparmor/include/policy.h
index b5aa4231af68..f6682a31df23 100644
--- a/security/apparmor/include/policy.h
+++ b/security/apparmor/include/policy.h
@@ -304,6 +304,6 @@ static inline int AUDIT_MODE(struct aa_profile *profile)
bool policy_view_capable(struct aa_ns *ns);
bool policy_admin_capable(struct aa_ns *ns);
int aa_may_manage_policy(struct aa_label *label, struct aa_ns *ns,
- u32 mask);
+ const struct cred *ocred, u32 mask);
#endif /* __AA_POLICY_H */
diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
index fcf22577f606..e5f501f89803 100644
--- a/security/apparmor/policy.c
+++ b/security/apparmor/policy.c
@@ -671,14 +671,42 @@ bool policy_admin_capable(struct aa_ns *ns)
return policy_view_capable(ns) && capable && !aa_g_lock_policy;
}
+static bool is_subset_of_obj_privilege(const struct cred *cred,
+ struct aa_label *label,
+ const struct cred *ocred)
+{
+ if (cred == ocred)
+ return true;
+
+ if (!aa_label_is_subset(label, cred_label(ocred)))
+ return false;
+ /* don't allow crossing userns for now */
+ if (cred->user_ns != ocred->user_ns)
+ return false;
+ if (!cap_issubset(cred->cap_inheritable, ocred->cap_inheritable))
+ return false;
+ if (!cap_issubset(cred->cap_permitted, ocred->cap_permitted))
+ return false;
+ if (!cap_issubset(cred->cap_effective, ocred->cap_effective))
+ return false;
+ if (!cap_issubset(cred->cap_bset, ocred->cap_bset))
+ return false;
+ if (!cap_issubset(cred->cap_ambient, ocred->cap_ambient))
+ return false;
+ return true;
+}
+
/**
* aa_may_manage_policy - can the current task manage policy
* @label: label to check if it can manage policy
- * @op: the policy manipulation operation being done
+ * @ns: namespace being managed by @label (may be NULL if @label's ns)
+ * @ocred: object cred if request is coming from an open object
+ * @mask: contains the policy manipulation operation being done
*
* Returns: 0 if the task is allowed to manipulate policy else error
*/
-int aa_may_manage_policy(struct aa_label *label, struct aa_ns *ns, u32 mask)
+int aa_may_manage_policy(struct aa_label *label, struct aa_ns *ns,
+ const struct cred *ocred, u32 mask)
{
const char *op;
@@ -694,6 +722,11 @@ int aa_may_manage_policy(struct aa_label *label, struct aa_ns *ns, u32 mask)
return audit_policy(label, op, NULL, NULL, "policy_locked",
-EACCES);
+ if (ocred && !is_subset_of_obj_privilege(current_cred(), label, ocred))
+ return audit_policy(label, op, NULL, NULL,
+ "not privileged for target profile",
+ -EACCES);
+
if (!policy_admin_capable(ns))
return audit_policy(label, op, NULL, NULL, "not policy admin",
-EACCES);
--
2.25.1
From: Eric Biggers <ebiggers(a)kernel.org>
mainline inclusion
from mainline-v7.0-rc2
commit c5794709bc9105935dbedef8b9cf9c06f2b559fa
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13917
CVE: CVE-2026-23364
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
To prevent timing attacks, MAC comparisons need to be constant-time.
Replace the memcmp() with the correct function, crypto_memneq().
Fixes: e2f34481b24d ("cifsd: add server-side procedures for SMB3")
Cc: stable(a)vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers(a)kernel.org>
Acked-by: Namjae Jeon <linkinjeon(a)kernel.org>
Signed-off-by: Steve French <stfrench(a)microsoft.com>
Conflicts:
fs/smb/server/Kconfig
fs/smb/server/auth.c
fs/smb/server/smb2pdu.c
[Commit 38c8a9a52082 ("smb: move client and server files to common
directory fs/smb") move client and server files to common directory
fs/smb;
commit 7033b937e21b ("crypto: lib - create utils module and move
__crypto_memneq into it") change the config of memneq.c.]
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/ksmbd/Kconfig | 1 +
fs/ksmbd/auth.c | 4 +++-
fs/ksmbd/smb2pdu.c | 5 +++--
3 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/fs/ksmbd/Kconfig b/fs/ksmbd/Kconfig
index e1fe17747ed6..832c6a0dd327 100644
--- a/fs/ksmbd/Kconfig
+++ b/fs/ksmbd/Kconfig
@@ -11,6 +11,7 @@ config SMB_SERVER
select CRYPTO_ECB
select CRYPTO_LIB_DES
select CRYPTO_SHA256
+ select LIB_MEMNEQ
select CRYPTO_CMAC
select CRYPTO_SHA512
select CRYPTO_AEAD2
diff --git a/fs/ksmbd/auth.c b/fs/ksmbd/auth.c
index c5916064b85e..fb4a6f9e4e98 100644
--- a/fs/ksmbd/auth.c
+++ b/fs/ksmbd/auth.c
@@ -13,6 +13,7 @@
#include <linux/xattr.h>
#include <crypto/hash.h>
#include <crypto/aead.h>
+#include <crypto/algapi.h>
#include <linux/random.h>
#include <linux/scatterlist.h>
@@ -280,7 +281,8 @@ int ksmbd_auth_ntlmv2(struct ksmbd_conn *conn, struct ksmbd_session *sess,
goto out;
}
- if (memcmp(ntlmv2->ntlmv2_hash, ntlmv2_rsp, CIFS_HMAC_MD5_HASH_SIZE) != 0)
+ if (crypto_memneq(ntlmv2->ntlmv2_hash, ntlmv2_rsp,
+ CIFS_HMAC_MD5_HASH_SIZE))
rc = -EINVAL;
out:
if (ctx)
diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
index 1588867b4e71..d75de3035327 100644
--- a/fs/ksmbd/smb2pdu.c
+++ b/fs/ksmbd/smb2pdu.c
@@ -4,6 +4,7 @@
* Copyright (C) 2018 Samsung Electronics Co., Ltd.
*/
+#include <crypto/algapi.h>
#include <linux/inetdevice.h>
#include <net/addrconf.h>
#include <linux/syscalls.h>
@@ -8344,7 +8345,7 @@ int smb2_check_sign_req(struct ksmbd_work *work)
signature))
return 0;
- if (memcmp(signature, signature_req, SMB2_SIGNATURE_SIZE)) {
+ if (crypto_memneq(signature, signature_req, SMB2_SIGNATURE_SIZE)) {
pr_err("bad smb2 signature\n");
return 0;
}
@@ -8456,7 +8457,7 @@ int smb3_check_sign_req(struct ksmbd_work *work)
if (ksmbd_sign_smb3_pdu(conn, signing_key, iov, 1, signature))
return 0;
- if (memcmp(signature, signature_req, SMB2_SIGNATURE_SIZE)) {
+ if (crypto_memneq(signature, signature_req, SMB2_SIGNATURE_SIZE)) {
pr_err("bad smb2 signature\n");
return 0;
}
--
2.52.0
*** fix CVE-2026-23208 ***
Edward Adam Davis (1):
ALSA: usb-audio: Prevent excessive number of frames
Takashi Iwai (1):
ALSA: usb-audio: Use the right limit for PCM OOB check
sound/usb/pcm.c | 3 +++
1 file changed, 3 insertions(+)
--
2.43.0
yt6801 inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/7229
--------------------------------------------------------------------
Fix the link info not being updated during OS installation.
Fixes: b9f5c0893d16 ("net: yt6801: add link info for yt6801")
Signed-off-by: Frank_Sae <Frank.Sae(a)motor-comm.com>
---
.../ethernet/motorcomm/yt6801/yt6801_main.c | 91 +++++++++++++++++++
1 file changed, 91 insertions(+)
diff --git a/drivers/net/ethernet/motorcomm/yt6801/yt6801_main.c b/drivers/net/ethernet/motorcomm/yt6801/yt6801_main.c
index 01eed3ace..83b5ab258 100644
--- a/drivers/net/ethernet/motorcomm/yt6801/yt6801_main.c
+++ b/drivers/net/ethernet/motorcomm/yt6801/yt6801_main.c
@@ -27,6 +27,95 @@
const struct net_device_ops *fxgmac_get_netdev_ops(void);
static void fxgmac_napi_enable(struct fxgmac_pdata *priv);
+const struct ethtool_ops *fxgmac_get_ethtool_ops(void);
+
+#define MII_SPEC_STATUS 0x11 /* PHY specific status */
+#define FXGMAC_EPHY_LINK_STATUS BIT(10)
+#define PHY_MII_SPEC_DUPLEX BIT(13)
+
+static int fxgmac_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *cmd)
+{
+ struct fxgmac_pdata *pdata = netdev_priv(netdev);
+ struct phy_device *phydev = netdev->phydev;
+ u32 duplex, regval, link_status;
+ u32 adv = 0xFFFFFFFF;
+
+ ethtool_link_ksettings_zero_link_mode(cmd, supported);
+ ethtool_link_ksettings_zero_link_mode(cmd, advertising);
+
+ /* set the supported link speeds */
+ ethtool_link_ksettings_add_link_mode(cmd, supported, 1000baseT_Full);
+ ethtool_link_ksettings_add_link_mode(cmd, supported, 100baseT_Full);
+ ethtool_link_ksettings_add_link_mode(cmd, supported, 100baseT_Half);
+ ethtool_link_ksettings_add_link_mode(cmd, supported, 10baseT_Full);
+ ethtool_link_ksettings_add_link_mode(cmd, supported, 10baseT_Half);
+
+ /* Indicate pause support */
+ ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
+ ethtool_link_ksettings_add_link_mode(cmd, supported, Asym_Pause);
+
+ adv = phy_read(phydev, MII_ADVERTISE);
+
+ if (field_get(ADVERTISE_PAUSE_CAP, adv))
+ ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+
+ if (field_get(ADVERTISE_PAUSE_ASYM, adv))
+ ethtool_link_ksettings_add_link_mode(cmd, advertising, Asym_Pause);
+
+ ethtool_link_ksettings_add_link_mode(cmd, supported, MII);
+ cmd->base.port = PORT_MII;
+
+ ethtool_link_ksettings_add_link_mode(cmd, supported, Autoneg);
+ regval = phy_read(phydev, MII_BMCR);
+
+ regval = field_get(BMCR_ANENABLE, regval);
+ if (regval) {
+ ethtool_link_ksettings_add_link_mode(cmd, advertising, Autoneg);
+
+ adv = phy_read(phydev, MII_ADVERTISE);
+
+ if (adv & ADVERTISE_10HALF)
+ ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Half);
+ if (adv & ADVERTISE_10FULL)
+ ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Full);
+ if (adv & ADVERTISE_100HALF)
+ ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Half);
+ if (adv & ADVERTISE_100FULL)
+ ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Full);
+
+ adv = phy_read(phydev, MII_CTRL1000);
+
+ if (adv & ADVERTISE_1000FULL)
+ ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full);
+ }
+
+ cmd->base.autoneg = 1;
+
+ regval = phy_read(phydev, MII_SPEC_STATUS);
+
+ link_status = field_get(FXGMAC_EPHY_LINK_STATUS, regval);
+ if (link_status) {
+ duplex = field_get(PHY_MII_SPEC_DUPLEX, regval);
+ cmd->base.duplex = duplex;
+ cmd->base.speed = pdata->mac_speed;
+ } else {
+ cmd->base.duplex = DUPLEX_UNKNOWN;
+ cmd->base.speed = SPEED_UNKNOWN;
+ }
+
+ return 0;
+}
+
+static const struct ethtool_ops fxgmac_ethtool_ops = {
+ .get_link = ethtool_op_get_link,
+ .get_link_ksettings = fxgmac_get_link_ksettings,
+};
+
+const struct ethtool_ops *fxgmac_get_ethtool_ops(void)
+{
+ return &fxgmac_ethtool_ops;
+}
#define PHY_WR_CONFIG(reg_offset) (0x8000205 + ((reg_offset) * 0x10000))
static int fxgmac_phy_write_reg(struct fxgmac_pdata *priv, u32 reg_id, u32 data)
@@ -1898,7 +1987,9 @@ static int fxgmac_init(struct fxgmac_pdata *priv, bool save_private_reg)
ndev->max_mtu =
FXGMAC_JUMBO_PACKET_MTU + (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN);
+
ndev->netdev_ops = fxgmac_get_netdev_ops();/* Set device operations */
+ ndev->ethtool_ops = fxgmac_get_ethtool_ops();/* Set device operations */
/* Set device features */
if (priv->hw_feat.tso) {
--
2.34.1
Aboorva Devarajan (2):
sched_ext: Documentation: Remove mentions of scx_bpf_switch_all
sched: Pass correct scheduling policy to __setscheduler_class
Alan Maguire (16):
kbuild,bpf: Switch to using --btf_features for pahole v1.26 and later
kbuild, bpf: Use test-ge check for v1.25-only pahole
libbpf: Add btf__distill_base() creating split BTF with distilled base
BTF
selftests/bpf: Test distilled base, split BTF generation
libbpf: Split BTF relocation
selftests/bpf: Extend distilled BTF tests to cover BTF relocation
resolve_btfids: Handle presence of .BTF.base section
libbpf: BTF relocation followup fixing naming, loop logic
module, bpf: Store BTF base pointer in struct module
libbpf: Split field iter code into its own file kernel
libbpf,bpf: Share BTF relocate-related code with kernel
kbuild,bpf: Add module-specific pahole flags for distilled base BTF
selftests/bpf: Add kfunc_call test for simple dtor in bpf_testmod
bpf: fix build when CONFIG_DEBUG_INFO_BTF[_MODULES] is undefined
libbpf: Fix error handling in btf__distill_base()
libbpf: Fix license for btf_relocate.c
Alexander Lobakin (1):
bitops: make BYTES_TO_BITS() treewide-available
Alexei Starovoitov (2):
s390/bpf: Fix indirect trampoline generation
bpf: Introduce "volatile compare" macros
Andrea Righi (33):
sched_ext: fix typo in set_weight() description
sched_ext: add CONFIG_DEBUG_INFO_BTF dependency
sched_ext: Provide a sysfs enable_seq counter
sched_ext: improve WAKE_SYNC behavior for default idle CPU selection
sched_ext: Clarify ops.select_cpu() for single-CPU tasks
sched_ext: Introduce LLC awareness to the default idle selection
policy
sched_ext: Introduce NUMA awareness to the default idle selection
policy
sched_ext: Do not enable LLC/NUMA optimizations when domains overlap
sched_ext: Fix incorrect use of bitwise AND
MAINTAINERS: add self as reviewer for sched_ext
sched_ext: idle: Refresh idle masks during idle-to-idle transitions
sched_ext: Use the NUMA scheduling domain for NUMA optimizations
sched_ext: idle: use assign_cpu() to update the idle cpumask
sched_ext: idle: clarify comments
sched_ext: idle: introduce check_builtin_idle_enabled() helper
sched_ext: idle: small CPU iteration refactoring
sched_ext: update scx_bpf_dsq_insert() doc for SCX_DSQ_LOCAL_ON
sched_ext: Include remaining task time slice in error state dump
sched_ext: Include task weight in the error state dump
selftests/sched_ext: Fix enum resolution
tools/sched_ext: Add helper to check task migration state
sched_ext: selftests/dsp_local_on: Fix selftest on UP systems
sched_ext: Fix lock imbalance in dispatch_to_local_dsq()
selftests/sched_ext: Fix exit selftest hang on UP
sched_ext: Move built-in idle CPU selection policy to a separate file
sched_ext: Track currently locked rq
sched_ext: Make scx_locked_rq() inline
sched_ext: Fix missing rq lock in scx_bpf_cpuperf_set()
sched/ext: Fix invalid task state transitions on class switch
sched_ext: Make scx_kf_allowed_if_unlocked() available outside ext.c
sched_ext: Remove duplicate BTF_ID_FLAGS definitions
sched_ext: Fix rq lock state in hotplug ops
sched_ext: Validate prev_cpu in scx_bpf_select_cpu_dfl()
Andrii Nakryiko (14):
bpf: Emit global subprog name in verifier logs
bpf: Validate global subprogs lazily
selftests/bpf: Add lazy global subprog validation tests
libbpf: Add btf__new_split() API that was declared but not implemented
bpf: move sleepable flag from bpf_prog_aux to bpf_prog
libbpf: Add BTF field iterator
libbpf: Make use of BTF field iterator in BPF linker code
libbpf: Make use of BTF field iterator in BTF handling code
bpftool: Use BTF field iterator in btfgen
libbpf: Remove callback-based type/string BTF field visitor helpers
bpf: extract iterator argument type and name validation logic
bpf: allow passing struct bpf_iter_<type> as kfunc arguments
selftests/bpf: test passing iterator to a kfunc
selftests/bpf: validate eliminated global subprog is not freplaceable
Arnaldo Carvalho de Melo (1):
tools include UAPI: Sync linux/sched.h copy with the kernel sources
Atul Kumar Pant (1):
sched_ext: Fixes typos in comments
Benjamin Tissoires (1):
bpf: introduce in_sleepable() helper
Bitao Hu (4):
genirq: Convert kstat_irqs to a struct
genirq: Provide a snapshot mechanism for interrupt statistics
watchdog/softlockup: Low-overhead detection of interrupt storm
watchdog/softlockup: Report the most frequent interrupts
Björn Töpel (1):
selftests: sched_ext: Add sched_ext as proper selftest target
Breno Leitao (3):
rhashtable: Fix potential deadlock by moving schedule_work outside
lock
sched_ext: Use kvzalloc for large exit_dump allocation
sched/ext: Prevent update_locked_rq() calls with NULL rq
Changwoo Min (12):
sched_ext: Clarify sched_ext_ops table for userland scheduler
sched_ext: add a missing rcu_read_lock/unlock pair at
scx_select_cpu_dfl()
MAINTAINERS: add me as reviewer for sched_ext
sched_ext: Replace rq_lock() to raw_spin_rq_lock() in scx_ops_bypass()
sched_ext: Relocate scx_enabled() related code
sched_ext: Implement scx_bpf_now()
sched_ext: Add scx_bpf_now() for BPF scheduler
sched_ext: Add time helpers for BPF schedulers
sched_ext: Replace bpf_ktime_get_ns() to scx_bpf_now()
sched_ext: Use time helpers in BPF schedulers
sched_ext: Fix incorrect time delta calculation in time_delta()
sched_ext: Add scx_bpf_events() and scx_read_event() for BPF
schedulers
Cheng-Yang Chou (1):
sched_ext: Always use SMP versions in kernel/sched/ext.c
Christian Brauner (1):
file: add take_fd() cleanup helper
Christian Loehle (1):
sched/fair: Remove stale FREQUENCY_UTIL comment
Christophe Leroy (2):
bpf: Remove arch_unprotect_bpf_trampoline()
bpf: Check return from set_memory_rox()
Chuyi Zhou (15):
cgroup: Prepare for using css_task_iter_*() in BPF
bpf: Introduce css_task open-coded iterator kfuncs
bpf: Introduce task open coded iterator kfuncs
bpf: Introduce css open-coded iterator kfuncs
bpf: teach the verifier to enforce css_iter and task_iter in RCU CS
bpf: Let bpf_iter_task_new accept null task ptr
selftests/bpf: rename bpf_iter_task.c to bpf_iter_tasks.c
selftests/bpf: Add tests for open-coded task and css iter
bpf: Relax allowlist for css_task iter
selftests/bpf: Add tests for css_task iter combining with cgroup iter
selftests/bpf: Add test for using css_task iter in sleepable progs
bpf: Let verifier consider {task,cgroup} is trusted in bpf_iter_reg
selftests/bpf: get trusted cgrp from bpf_iter__cgroup directly
sched_ext: Fix the incorrect bpf_list kfunc API in common.bpf.h.
sched_ext: Use SCX_CALL_OP_TASK in task_tick_scx
Colin Ian King (1):
sched_ext: Fix spelling mistake: "intead" -> "instead"
Daniel Xu (3):
bpf: btf: Support flags for BTF_SET8 sets
bpf: btf: Add BTF_KFUNCS_START/END macro pair
bpf: treewide: Annotate BPF kfuncs in BTF
Dave Marchevsky (6):
bpf: Don't explicitly emit BTF for struct btf_iter_num
selftests/bpf: Rename bpf_iter_task_vma.c to bpf_iter_task_vmas.c
bpf: Introduce task_vma open-coded iterator kfuncs
selftests/bpf: Add tests for open-coded task_vma iter
bpf: Add __bpf_kfunc_{start,end}_defs macros
bpf: Add __bpf_hook_{start,end} macros
David Vernet (15):
bpf: Add ability to pin bpf timer to calling CPU
selftests/bpf: Test pinning bpf timer to a core
sched_ext: Implement runnable task stall watchdog
sched_ext: Print sched_ext info when dumping stack
sched_ext: Implement SCX_KICK_WAIT
sched_ext: Implement sched_ext_ops.cpu_acquire/release()
sched_ext: Add selftests
bpf: Load vmlinux btf for any struct_ops map
sched_ext: Make scx_bpf_cpuperf_set() @cpu arg signed
scx: Allow calling sleepable kfuncs from BPF_PROG_TYPE_SYSCALL
scx/selftests: Verify we can call create_dsq from prog_run
sched_ext: Remove unnecessary cpu_relax()
scx: Fix exit selftest to use custom DSQ
scx: Fix raciness in scx_ops_bypass()
scx: Fix maximal BPF selftest prog
Dawei Li (1):
genirq: Deduplicate interrupt descriptor initialization
Devaansh Kumar (1):
sched_ext: selftests: Fix grammar in tests description
Devaansh-Kumar (1):
sched_ext: Documentation: Update instructions for running example
schedulers
Eduard Zingerman (2):
libbpf: Make btf_parse_elf process .BTF.base transparently
selftests/bpf: Check if distilled base inherits source endianness
Geliang Tang (1):
bpf, btf: Check btf for register_bpf_struct_ops
Henry Huang (2):
sched_ext: initialize kit->cursor.flags
sched_ext: keep running prev when prev->scx.slice != 0
Herbert Xu (1):
rhashtable: Fix rhashtable_try_insert test
Honglei Wang (3):
sched_ext: use correct function name in pick_task_scx() warning
message
sched_ext: Add __weak to fix the build errors
sched_ext: switch class when preempted by higher priority scheduler
Hongyan Xia (1):
sched/ext: Add BPF function to fetch rq
Hou Tao (7):
bpf: Free dynamically allocated bits in bpf_iter_bits_destroy()
bpf: Add bpf_mem_alloc_check_size() helper
bpf: Check the validity of nr_words in bpf_iter_bits_new()
bpf: Use __u64 to save the bits in bits iterator
selftests/bpf: Add three test cases for bits_iter
selftests/bpf: Use -4095 as the bad address for bits iterator
selftests/bpf: Export map_update_retriable()
Ihor Solodrai (2):
selftests/sched_ext: add order-only dependency of runner.o on BPFOBJ
selftests/sched_ext: fix build after renames in sched_ext API
Ilpo Järvinen (1):
<linux/cleanup.h>: Allow the passing of both iomem and non-iomem
pointers to no_free_ptr()
Ingo Molnar (3):
sched/syscalls: Split out kernel/sched/syscalls.c from
kernel/sched/core.c
sched/fair: Rename check_preempt_wakeup() to
check_preempt_wakeup_fair()
sched/fair: Rename check_preempt_curr() to wakeup_preempt()
Jake Hillion (2):
sched_ext: create_dsq: Return -EEXIST on duplicate request
sched_ext: Drop kfuncs marked for removal in 6.15
Jiapeng Chong (1):
sched_ext: Fixes incorrect type in bpf_scx_init()
Jiayuan Chen (1):
selftests/bpf: Fixes for test_maps test
Kui-Feng Lee (29):
bpf: refactory struct_ops type initialization to a function.
bpf: get type information with BTF_ID_LIST
bpf, net: introduce bpf_struct_ops_desc.
bpf: add struct_ops_tab to btf.
bpf: make struct_ops_map support btfs other than btf_vmlinux.
bpf: pass btf object id in bpf_map_info.
bpf: lookup struct_ops types from a given module BTF.
bpf: pass attached BTF to the bpf_struct_ops subsystem
bpf: hold module refcnt in bpf_struct_ops map creation and prog
verification.
bpf: validate value_type
bpf, net: switch to dynamic registration
libbpf: Find correct module BTFs for struct_ops maps and progs.
bpf: export btf_ctx_access to modules.
selftests/bpf: test case for register_bpf_struct_ops().
bpf: Fix error checks against bpf_get_btf_vmlinux().
bpf: Remove an unnecessary check.
selftests/bpf: Suppress warning message of an unused variable.
bpf: add btf pointer to struct bpf_ctx_arg_aux.
bpf: Move __kfunc_param_match_suffix() to btf.c.
bpf: Create argument information for nullable arguments.
selftests/bpf: Test PTR_MAYBE_NULL arguments of struct_ops operators.
libbpf: Set btf_value_type_id of struct bpf_map for struct_ops.
libbpf: Convert st_ops->data to shadow type.
bpftool: Generated shadow variables for struct_ops maps.
bpftool: Add an example for struct_ops map and shadow type.
selftests/bpf: Test if shadow types work correctly.
bpf, net: validate struct_ops when updating value.
bpf: struct_ops supports more than one page for trampolines.
selftests/bpf: Test struct_ops maps with a large number of struct_ops
program.
Kumar Kartikeya Dwivedi (4):
bpf: Allow calling static subprogs while holding a bpf_spin_lock
selftests/bpf: Add test for static subprog call in lock cs
bpf: Transfer RCU lock state between subprog calls
selftests/bpf: Add tests for RCU lock transfer between subprogs
Liang Jie (1):
sched_ext: Use sizeof_field for key_len in dsq_hash_params
Luo Gengkun (7):
bpf: Fix kabi-breakage for bpf_func_info_aux
bpf: Fix kabi-breakage for bpf_tramp_image
bpf: Fix kabi for bpf_attr
bpf_verifier: Fix kabi for bpf_verifier_env
bpf: Fix kabi for bpf_ctx_arg_aux
bpf: Fix kabi for bpf_prog_aux and bpf_prog
selftests/bpf: modify test_loader that didn't support running
bpf_prog_type_syscall programs
Manu Bretelle (1):
sched_ext: define missing cfi stubs for sched_ext
Martin KaFai Lau (5):
libbpf: Ensure undefined bpf_attr field stays 0
bpf: Remove unnecessary err < 0 check in
bpf_struct_ops_map_update_elem
bpf: Fix a crash when btf_parse_base() returns an error pointer
bpf: Reject struct_ops registration that uses module ptr and the
module btf_id is missing
bpf: Use kallsyms to find the function name of a struct_ops's stub
function
Masahiro Yamada (1):
kbuild: avoid too many execution of scripts/pahole-flags.sh
Matthieu Baerts (1):
bpf: fix compilation error without CGROUPS
Peter Zijlstra (26):
cfi: Flip headers
x86/cfi,bpf: Fix BPF JIT call
x86/cfi,bpf: Fix bpf_callback_t CFI
x86/cfi,bpf: Fix bpf_struct_ops CFI
cfi: Add CFI_NOSEAL()
bpf: Fix dtor CFI
cleanup: Make no_free_ptr() __must_check
sched: Simplify set_user_nice()
sched: Simplify syscalls
sched: Simplify sched_{set,get}affinity()
sched: Simplify yield_to()
sched: Simplify sched_rr_get_interval()
sched: Simplify sched_move_task()
sched: Misc cleanups
sched/deadline: Move bandwidth accounting into {en,de}queue_dl_entity
sched: Allow sched_class::dequeue_task() to fail
sched: Unify runtime accounting across classes
sched: Use set_next_task(.first) where required
sched/fair: Cleanup pick_task_fair() vs throttle
sched/fair: Cleanup pick_task_fair()'s curr
sched/fair: Unify pick_{,next_}_task_fair()
sched: Fixup set_next_task() implementations
sched: Split up put_prev_task_balance()
sched: Rework pick_next_task()
sched: Combine the last put_prev_task() and the first set_next_task()
sched: Add put_prev_task(.next)
Pu Lehui (8):
riscv, bpf: Fix unpredictable kernel crash about RV64 struct_ops
bpf: Fix kabi breakage in struct module
riscv, bpf: Fix out-of-bounds issue when preparing trampoline image
selftests/bpf: Fix btf leak on new btf alloc failure in btf_distill
test
libbpf: Fix return zero when elf_begin failed
libbpf: Fix incorrect traversal end type ID when marking
BTF_IS_EMBEDDED
selftests/bpf: Add distilled BTF test about marking BTF_IS_EMBEDDED
selftests/bpf: Add file_read_pattern to gitignore
Randy Dunlap (1):
sched_ext: fix kernel-doc warnings
Shizhao Chen (1):
sched_ext: Add option -l in selftest runner to list all available
tests
Song Liu (8):
bpf: Charge modmem for struct_ops trampoline
bpf: Let bpf_prog_pack_free handle any pointer
bpf: Adjust argument names of arch_prepare_bpf_trampoline()
bpf: Add helpers for trampoline image management
bpf, x86: Adjust arch_prepare_bpf_trampoline return value
bpf: Add arch_bpf_trampoline_size()
bpf: Use arch_bpf_trampoline_size
x86, bpf: Use bpf_prog_pack for bpf trampoline
T.J. Mercier (1):
bpf, docs: Fix broken link to renamed bpf_iter_task_vmas.c
Tejun Heo (152):
sched: Restructure sched_class order sanity checks in sched_init()
sched: Allow sched_cgroup_fork() to fail and introduce
sched_cancel_fork()
sched: Add sched_class->reweight_task()
sched: Add sched_class->switching_to() and expose
check_class_changing/changed()
sched: Factor out cgroup weight conversion functions
sched: Factor out update_other_load_avgs() from
__update_blocked_others()
sched: Add normal_policy()
sched_ext: Add boilerplate for extensible scheduler class
sched_ext: Implement BPF extensible scheduler class
sched_ext: Add scx_simple and scx_example_qmap example schedulers
sched_ext: Add sysrq-S which disables the BPF scheduler
sched_ext: Allow BPF schedulers to disallow specific tasks from
joining SCHED_EXT
sched_ext: Print debug dump after an error exit
tools/sched_ext: Add scx_show_state.py
sched_ext: Implement scx_bpf_kick_cpu() and task preemption support
sched_ext: Add a central scheduler which makes all scheduling
decisions on one CPU
sched_ext: Make watchdog handle ops.dispatch() looping stall
sched_ext: Add task state tracking operations
sched_ext: Implement tickless support
sched_ext: Track tasks that are subjects of the in-flight SCX
operation
sched_ext: Implement sched_ext_ops.cpu_online/offline()
sched_ext: Bypass BPF scheduler while PM events are in progress
sched_ext: Implement core-sched support
sched_ext: Add vtime-ordered priority queue to dispatch_q's
sched_ext: Documentation: scheduler: Document extensible scheduler
class
sched, sched_ext: Replace scx_next_task_picked() with
sched_class->switch_class()
cpufreq_schedutil: Refactor sugov_cpu_is_busy()
sched_ext: Add cpuperf support
sched_ext: Drop tools_clean target from the top-level Makefile
sched_ext: Swap argument positions in kcalloc() call to avoid compiler
warning
sched, sched_ext: Simplify dl_prio() case handling in sched_fork()
sched_ext: Account for idle policy when setting p->scx.weight in
scx_ops_enable_task()
sched_ext: Disallow loading BPF scheduler if isolcpus= domain
isolation is in effect
sched_ext: Minor cleanups in kernel/sched/ext.h
sched, sched_ext: Open code for_balance_class_range()
sched, sched_ext: Move some declarations from kernel/sched/ext.h to
sched.h
sched_ext: Take out ->priq and ->flags from scx_dsq_node
sched_ext: Implement DSQ iterator
sched_ext/scx_qmap: Add an example usage of DSQ iterator
sched_ext: Reimplement scx_bpf_reenqueue_local()
sched_ext: Make scx_bpf_reenqueue_local() skip tasks that are being
migrated
sched: Move struct balance_callback definition upward
sched_ext: Unpin and repin rq lock from balance_scx()
sched_ext: s/SCX_RQ_BALANCING/SCX_RQ_IN_BALANCE/ and add
SCX_RQ_IN_WAKEUP
sched_ext: Allow SCX_DSQ_LOCAL_ON for direct dispatches
sched_ext/scx_qmap: Pick idle CPU for direct dispatch on !wakeup
enqueues
sched_ext: Build fix on !CONFIG_STACKTRACE[_SUPPORT]
sched_ext: Allow p->scx.disallow only while loading
sched_ext: Simplify scx_can_stop_tick() invocation in
sched_can_stop_tick()
sched_ext: Add scx_enabled() test to @start_class promotion in
put_prev_task_balance()
sched_ext: Use update_curr_common() in update_curr_scx()
sched_ext: Simplify UP support by enabling sched_class->balance() in
UP
sched_ext: Improve comment on idle_sched_class exception in
scx_task_iter_next_locked()
sched_ext: Make task_can_run_on_remote_rq() use common
task_allowed_on_cpu()
sched_ext: Fix unsafe list iteration in process_ddsp_deferred_locals()
sched_ext: Make scx_rq_online() also test cpu_active() in addition to
SCX_RQ_ONLINE
sched_ext: Improve logging around enable/disable
sched_ext: Don't use double locking to migrate tasks across CPUs
scx_central: Fix smatch checker warning
sched_ext: Add missing cfi stub for ops.tick
sched_ext: Use task_can_run_on_remote_rq() test in
dispatch_to_local_dsq()
sched_ext: Use sched_clock_cpu() instead of rq_clock_task() in
touch_core_sched()
sched_ext: Don't call put_prev_task_scx() before picking the next task
sched_ext: Replace SCX_TASK_BAL_KEEP with SCX_RQ_BAL_KEEP
sched_ext: Unify regular and core-sched pick task paths
sched_ext: Relocate functions in kernel/sched/ext.c
sched_ext: Remove switch_class_scx()
sched_ext: Remove sched_class->switch_class()
sched_ext: TASK_DEAD tasks must be switched out of SCX on ops_disable
sched_ext: TASK_DEAD tasks must be switched into SCX on ops_enable
sched: Expose css_tg()
sched: Make cpu_shares_read_u64() use tg_weight()
sched: Introduce CONFIG_GROUP_SCHED_WEIGHT
sched_ext: Add cgroup support
sched_ext: Add a cgroup scheduler which uses flattened hierarchy
sched_ext: Temporarily work around pick_task_scx() being called
without balance_scx()
sched_ext: Add missing static to scx_has_op[]
sched_ext: Add missing static to scx_dump_data
sched_ext: Rename scx_kfunc_set_sleepable to unlocked and relocate
sched_ext: Refactor consume_remote_task()
sched_ext: Make find_dsq_for_dispatch() handle SCX_DSQ_LOCAL_ON
sched_ext: Fix processs_ddsp_deferred_locals() by unifying DTL_INVALID
handling
sched_ext: Restructure dispatch_to_local_dsq()
sched_ext: Reorder args for consume_local/remote_task()
sched_ext: Move sanity check and dsq_mod_nr() into
task_unlink_from_dsq()
sched_ext: Move consume_local_task() upward
sched_ext: Replace consume_local_task() with
move_local_task_to_local_dsq()
sched_ext: Compact struct bpf_iter_scx_dsq_kern
sched_ext: Implement scx_bpf_dispatch[_vtime]_from_dsq()
scx_qmap: Implement highpri boosting
sched_ext: Synchronize bypass state changes with rq lock
sched_ext: Don't trigger ops.quiescent/runnable() on migrations
sched_ext: Fix build when !CONFIG_STACKTRACE
sched_ext: Build fix for !CONFIG_SMP
sched_ext: Add __COMPAT helpers for features added during v6.12 devel
cycle
tools/sched_ext: Receive misc updates from SCX repo
scx_flatcg: Use a user DSQ for fallback instead of SCX_DSQ_GLOBAL
sched_ext: Allow only user DSQs for scx_bpf_consume(),
scx_bpf_dsq_nr_queued() and bpf_iter_scx_dsq_new()
sched_ext: Relocate find_user_dsq()
sched_ext: Split the global DSQ per NUMA node
sched_ext: Use shorter slice while bypassing
sched_ext: Relocate check_hotplug_seq() call in scx_ops_enable()
sched_ext: Remove SCX_OPS_PREPPING
sched_ext: Initialize in bypass mode
sched_ext: Fix SCX_TASK_INIT -> SCX_TASK_READY transitions in
scx_ops_enable()
sched_ext: Enable scx_ops_init_task() separately
sched_ext: Add scx_cgroup_enabled to gate cgroup operations and fix
scx_tg_online()
sched_ext: Decouple locks in scx_ops_disable_workfn()
sched_ext: Decouple locks in scx_ops_enable()
sched_ext: Improve error reporting during loading
sched_ext: scx_cgroup_exit() may be called without successful
scx_cgroup_init()
sched/core: Make select_task_rq() take the pointer to wake_flags
instead of value
sched/core: Add ENQUEUE_RQ_SELECTED to indicate whether
->select_task_rq() was called
sched_ext, scx_qmap: Add and use SCX_ENQ_CPU_SELECTED
Revert "sched_ext: Use shorter slice while bypassing"
sched_ext: Start schedulers with consistent p->scx.slice values
sched_ext: Move scx_buildin_idle_enabled check to
scx_bpf_select_cpu_dfl()
sched_ext: bypass mode shouldn't depend on ops.select_cpu()
sched_ext: Move scx_tasks_lock handling into scx_task_iter helpers
sched_ext: Don't hold scx_tasks_lock for too long
sched_ext: Make cast_mask() inline
sched_ext: Fix enq_last_no_enq_fails selftest
sched_ext: Add a missing newline at the end of an error message
sched_ext: Update scx_show_state.py to match scx_ops_bypass_depth's
new type
sched_ext: Handle cases where pick_task_scx() is called without
preceding balance_scx()
sched_ext: ops.cpu_acquire() should be called with SCX_KF_REST
sched_ext: Factor out move_task_between_dsqs() from
scx_dispatch_from_dsq()
sched_ext: Rename CFI stubs to names that are recognized by BPF
sched_ext: Replace set_arg_maybe_null() with __nullable CFI stub tags
sched_ext: Avoid live-locking bypass mode switching
sched_ext: Enable the ops breather and eject BPF scheduler on
softlockup
sched_ext: scx_bpf_dispatch_from_dsq_set_*() are allowed from unlocked
context
sched_ext: Rename scx_bpf_dispatch[_vtime]() to
scx_bpf_dsq_insert[_vtime]()
sched_ext: Rename scx_bpf_consume() to scx_bpf_dsq_move_to_local()
sched_ext: Rename scx_bpf_dispatch[_vtime]_from_dsq*() ->
scx_bpf_dsq_move[_vtime]*()
sched_ext: Fix invalid irq restore in scx_ops_bypass()
sched_ext: Fix dsq_local_on selftest
tools/sched_ext: Receive updates from SCX repo
sched_ext: selftests/dsp_local_on: Fix sporadic failures
sched_ext: Fix incorrect autogroup migration detection
sched_ext: Implement auto local dispatching of migration disabled
tasks
sched_ext: Fix migration disabled handling in targeted dispatches
sched_ext: Fix incorrect assumption about migration disabled tasks in
task_can_run_on_remote_rq()
sched_ext: Fix pick_task_scx() picking non-queued tasks when it's
called without balance()
sched_ext: Implement SCX_OPS_ALLOW_QUEUED_WAKEUP
sched_ext: bpf_iter_scx_dsq_new() should always initialize iterator
sched_ext: Make scx_group_set_weight() always update tg->scx.weight
sched_ext, sched/core: Don't call scx_group_set_weight() prematurely
from sched_create_group()
sched_ext: Mark scx_bpf_dsq_move_set_[slice|vtime]() with KF_RCU
sched_ext: Don't kick CPUs running higher classes
sched_ext: Use SCX_TASK_READY test instead of tryget_task_struct()
during class switch
tools/sched_ext: Sync with scx repo
Thomas Gleixner (1):
sched/ext: Remove sched_fork() hack
Thorsten Blum (1):
sched_ext: Use str_enabled_disabled() helper in
update_selcpu_topology()
Tianchen Ding (1):
sched_ext: Use btf_ids to resolve task_struct
Tony Ambardar (1):
libbpf: Ensure new BTF objects inherit input endianness
Vincent Guittot (2):
sched/cpufreq: Rework schedutil governor performance estimation
sched/fair: Fix sched_can_stop_tick() for fair tasks
Vishal Chourasia (2):
sched_ext: Add __weak markers to BPF helper function decalarations
sched_ext: Fix function pointer type mismatches in BPF selftests
Wenyu Huang (1):
sched/doc: Update documentation after renames and synchronize Chinese
version
Yafang Shao (2):
bpf: Add bits iterator
selftests/bpf: Add selftest for bits iter
Yipeng Zou (1):
sched_ext: Allow dequeue_task_scx to fail
Yiwei Lin (1):
sched/fair: Remove unused 'curr' argument from pick_next_entity()
Yu Liao (2):
sched: Put task_group::idle under CONFIG_GROUP_SCHED_WEIGHT
sched: Add dummy version of sched_group_set_idle()
Yury Norov (1):
cpumask: introduce assign_cpu() macro
Zhang Qiao (3):
sched_ext: Remove redundant p->nr_cpus_allowed checker
sched/ext: Fix unmatch trailing comment of CONFIG_EXT_GROUP_SCHED
sched/ext: Use tg_cgroup() to elieminate duplicate code
Zhao Mengmeng (1):
sched_ext: Replace scx_next_task_picked() with switch_class() in
comment
Zicheng Qu (18):
sched: Fix kabi for reweight_task in struct sched_class
Revert "sched/deadline: Fix missing ENQUEUE_REPLENISH during PI
de-boosting"
sched/syscalls: Fix kabi for EXPORT_SYMBOL moved from core.c to
syscalls.c
sched: Fix kabi for switching_to in struct sched_class
sched/fair: Fix kabi for check_preempt_curr and wakeup_preempt in
struct sched_class
sched: Fix kabi for dequeue_task in struct sched_class
sched_ext: Fix kabi for scx in struct task_struct
sched_ext: Fix kabi for switch_class in struct sched_class
sched: Fix kabi for exec_max in struct sched_statistics
sched_ext: Fix kabi for balance in struct sched_class
sched_ext: Fix kabi for header in kernel/sched/sched.h
sched: Fix kabi pick_task in struct sched_class
sched: Fix kabi for put_prev_task in struct sched_class
sched_ext: Fix kabi for scx_flags and scx_weight in struct task_group
sched: Fix kabi for int idle in struct task_group
sched: Add __setscheduler_class() for sched_ext
genirq: Fix kabi for kstat_irqs in struct irq_desc
sched_ext: Enable and disable sched_ext configs
Zqiang (1):
sched_ext: Fix unsafe locking in the scx_dump_state()
guanjing (1):
sched_ext: fix application of sizeof to pointer
Documentation/bpf/bpf_iterators.rst | 2 +-
Documentation/bpf/kfuncs.rst | 14 +-
Documentation/scheduler/index.rst | 1 +
Documentation/scheduler/sched-design-CFS.rst | 8 +-
Documentation/scheduler/sched-ext.rst | 325 +
.../zh_CN/scheduler/sched-design-CFS.rst | 8 +-
MAINTAINERS | 16 +-
Makefile | 4 +-
arch/arm64/configs/openeuler_defconfig | 3 +
arch/arm64/kernel/bpf-rvi.c | 4 +-
arch/arm64/net/bpf_jit_comp.c | 55 +-
arch/mips/dec/setup.c | 2 +-
arch/parisc/kernel/smp.c | 2 +-
arch/powerpc/kvm/book3s_hv_rm_xics.c | 2 +-
arch/riscv/include/asm/cfi.h | 3 +-
arch/riscv/kernel/cfi.c | 2 +-
arch/riscv/net/bpf_jit_comp64.c | 48 +-
arch/s390/net/bpf_jit_comp.c | 59 +-
arch/x86/configs/openeuler_defconfig | 2 +
arch/x86/include/asm/cfi.h | 126 +-
arch/x86/kernel/alternative.c | 87 +-
arch/x86/kernel/cfi.c | 4 +-
arch/x86/net/bpf_jit_comp.c | 261 +-
block/blk-cgroup.c | 4 +-
drivers/hid/bpf/hid_bpf_dispatch.c | 12 +-
drivers/tty/sysrq.c | 1 +
fs/proc/stat.c | 4 +-
include/asm-generic/Kbuild | 1 +
include/asm-generic/cfi.h | 5 +
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/bitops.h | 2 +
include/linux/bpf.h | 130 +-
include/linux/bpf_mem_alloc.h | 3 +
include/linux/bpf_verifier.h | 21 +-
include/linux/btf.h | 105 +
include/linux/btf_ids.h | 21 +-
include/linux/cfi.h | 12 +
include/linux/cgroup.h | 14 +-
include/linux/cleanup.h | 42 +-
include/linux/cpumask.h | 41 +-
include/linux/energy_model.h | 1 -
include/linux/file.h | 20 +
include/linux/filter.h | 2 +-
include/linux/irqdesc.h | 17 +-
include/linux/kernel_stat.h | 8 +
include/linux/module.h | 8 +-
include/linux/sched.h | 8 +-
include/linux/sched/ext.h | 216 +
include/linux/sched/task.h | 8 +-
include/trace/events/sched_ext.h | 32 +
include/uapi/linux/bpf.h | 16 +-
include/uapi/linux/sched.h | 1 +
init/Kconfig | 10 +
init/init_task.c | 12 +
kernel/Kconfig.preempt | 27 +-
kernel/bpf-rvi/common_kfuncs.c | 4 +-
kernel/bpf/Makefile | 8 +-
kernel/bpf/bpf_iter.c | 12 +-
kernel/bpf/bpf_struct_ops.c | 745 +-
kernel/bpf/bpf_struct_ops_types.h | 12 -
kernel/bpf/btf.c | 431 +-
kernel/bpf/cgroup_iter.c | 65 +-
kernel/bpf/core.c | 76 +-
kernel/bpf/cpumask.c | 18 +-
kernel/bpf/dispatcher.c | 7 +-
kernel/bpf/helpers.c | 202 +-
kernel/bpf/map_iter.c | 10 +-
kernel/bpf/memalloc.c | 14 +-
kernel/bpf/syscall.c | 12 +-
kernel/bpf/task_iter.c | 242 +-
kernel/bpf/trampoline.c | 99 +-
kernel/bpf/verifier.c | 317 +-
kernel/cgroup/cgroup.c | 18 +-
kernel/cgroup/cpuset.c | 4 +-
kernel/cgroup/rstat.c | 13 +-
kernel/events/core.c | 2 +-
kernel/fork.c | 17 +-
kernel/irq/Kconfig | 4 +
kernel/irq/internals.h | 2 +-
kernel/irq/irqdesc.c | 144 +-
kernel/irq/proc.c | 5 +-
kernel/module/main.c | 5 +-
kernel/sched/autogroup.c | 4 +-
kernel/sched/bpf_sched.c | 8 +-
kernel/sched/build_policy.c | 13 +
kernel/sched/core.c | 2522 +-----
kernel/sched/cpuacct.c | 4 +-
kernel/sched/cpufreq_schedutil.c | 83 +-
kernel/sched/deadline.c | 175 +-
kernel/sched/debug.c | 3 +
kernel/sched/ext.c | 7155 +++++++++++++++++
kernel/sched/ext.h | 119 +
kernel/sched/ext_idle.c | 755 ++
kernel/sched/ext_idle.h | 39 +
kernel/sched/fair.c | 306 +-
kernel/sched/idle.c | 31 +-
kernel/sched/rt.c | 40 +-
kernel/sched/sched.h | 473 +-
kernel/sched/stop_task.c | 35 +-
kernel/sched/syscalls.c | 1713 ++++
kernel/trace/bpf_trace.c | 12 +-
kernel/trace/trace_probe.c | 2 -
kernel/watchdog.c | 223 +-
lib/Kconfig.debug | 14 +
lib/dump_stack.c | 1 +
lib/rhashtable.c | 12 +-
net/bpf/bpf_dummy_struct_ops.c | 72 +-
net/bpf/test_run.c | 30 +-
net/core/filter.c | 33 +-
net/core/xdp.c | 10 +-
net/ipv4/bpf_tcp_ca.c | 93 +-
net/ipv4/fou_bpf.c | 10 +-
net/ipv4/tcp_bbr.c | 4 +-
net/ipv4/tcp_cong.c | 6 +-
net/ipv4/tcp_cubic.c | 4 +-
net/ipv4/tcp_dctcp.c | 4 +-
net/netfilter/nf_conntrack_bpf.c | 10 +-
net/netfilter/nf_nat_bpf.c | 10 +-
net/socket.c | 8 +-
net/xfrm/xfrm_interface_bpf.c | 10 +-
scripts/Makefile.btf | 33 +
scripts/Makefile.modfinal | 2 +-
scripts/gdb/linux/interrupts.py | 6 +-
scripts/pahole-flags.sh | 30 -
tools/Makefile | 10 +-
.../bpf/bpftool/Documentation/bpftool-gen.rst | 58 +-
tools/bpf/bpftool/gen.c | 253 +-
tools/bpf/resolve_btfids/main.c | 8 +
tools/include/linux/bitops.h | 2 +
tools/include/uapi/linux/bpf.h | 14 +-
tools/include/uapi/linux/sched.h | 1 +
tools/lib/bpf/Build | 2 +-
tools/lib/bpf/bpf.c | 4 +-
tools/lib/bpf/bpf.h | 4 +-
tools/lib/bpf/btf.c | 704 +-
tools/lib/bpf/btf.h | 36 +
tools/lib/bpf/btf_iter.c | 177 +
tools/lib/bpf/btf_relocate.c | 519 ++
tools/lib/bpf/libbpf.c | 97 +-
tools/lib/bpf/libbpf.map | 4 +-
tools/lib/bpf/libbpf_internal.h | 29 +-
tools/lib/bpf/libbpf_probes.c | 1 +
tools/lib/bpf/linker.c | 58 +-
tools/perf/util/probe-finder.c | 4 +-
tools/sched_ext/.gitignore | 2 +
tools/sched_ext/Makefile | 246 +
tools/sched_ext/README.md | 270 +
.../sched_ext/include/bpf-compat/gnu/stubs.h | 11 +
tools/sched_ext/include/scx/common.bpf.h | 647 ++
tools/sched_ext/include/scx/common.h | 81 +
tools/sched_ext/include/scx/compat.bpf.h | 143 +
tools/sched_ext/include/scx/compat.h | 187 +
.../sched_ext/include/scx/enums.autogen.bpf.h | 105 +
tools/sched_ext/include/scx/enums.autogen.h | 41 +
tools/sched_ext/include/scx/enums.bpf.h | 12 +
tools/sched_ext/include/scx/enums.h | 27 +
tools/sched_ext/include/scx/user_exit_info.h | 118 +
tools/sched_ext/scx_central.bpf.c | 356 +
tools/sched_ext/scx_central.c | 145 +
tools/sched_ext/scx_flatcg.bpf.c | 954 +++
tools/sched_ext/scx_flatcg.c | 234 +
tools/sched_ext/scx_flatcg.h | 51 +
tools/sched_ext/scx_qmap.bpf.c | 827 ++
tools/sched_ext/scx_qmap.c | 155 +
tools/sched_ext/scx_show_state.py | 42 +
tools/sched_ext/scx_simple.bpf.c | 151 +
tools/sched_ext/scx_simple.c | 107 +
tools/testing/selftests/Makefile | 9 +-
tools/testing/selftests/bpf/.gitignore | 1 +
.../testing/selftests/bpf/bpf_experimental.h | 96 +
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 160 +-
.../selftests/bpf/bpf_testmod/bpf_testmod.h | 61 +
.../bpf/bpf_testmod/bpf_testmod_kfunc.h | 9 +
.../selftests/bpf/prog_tests/bpf_iter.c | 44 +-
.../selftests/bpf/prog_tests/btf_distill.c | 692 ++
.../selftests/bpf/prog_tests/cgroup_iter.c | 33 +
.../bpf/prog_tests/global_func_dead_code.c | 60 +
.../testing/selftests/bpf/prog_tests/iters.c | 209 +
.../selftests/bpf/prog_tests/kfunc_call.c | 1 +
.../selftests/bpf/prog_tests/rcu_read_lock.c | 6 +
.../selftests/bpf/prog_tests/spin_lock.c | 2 +
.../prog_tests/test_struct_ops_maybe_null.c | 46 +
.../bpf/prog_tests/test_struct_ops_module.c | 86 +
.../prog_tests/test_struct_ops_multi_pages.c | 30 +
.../testing/selftests/bpf/prog_tests/timer.c | 4 +
.../selftests/bpf/prog_tests/verifier.c | 4 +
...f_iter_task_vma.c => bpf_iter_task_vmas.c} | 0
.../{bpf_iter_task.c => bpf_iter_tasks.c} | 0
.../bpf/progs/freplace_dead_global_func.c | 11 +
tools/testing/selftests/bpf/progs/iters_css.c | 72 +
.../selftests/bpf/progs/iters_css_task.c | 102 +
.../testing/selftests/bpf/progs/iters_task.c | 41 +
.../selftests/bpf/progs/iters_task_failure.c | 105 +
.../selftests/bpf/progs/iters_task_vma.c | 43 +
.../selftests/bpf/progs/iters_testmod_seq.c | 50 +
.../selftests/bpf/progs/kfunc_call_test.c | 37 +
.../selftests/bpf/progs/rcu_read_lock.c | 120 +
.../bpf/progs/struct_ops_maybe_null.c | 29 +
.../bpf/progs/struct_ops_maybe_null_fail.c | 24 +
.../selftests/bpf/progs/struct_ops_module.c | 37 +
.../bpf/progs/struct_ops_multi_pages.c | 102 +
.../selftests/bpf/progs/test_global_func12.c | 4 +-
.../selftests/bpf/progs/test_spin_lock.c | 65 +
.../selftests/bpf/progs/test_spin_lock_fail.c | 44 +
tools/testing/selftests/bpf/progs/timer.c | 63 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 232 +
.../bpf/progs/verifier_global_subprogs.c | 101 +
.../selftests/bpf/progs/verifier_spin_lock.c | 2 +-
.../bpf/progs/verifier_subprog_precision.c | 4 +-
tools/testing/selftests/bpf/test_loader.c | 10 +-
tools/testing/selftests/bpf/test_maps.c | 18 +-
tools/testing/selftests/bpf/test_maps.h | 5 +
tools/testing/selftests/sched_ext/.gitignore | 6 +
tools/testing/selftests/sched_ext/Makefile | 211 +
tools/testing/selftests/sched_ext/config | 9 +
.../selftests/sched_ext/create_dsq.bpf.c | 58 +
.../testing/selftests/sched_ext/create_dsq.c | 57 +
.../sched_ext/ddsp_bogus_dsq_fail.bpf.c | 42 +
.../selftests/sched_ext/ddsp_bogus_dsq_fail.c | 60 +
.../sched_ext/ddsp_vtimelocal_fail.bpf.c | 39 +
.../sched_ext/ddsp_vtimelocal_fail.c | 59 +
.../selftests/sched_ext/dsp_local_on.bpf.c | 68 +
.../selftests/sched_ext/dsp_local_on.c | 60 +
.../sched_ext/enq_last_no_enq_fails.bpf.c | 29 +
.../sched_ext/enq_last_no_enq_fails.c | 64 +
.../sched_ext/enq_select_cpu_fails.bpf.c | 43 +
.../sched_ext/enq_select_cpu_fails.c | 61 +
tools/testing/selftests/sched_ext/exit.bpf.c | 86 +
tools/testing/selftests/sched_ext/exit.c | 64 +
tools/testing/selftests/sched_ext/exit_test.h | 20 +
.../testing/selftests/sched_ext/hotplug.bpf.c | 61 +
tools/testing/selftests/sched_ext/hotplug.c | 170 +
.../selftests/sched_ext/hotplug_test.h | 15 +
.../sched_ext/init_enable_count.bpf.c | 53 +
.../selftests/sched_ext/init_enable_count.c | 157 +
.../testing/selftests/sched_ext/maximal.bpf.c | 166 +
tools/testing/selftests/sched_ext/maximal.c | 54 +
.../selftests/sched_ext/maybe_null.bpf.c | 36 +
.../testing/selftests/sched_ext/maybe_null.c | 49 +
.../sched_ext/maybe_null_fail_dsp.bpf.c | 25 +
.../sched_ext/maybe_null_fail_yld.bpf.c | 28 +
.../testing/selftests/sched_ext/minimal.bpf.c | 21 +
tools/testing/selftests/sched_ext/minimal.c | 58 +
.../selftests/sched_ext/prog_run.bpf.c | 33 +
tools/testing/selftests/sched_ext/prog_run.c | 78 +
.../testing/selftests/sched_ext/reload_loop.c | 74 +
tools/testing/selftests/sched_ext/runner.c | 212 +
tools/testing/selftests/sched_ext/scx_test.h | 131 +
.../selftests/sched_ext/select_cpu_dfl.bpf.c | 40 +
.../selftests/sched_ext/select_cpu_dfl.c | 75 +
.../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 89 +
.../sched_ext/select_cpu_dfl_nodispatch.c | 75 +
.../sched_ext/select_cpu_dispatch.bpf.c | 41 +
.../selftests/sched_ext/select_cpu_dispatch.c | 73 +
.../select_cpu_dispatch_bad_dsq.bpf.c | 37 +
.../sched_ext/select_cpu_dispatch_bad_dsq.c | 59 +
.../select_cpu_dispatch_dbl_dsp.bpf.c | 38 +
.../sched_ext/select_cpu_dispatch_dbl_dsp.c | 59 +
.../sched_ext/select_cpu_vtime.bpf.c | 92 +
.../selftests/sched_ext/select_cpu_vtime.c | 62 +
.../selftests/sched_ext/test_example.c | 49 +
tools/testing/selftests/sched_ext/util.c | 71 +
tools/testing/selftests/sched_ext/util.h | 13 +
263 files changed, 27732 insertions(+), 3817 deletions(-)
create mode 100644 Documentation/scheduler/sched-ext.rst
create mode 100644 include/asm-generic/cfi.h
create mode 100644 include/linux/sched/ext.h
create mode 100644 include/trace/events/sched_ext.h
delete mode 100644 kernel/bpf/bpf_struct_ops_types.h
create mode 100644 kernel/sched/ext.c
create mode 100644 kernel/sched/ext.h
create mode 100644 kernel/sched/ext_idle.c
create mode 100644 kernel/sched/ext_idle.h
create mode 100644 kernel/sched/syscalls.c
create mode 100644 scripts/Makefile.btf
delete mode 100755 scripts/pahole-flags.sh
create mode 100644 tools/lib/bpf/btf_iter.c
create mode 100644 tools/lib/bpf/btf_relocate.c
create mode 100644 tools/sched_ext/.gitignore
create mode 100644 tools/sched_ext/Makefile
create mode 100644 tools/sched_ext/README.md
create mode 100644 tools/sched_ext/include/bpf-compat/gnu/stubs.h
create mode 100644 tools/sched_ext/include/scx/common.bpf.h
create mode 100644 tools/sched_ext/include/scx/common.h
create mode 100644 tools/sched_ext/include/scx/compat.bpf.h
create mode 100644 tools/sched_ext/include/scx/compat.h
create mode 100644 tools/sched_ext/include/scx/enums.autogen.bpf.h
create mode 100644 tools/sched_ext/include/scx/enums.autogen.h
create mode 100644 tools/sched_ext/include/scx/enums.bpf.h
create mode 100644 tools/sched_ext/include/scx/enums.h
create mode 100644 tools/sched_ext/include/scx/user_exit_info.h
create mode 100644 tools/sched_ext/scx_central.bpf.c
create mode 100644 tools/sched_ext/scx_central.c
create mode 100644 tools/sched_ext/scx_flatcg.bpf.c
create mode 100644 tools/sched_ext/scx_flatcg.c
create mode 100644 tools/sched_ext/scx_flatcg.h
create mode 100644 tools/sched_ext/scx_qmap.bpf.c
create mode 100644 tools/sched_ext/scx_qmap.c
create mode 100644 tools/sched_ext/scx_show_state.py
create mode 100644 tools/sched_ext/scx_simple.bpf.c
create mode 100644 tools/sched_ext/scx_simple.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/btf_distill.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/global_func_dead_code.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_multi_pages.c
rename tools/testing/selftests/bpf/progs/{bpf_iter_task_vma.c => bpf_iter_task_vmas.c} (100%)
rename tools/testing/selftests/bpf/progs/{bpf_iter_task.c => bpf_iter_tasks.c} (100%)
create mode 100644 tools/testing/selftests/bpf/progs/freplace_dead_global_func.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_css.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_css_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_failure.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_vma.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_multi_pages.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_bits_iter.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
create mode 100644 tools/testing/selftests/sched_ext/.gitignore
create mode 100644 tools/testing/selftests/sched_ext/Makefile
create mode 100644 tools/testing/selftests/sched_ext/config
create mode 100644 tools/testing/selftests/sched_ext/create_dsq.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/create_dsq.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.c
create mode 100644 tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/dsp_local_on.c
create mode 100644 tools/testing/selftests/sched_ext/enq_last_no_enq_fails.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/enq_last_no_enq_fails.c
create mode 100644 tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/enq_select_cpu_fails.c
create mode 100644 tools/testing/selftests/sched_ext/exit.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/exit.c
create mode 100644 tools/testing/selftests/sched_ext/exit_test.h
create mode 100644 tools/testing/selftests/sched_ext/hotplug.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/hotplug.c
create mode 100644 tools/testing/selftests/sched_ext/hotplug_test.h
create mode 100644 tools/testing/selftests/sched_ext/init_enable_count.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/init_enable_count.c
create mode 100644 tools/testing/selftests/sched_ext/maximal.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/maximal.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null_fail_dsp.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null_fail_yld.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/minimal.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/minimal.c
create mode 100644 tools/testing/selftests/sched_ext/prog_run.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/prog_run.c
create mode 100644 tools/testing/selftests/sched_ext/reload_loop.c
create mode 100644 tools/testing/selftests/sched_ext/runner.c
create mode 100644 tools/testing/selftests/sched_ext/scx_test.h
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_vtime.c
create mode 100644 tools/testing/selftests/sched_ext/test_example.c
create mode 100644 tools/testing/selftests/sched_ext/util.c
create mode 100644 tools/testing/selftests/sched_ext/util.h
--
2.34.1
Junxiao Bi (2):
scsi: core: Fix refcount leak for tagset_refcnt
scsi: core: Fix error handling for scsi_alloc_sdev()
drivers/scsi/scsi_scan.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
--
2.52.0
Aboorva Devarajan (2):
sched_ext: Documentation: Remove mentions of scx_bpf_switch_all
sched: Pass correct scheduling policy to __setscheduler_class
Alan Maguire (16):
kbuild,bpf: Switch to using --btf_features for pahole v1.26 and later
kbuild, bpf: Use test-ge check for v1.25-only pahole
libbpf: Add btf__distill_base() creating split BTF with distilled base
BTF
selftests/bpf: Test distilled base, split BTF generation
libbpf: Split BTF relocation
selftests/bpf: Extend distilled BTF tests to cover BTF relocation
resolve_btfids: Handle presence of .BTF.base section
libbpf: BTF relocation followup fixing naming, loop logic
module, bpf: Store BTF base pointer in struct module
libbpf: Split field iter code into its own file kernel
libbpf,bpf: Share BTF relocate-related code with kernel
kbuild,bpf: Add module-specific pahole flags for distilled base BTF
selftests/bpf: Add kfunc_call test for simple dtor in bpf_testmod
bpf: fix build when CONFIG_DEBUG_INFO_BTF[_MODULES] is undefined
libbpf: Fix error handling in btf__distill_base()
libbpf: Fix license for btf_relocate.c
Alexander Lobakin (1):
bitops: make BYTES_TO_BITS() treewide-available
Alexei Starovoitov (2):
s390/bpf: Fix indirect trampoline generation
bpf: Introduce "volatile compare" macros
Andrea Righi (33):
sched_ext: fix typo in set_weight() description
sched_ext: add CONFIG_DEBUG_INFO_BTF dependency
sched_ext: Provide a sysfs enable_seq counter
sched_ext: improve WAKE_SYNC behavior for default idle CPU selection
sched_ext: Clarify ops.select_cpu() for single-CPU tasks
sched_ext: Introduce LLC awareness to the default idle selection
policy
sched_ext: Introduce NUMA awareness to the default idle selection
policy
sched_ext: Do not enable LLC/NUMA optimizations when domains overlap
sched_ext: Fix incorrect use of bitwise AND
MAINTAINERS: add self as reviewer for sched_ext
sched_ext: idle: Refresh idle masks during idle-to-idle transitions
sched_ext: Use the NUMA scheduling domain for NUMA optimizations
sched_ext: idle: use assign_cpu() to update the idle cpumask
sched_ext: idle: clarify comments
sched_ext: idle: introduce check_builtin_idle_enabled() helper
sched_ext: idle: small CPU iteration refactoring
sched_ext: update scx_bpf_dsq_insert() doc for SCX_DSQ_LOCAL_ON
sched_ext: Include remaining task time slice in error state dump
sched_ext: Include task weight in the error state dump
selftests/sched_ext: Fix enum resolution
tools/sched_ext: Add helper to check task migration state
sched_ext: selftests/dsp_local_on: Fix selftest on UP systems
sched_ext: Fix lock imbalance in dispatch_to_local_dsq()
selftests/sched_ext: Fix exit selftest hang on UP
sched_ext: Move built-in idle CPU selection policy to a separate file
sched_ext: Track currently locked rq
sched_ext: Make scx_locked_rq() inline
sched_ext: Fix missing rq lock in scx_bpf_cpuperf_set()
sched/ext: Fix invalid task state transitions on class switch
sched_ext: Make scx_kf_allowed_if_unlocked() available outside ext.c
sched_ext: Remove duplicate BTF_ID_FLAGS definitions
sched_ext: Fix rq lock state in hotplug ops
sched_ext: Validate prev_cpu in scx_bpf_select_cpu_dfl()
Andrii Nakryiko (14):
bpf: Emit global subprog name in verifier logs
bpf: Validate global subprogs lazily
selftests/bpf: Add lazy global subprog validation tests
libbpf: Add btf__new_split() API that was declared but not implemented
bpf: move sleepable flag from bpf_prog_aux to bpf_prog
libbpf: Add BTF field iterator
libbpf: Make use of BTF field iterator in BPF linker code
libbpf: Make use of BTF field iterator in BTF handling code
bpftool: Use BTF field iterator in btfgen
libbpf: Remove callback-based type/string BTF field visitor helpers
bpf: extract iterator argument type and name validation logic
bpf: allow passing struct bpf_iter_<type> as kfunc arguments
selftests/bpf: test passing iterator to a kfunc
selftests/bpf: validate eliminated global subprog is not freplaceable
Arnaldo Carvalho de Melo (1):
tools include UAPI: Sync linux/sched.h copy with the kernel sources
Atul Kumar Pant (1):
sched_ext: Fixes typos in comments
Benjamin Tissoires (1):
bpf: introduce in_sleepable() helper
Bitao Hu (4):
genirq: Convert kstat_irqs to a struct
genirq: Provide a snapshot mechanism for interrupt statistics
watchdog/softlockup: Low-overhead detection of interrupt storm
watchdog/softlockup: Report the most frequent interrupts
Björn Töpel (1):
selftests: sched_ext: Add sched_ext as proper selftest target
Breno Leitao (3):
rhashtable: Fix potential deadlock by moving schedule_work outside
lock
sched_ext: Use kvzalloc for large exit_dump allocation
sched/ext: Prevent update_locked_rq() calls with NULL rq
Changwoo Min (12):
sched_ext: Clarify sched_ext_ops table for userland scheduler
sched_ext: add a missing rcu_read_lock/unlock pair at
scx_select_cpu_dfl()
MAINTAINERS: add me as reviewer for sched_ext
sched_ext: Replace rq_lock() to raw_spin_rq_lock() in scx_ops_bypass()
sched_ext: Relocate scx_enabled() related code
sched_ext: Implement scx_bpf_now()
sched_ext: Add scx_bpf_now() for BPF scheduler
sched_ext: Add time helpers for BPF schedulers
sched_ext: Replace bpf_ktime_get_ns() to scx_bpf_now()
sched_ext: Use time helpers in BPF schedulers
sched_ext: Fix incorrect time delta calculation in time_delta()
sched_ext: Add scx_bpf_events() and scx_read_event() for BPF
schedulers
Cheng-Yang Chou (1):
sched_ext: Always use SMP versions in kernel/sched/ext.c
Christian Brauner (1):
file: add take_fd() cleanup helper
Christian Loehle (1):
sched/fair: Remove stale FREQUENCY_UTIL comment
Christophe Leroy (2):
bpf: Remove arch_unprotect_bpf_trampoline()
bpf: Check return from set_memory_rox()
Chuyi Zhou (15):
cgroup: Prepare for using css_task_iter_*() in BPF
bpf: Introduce css_task open-coded iterator kfuncs
bpf: Introduce task open coded iterator kfuncs
bpf: Introduce css open-coded iterator kfuncs
bpf: teach the verifier to enforce css_iter and task_iter in RCU CS
bpf: Let bpf_iter_task_new accept null task ptr
selftests/bpf: rename bpf_iter_task.c to bpf_iter_tasks.c
selftests/bpf: Add tests for open-coded task and css iter
bpf: Relax allowlist for css_task iter
selftests/bpf: Add tests for css_task iter combining with cgroup iter
selftests/bpf: Add test for using css_task iter in sleepable progs
bpf: Let verifier consider {task,cgroup} is trusted in bpf_iter_reg
selftests/bpf: get trusted cgrp from bpf_iter__cgroup directly
sched_ext: Fix the incorrect bpf_list kfunc API in common.bpf.h.
sched_ext: Use SCX_CALL_OP_TASK in task_tick_scx
Colin Ian King (1):
sched_ext: Fix spelling mistake: "intead" -> "instead"
Daniel Xu (3):
bpf: btf: Support flags for BTF_SET8 sets
bpf: btf: Add BTF_KFUNCS_START/END macro pair
bpf: treewide: Annotate BPF kfuncs in BTF
Dave Marchevsky (6):
bpf: Don't explicitly emit BTF for struct btf_iter_num
selftests/bpf: Rename bpf_iter_task_vma.c to bpf_iter_task_vmas.c
bpf: Introduce task_vma open-coded iterator kfuncs
selftests/bpf: Add tests for open-coded task_vma iter
bpf: Add __bpf_kfunc_{start,end}_defs macros
bpf: Add __bpf_hook_{start,end} macros
David Vernet (15):
bpf: Add ability to pin bpf timer to calling CPU
selftests/bpf: Test pinning bpf timer to a core
sched_ext: Implement runnable task stall watchdog
sched_ext: Print sched_ext info when dumping stack
sched_ext: Implement SCX_KICK_WAIT
sched_ext: Implement sched_ext_ops.cpu_acquire/release()
sched_ext: Add selftests
bpf: Load vmlinux btf for any struct_ops map
sched_ext: Make scx_bpf_cpuperf_set() @cpu arg signed
scx: Allow calling sleepable kfuncs from BPF_PROG_TYPE_SYSCALL
scx/selftests: Verify we can call create_dsq from prog_run
sched_ext: Remove unnecessary cpu_relax()
scx: Fix exit selftest to use custom DSQ
scx: Fix raciness in scx_ops_bypass()
scx: Fix maximal BPF selftest prog
Dawei Li (1):
genirq: Deduplicate interrupt descriptor initialization
Devaansh Kumar (1):
sched_ext: selftests: Fix grammar in tests description
Devaansh-Kumar (1):
sched_ext: Documentation: Update instructions for running example
schedulers
Eduard Zingerman (2):
libbpf: Make btf_parse_elf process .BTF.base transparently
selftests/bpf: Check if distilled base inherits source endianness
Geliang Tang (1):
bpf, btf: Check btf for register_bpf_struct_ops
Henry Huang (2):
sched_ext: initialize kit->cursor.flags
sched_ext: keep running prev when prev->scx.slice != 0
Herbert Xu (1):
rhashtable: Fix rhashtable_try_insert test
Honglei Wang (3):
sched_ext: use correct function name in pick_task_scx() warning
message
sched_ext: Add __weak to fix the build errors
sched_ext: switch class when preempted by higher priority scheduler
Hongyan Xia (1):
sched/ext: Add BPF function to fetch rq
Hou Tao (7):
bpf: Free dynamically allocated bits in bpf_iter_bits_destroy()
bpf: Add bpf_mem_alloc_check_size() helper
bpf: Check the validity of nr_words in bpf_iter_bits_new()
bpf: Use __u64 to save the bits in bits iterator
selftests/bpf: Add three test cases for bits_iter
selftests/bpf: Use -4095 as the bad address for bits iterator
selftests/bpf: Export map_update_retriable()
Ihor Solodrai (2):
selftests/sched_ext: add order-only dependency of runner.o on BPFOBJ
selftests/sched_ext: fix build after renames in sched_ext API
Ilpo Järvinen (1):
<linux/cleanup.h>: Allow the passing of both iomem and non-iomem
pointers to no_free_ptr()
Ingo Molnar (3):
sched/syscalls: Split out kernel/sched/syscalls.c from
kernel/sched/core.c
sched/fair: Rename check_preempt_wakeup() to
check_preempt_wakeup_fair()
sched/fair: Rename check_preempt_curr() to wakeup_preempt()
Jake Hillion (2):
sched_ext: create_dsq: Return -EEXIST on duplicate request
sched_ext: Drop kfuncs marked for removal in 6.15
Jiapeng Chong (1):
sched_ext: Fixes incorrect type in bpf_scx_init()
Jiayuan Chen (1):
selftests/bpf: Fixes for test_maps test
Kui-Feng Lee (29):
bpf: refactory struct_ops type initialization to a function.
bpf: get type information with BTF_ID_LIST
bpf, net: introduce bpf_struct_ops_desc.
bpf: add struct_ops_tab to btf.
bpf: make struct_ops_map support btfs other than btf_vmlinux.
bpf: pass btf object id in bpf_map_info.
bpf: lookup struct_ops types from a given module BTF.
bpf: pass attached BTF to the bpf_struct_ops subsystem
bpf: hold module refcnt in bpf_struct_ops map creation and prog
verification.
bpf: validate value_type
bpf, net: switch to dynamic registration
libbpf: Find correct module BTFs for struct_ops maps and progs.
bpf: export btf_ctx_access to modules.
selftests/bpf: test case for register_bpf_struct_ops().
bpf: Fix error checks against bpf_get_btf_vmlinux().
bpf: Remove an unnecessary check.
selftests/bpf: Suppress warning message of an unused variable.
bpf: add btf pointer to struct bpf_ctx_arg_aux.
bpf: Move __kfunc_param_match_suffix() to btf.c.
bpf: Create argument information for nullable arguments.
selftests/bpf: Test PTR_MAYBE_NULL arguments of struct_ops operators.
libbpf: Set btf_value_type_id of struct bpf_map for struct_ops.
libbpf: Convert st_ops->data to shadow type.
bpftool: Generated shadow variables for struct_ops maps.
bpftool: Add an example for struct_ops map and shadow type.
selftests/bpf: Test if shadow types work correctly.
bpf, net: validate struct_ops when updating value.
bpf: struct_ops supports more than one page for trampolines.
selftests/bpf: Test struct_ops maps with a large number of struct_ops
program.
Kumar Kartikeya Dwivedi (4):
bpf: Allow calling static subprogs while holding a bpf_spin_lock
selftests/bpf: Add test for static subprog call in lock cs
bpf: Transfer RCU lock state between subprog calls
selftests/bpf: Add tests for RCU lock transfer between subprogs
Liang Jie (1):
sched_ext: Use sizeof_field for key_len in dsq_hash_params
Luo Gengkun (7):
bpf: Fix kabi-breakage for bpf_func_info_aux
bpf: Fix kabi-breakage for bpf_tramp_image
bpf: Fix kabi for bpf_attr
bpf_verifier: Fix kabi for bpf_verifier_env
bpf: Fix kabi for bpf_ctx_arg_aux
bpf: Fix kabi for bpf_prog_aux and bpf_prog
selftests/bpf: modify test_loader that didn't support running
bpf_prog_type_syscall programs
Manu Bretelle (1):
sched_ext: define missing cfi stubs for sched_ext
Martin KaFai Lau (5):
libbpf: Ensure undefined bpf_attr field stays 0
bpf: Remove unnecessary err < 0 check in
bpf_struct_ops_map_update_elem
bpf: Fix a crash when btf_parse_base() returns an error pointer
bpf: Reject struct_ops registration that uses module ptr and the
module btf_id is missing
bpf: Use kallsyms to find the function name of a struct_ops's stub
function
Masahiro Yamada (1):
kbuild: avoid too many execution of scripts/pahole-flags.sh
Matthieu Baerts (1):
bpf: fix compilation error without CGROUPS
Peter Zijlstra (26):
cfi: Flip headers
x86/cfi,bpf: Fix BPF JIT call
x86/cfi,bpf: Fix bpf_callback_t CFI
x86/cfi,bpf: Fix bpf_struct_ops CFI
cfi: Add CFI_NOSEAL()
bpf: Fix dtor CFI
cleanup: Make no_free_ptr() __must_check
sched: Simplify set_user_nice()
sched: Simplify syscalls
sched: Simplify sched_{set,get}affinity()
sched: Simplify yield_to()
sched: Simplify sched_rr_get_interval()
sched: Simplify sched_move_task()
sched: Misc cleanups
sched/deadline: Move bandwidth accounting into {en,de}queue_dl_entity
sched: Allow sched_class::dequeue_task() to fail
sched: Unify runtime accounting across classes
sched: Use set_next_task(.first) where required
sched/fair: Cleanup pick_task_fair() vs throttle
sched/fair: Cleanup pick_task_fair()'s curr
sched/fair: Unify pick_{,next_}_task_fair()
sched: Fixup set_next_task() implementations
sched: Split up put_prev_task_balance()
sched: Rework pick_next_task()
sched: Combine the last put_prev_task() and the first set_next_task()
sched: Add put_prev_task(.next)
Pu Lehui (8):
riscv, bpf: Fix unpredictable kernel crash about RV64 struct_ops
bpf: Fix kabi breakage in struct module
riscv, bpf: Fix out-of-bounds issue when preparing trampoline image
selftests/bpf: Fix btf leak on new btf alloc failure in btf_distill
test
libbpf: Fix return zero when elf_begin failed
libbpf: Fix incorrect traversal end type ID when marking
BTF_IS_EMBEDDED
selftests/bpf: Add distilled BTF test about marking BTF_IS_EMBEDDED
selftests/bpf: Add file_read_pattern to gitignore
Randy Dunlap (1):
sched_ext: fix kernel-doc warnings
Shizhao Chen (1):
sched_ext: Add option -l in selftest runner to list all available
tests
Song Liu (8):
bpf: Charge modmem for struct_ops trampoline
bpf: Let bpf_prog_pack_free handle any pointer
bpf: Adjust argument names of arch_prepare_bpf_trampoline()
bpf: Add helpers for trampoline image management
bpf, x86: Adjust arch_prepare_bpf_trampoline return value
bpf: Add arch_bpf_trampoline_size()
bpf: Use arch_bpf_trampoline_size
x86, bpf: Use bpf_prog_pack for bpf trampoline
T.J. Mercier (1):
bpf, docs: Fix broken link to renamed bpf_iter_task_vmas.c
Tejun Heo (152):
sched: Restructure sched_class order sanity checks in sched_init()
sched: Allow sched_cgroup_fork() to fail and introduce
sched_cancel_fork()
sched: Add sched_class->reweight_task()
sched: Add sched_class->switching_to() and expose
check_class_changing/changed()
sched: Factor out cgroup weight conversion functions
sched: Factor out update_other_load_avgs() from
__update_blocked_others()
sched: Add normal_policy()
sched_ext: Add boilerplate for extensible scheduler class
sched_ext: Implement BPF extensible scheduler class
sched_ext: Add scx_simple and scx_example_qmap example schedulers
sched_ext: Add sysrq-S which disables the BPF scheduler
sched_ext: Allow BPF schedulers to disallow specific tasks from
joining SCHED_EXT
sched_ext: Print debug dump after an error exit
tools/sched_ext: Add scx_show_state.py
sched_ext: Implement scx_bpf_kick_cpu() and task preemption support
sched_ext: Add a central scheduler which makes all scheduling
decisions on one CPU
sched_ext: Make watchdog handle ops.dispatch() looping stall
sched_ext: Add task state tracking operations
sched_ext: Implement tickless support
sched_ext: Track tasks that are subjects of the in-flight SCX
operation
sched_ext: Implement sched_ext_ops.cpu_online/offline()
sched_ext: Bypass BPF scheduler while PM events are in progress
sched_ext: Implement core-sched support
sched_ext: Add vtime-ordered priority queue to dispatch_q's
sched_ext: Documentation: scheduler: Document extensible scheduler
class
sched, sched_ext: Replace scx_next_task_picked() with
sched_class->switch_class()
cpufreq_schedutil: Refactor sugov_cpu_is_busy()
sched_ext: Add cpuperf support
sched_ext: Drop tools_clean target from the top-level Makefile
sched_ext: Swap argument positions in kcalloc() call to avoid compiler
warning
sched, sched_ext: Simplify dl_prio() case handling in sched_fork()
sched_ext: Account for idle policy when setting p->scx.weight in
scx_ops_enable_task()
sched_ext: Disallow loading BPF scheduler if isolcpus= domain
isolation is in effect
sched_ext: Minor cleanups in kernel/sched/ext.h
sched, sched_ext: Open code for_balance_class_range()
sched, sched_ext: Move some declarations from kernel/sched/ext.h to
sched.h
sched_ext: Take out ->priq and ->flags from scx_dsq_node
sched_ext: Implement DSQ iterator
sched_ext/scx_qmap: Add an example usage of DSQ iterator
sched_ext: Reimplement scx_bpf_reenqueue_local()
sched_ext: Make scx_bpf_reenqueue_local() skip tasks that are being
migrated
sched: Move struct balance_callback definition upward
sched_ext: Unpin and repin rq lock from balance_scx()
sched_ext: s/SCX_RQ_BALANCING/SCX_RQ_IN_BALANCE/ and add
SCX_RQ_IN_WAKEUP
sched_ext: Allow SCX_DSQ_LOCAL_ON for direct dispatches
sched_ext/scx_qmap: Pick idle CPU for direct dispatch on !wakeup
enqueues
sched_ext: Build fix on !CONFIG_STACKTRACE[_SUPPORT]
sched_ext: Allow p->scx.disallow only while loading
sched_ext: Simplify scx_can_stop_tick() invocation in
sched_can_stop_tick()
sched_ext: Add scx_enabled() test to @start_class promotion in
put_prev_task_balance()
sched_ext: Use update_curr_common() in update_curr_scx()
sched_ext: Simplify UP support by enabling sched_class->balance() in
UP
sched_ext: Improve comment on idle_sched_class exception in
scx_task_iter_next_locked()
sched_ext: Make task_can_run_on_remote_rq() use common
task_allowed_on_cpu()
sched_ext: Fix unsafe list iteration in process_ddsp_deferred_locals()
sched_ext: Make scx_rq_online() also test cpu_active() in addition to
SCX_RQ_ONLINE
sched_ext: Improve logging around enable/disable
sched_ext: Don't use double locking to migrate tasks across CPUs
scx_central: Fix smatch checker warning
sched_ext: Add missing cfi stub for ops.tick
sched_ext: Use task_can_run_on_remote_rq() test in
dispatch_to_local_dsq()
sched_ext: Use sched_clock_cpu() instead of rq_clock_task() in
touch_core_sched()
sched_ext: Don't call put_prev_task_scx() before picking the next task
sched_ext: Replace SCX_TASK_BAL_KEEP with SCX_RQ_BAL_KEEP
sched_ext: Unify regular and core-sched pick task paths
sched_ext: Relocate functions in kernel/sched/ext.c
sched_ext: Remove switch_class_scx()
sched_ext: Remove sched_class->switch_class()
sched_ext: TASK_DEAD tasks must be switched out of SCX on ops_disable
sched_ext: TASK_DEAD tasks must be switched into SCX on ops_enable
sched: Expose css_tg()
sched: Make cpu_shares_read_u64() use tg_weight()
sched: Introduce CONFIG_GROUP_SCHED_WEIGHT
sched_ext: Add cgroup support
sched_ext: Add a cgroup scheduler which uses flattened hierarchy
sched_ext: Temporarily work around pick_task_scx() being called
without balance_scx()
sched_ext: Add missing static to scx_has_op[]
sched_ext: Add missing static to scx_dump_data
sched_ext: Rename scx_kfunc_set_sleepable to unlocked and relocate
sched_ext: Refactor consume_remote_task()
sched_ext: Make find_dsq_for_dispatch() handle SCX_DSQ_LOCAL_ON
sched_ext: Fix processs_ddsp_deferred_locals() by unifying DTL_INVALID
handling
sched_ext: Restructure dispatch_to_local_dsq()
sched_ext: Reorder args for consume_local/remote_task()
sched_ext: Move sanity check and dsq_mod_nr() into
task_unlink_from_dsq()
sched_ext: Move consume_local_task() upward
sched_ext: Replace consume_local_task() with
move_local_task_to_local_dsq()
sched_ext: Compact struct bpf_iter_scx_dsq_kern
sched_ext: Implement scx_bpf_dispatch[_vtime]_from_dsq()
scx_qmap: Implement highpri boosting
sched_ext: Synchronize bypass state changes with rq lock
sched_ext: Don't trigger ops.quiescent/runnable() on migrations
sched_ext: Fix build when !CONFIG_STACKTRACE
sched_ext: Build fix for !CONFIG_SMP
sched_ext: Add __COMPAT helpers for features added during v6.12 devel
cycle
tools/sched_ext: Receive misc updates from SCX repo
scx_flatcg: Use a user DSQ for fallback instead of SCX_DSQ_GLOBAL
sched_ext: Allow only user DSQs for scx_bpf_consume(),
scx_bpf_dsq_nr_queued() and bpf_iter_scx_dsq_new()
sched_ext: Relocate find_user_dsq()
sched_ext: Split the global DSQ per NUMA node
sched_ext: Use shorter slice while bypassing
sched_ext: Relocate check_hotplug_seq() call in scx_ops_enable()
sched_ext: Remove SCX_OPS_PREPPING
sched_ext: Initialize in bypass mode
sched_ext: Fix SCX_TASK_INIT -> SCX_TASK_READY transitions in
scx_ops_enable()
sched_ext: Enable scx_ops_init_task() separately
sched_ext: Add scx_cgroup_enabled to gate cgroup operations and fix
scx_tg_online()
sched_ext: Decouple locks in scx_ops_disable_workfn()
sched_ext: Decouple locks in scx_ops_enable()
sched_ext: Improve error reporting during loading
sched_ext: scx_cgroup_exit() may be called without successful
scx_cgroup_init()
sched/core: Make select_task_rq() take the pointer to wake_flags
instead of value
sched/core: Add ENQUEUE_RQ_SELECTED to indicate whether
->select_task_rq() was called
sched_ext, scx_qmap: Add and use SCX_ENQ_CPU_SELECTED
Revert "sched_ext: Use shorter slice while bypassing"
sched_ext: Start schedulers with consistent p->scx.slice values
sched_ext: Move scx_buildin_idle_enabled check to
scx_bpf_select_cpu_dfl()
sched_ext: bypass mode shouldn't depend on ops.select_cpu()
sched_ext: Move scx_tasks_lock handling into scx_task_iter helpers
sched_ext: Don't hold scx_tasks_lock for too long
sched_ext: Make cast_mask() inline
sched_ext: Fix enq_last_no_enq_fails selftest
sched_ext: Add a missing newline at the end of an error message
sched_ext: Update scx_show_state.py to match scx_ops_bypass_depth's
new type
sched_ext: Handle cases where pick_task_scx() is called without
preceding balance_scx()
sched_ext: ops.cpu_acquire() should be called with SCX_KF_REST
sched_ext: Factor out move_task_between_dsqs() from
scx_dispatch_from_dsq()
sched_ext: Rename CFI stubs to names that are recognized by BPF
sched_ext: Replace set_arg_maybe_null() with __nullable CFI stub tags
sched_ext: Avoid live-locking bypass mode switching
sched_ext: Enable the ops breather and eject BPF scheduler on
softlockup
sched_ext: scx_bpf_dispatch_from_dsq_set_*() are allowed from unlocked
context
sched_ext: Rename scx_bpf_dispatch[_vtime]() to
scx_bpf_dsq_insert[_vtime]()
sched_ext: Rename scx_bpf_consume() to scx_bpf_dsq_move_to_local()
sched_ext: Rename scx_bpf_dispatch[_vtime]_from_dsq*() ->
scx_bpf_dsq_move[_vtime]*()
sched_ext: Fix invalid irq restore in scx_ops_bypass()
sched_ext: Fix dsq_local_on selftest
tools/sched_ext: Receive updates from SCX repo
sched_ext: selftests/dsp_local_on: Fix sporadic failures
sched_ext: Fix incorrect autogroup migration detection
sched_ext: Implement auto local dispatching of migration disabled
tasks
sched_ext: Fix migration disabled handling in targeted dispatches
sched_ext: Fix incorrect assumption about migration disabled tasks in
task_can_run_on_remote_rq()
sched_ext: Fix pick_task_scx() picking non-queued tasks when it's
called without balance()
sched_ext: Implement SCX_OPS_ALLOW_QUEUED_WAKEUP
sched_ext: bpf_iter_scx_dsq_new() should always initialize iterator
sched_ext: Make scx_group_set_weight() always update tg->scx.weight
sched_ext, sched/core: Don't call scx_group_set_weight() prematurely
from sched_create_group()
sched_ext: Mark scx_bpf_dsq_move_set_[slice|vtime]() with KF_RCU
sched_ext: Don't kick CPUs running higher classes
sched_ext: Use SCX_TASK_READY test instead of tryget_task_struct()
during class switch
tools/sched_ext: Sync with scx repo
Thomas Gleixner (1):
sched/ext: Remove sched_fork() hack
Thorsten Blum (1):
sched_ext: Use str_enabled_disabled() helper in
update_selcpu_topology()
Tianchen Ding (1):
sched_ext: Use btf_ids to resolve task_struct
Tony Ambardar (1):
libbpf: Ensure new BTF objects inherit input endianness
Vincent Guittot (2):
sched/cpufreq: Rework schedutil governor performance estimation
sched/fair: Fix sched_can_stop_tick() for fair tasks
Vishal Chourasia (2):
sched_ext: Add __weak markers to BPF helper function decalarations
sched_ext: Fix function pointer type mismatches in BPF selftests
Wenyu Huang (1):
sched/doc: Update documentation after renames and synchronize Chinese
version
Yafang Shao (2):
bpf: Add bits iterator
selftests/bpf: Add selftest for bits iter
Yipeng Zou (1):
sched_ext: Allow dequeue_task_scx to fail
Yiwei Lin (1):
sched/fair: Remove unused 'curr' argument from pick_next_entity()
Yu Liao (2):
sched: Put task_group::idle under CONFIG_GROUP_SCHED_WEIGHT
sched: Add dummy version of sched_group_set_idle()
Yury Norov (1):
cpumask: introduce assign_cpu() macro
Zhang Qiao (3):
sched_ext: Remove redundant p->nr_cpus_allowed checker
sched/ext: Fix unmatch trailing comment of CONFIG_EXT_GROUP_SCHED
sched/ext: Use tg_cgroup() to elieminate duplicate code
Zhao Mengmeng (1):
sched_ext: Replace scx_next_task_picked() with switch_class() in
comment
Zicheng Qu (17):
sched: Fix kabi for reweight_task in struct sched_class
sched/syscalls: Fix kabi for EXPORT_SYMBOL moved from core.c to
syscalls.c
sched: Fix kabi for switching_to in struct sched_class
sched/fair: Fix kabi for check_preempt_curr and wakeup_preempt in
struct sched_class
sched: Fix kabi for dequeue_task in struct sched_class
sched_ext: Fix kabi for scx in struct task_struct
sched_ext: Fix kabi for switch_class in struct sched_class
sched: Fix kabi for exec_max in struct sched_statistics
sched_ext: Fix kabi for balance in struct sched_class
sched_ext: Fix kabi for header in kernel/sched/sched.h
sched: Fix kabi pick_task in struct sched_class
sched: Fix kabi for put_prev_task in struct sched_class
sched_ext: Fix kabi for scx_flags and scx_weight in struct task_group
sched: Fix kabi for int idle in struct task_group
sched: Add __setscheduler_class() for sched_ext
genirq: Fix kabi for kstat_irqs in struct irq_desc
sched_ext: Enable and disable sched_ext configs
Zqiang (1):
sched_ext: Fix unsafe locking in the scx_dump_state()
guanjing (1):
sched_ext: fix application of sizeof to pointer
Documentation/bpf/bpf_iterators.rst | 2 +-
Documentation/bpf/kfuncs.rst | 14 +-
Documentation/scheduler/index.rst | 1 +
Documentation/scheduler/sched-design-CFS.rst | 8 +-
Documentation/scheduler/sched-ext.rst | 325 +
.../zh_CN/scheduler/sched-design-CFS.rst | 8 +-
MAINTAINERS | 16 +-
Makefile | 4 +-
arch/arm64/configs/openeuler_defconfig | 3 +
arch/arm64/kernel/bpf-rvi.c | 4 +-
arch/arm64/net/bpf_jit_comp.c | 55 +-
arch/mips/dec/setup.c | 2 +-
arch/parisc/kernel/smp.c | 2 +-
arch/powerpc/kvm/book3s_hv_rm_xics.c | 2 +-
arch/riscv/include/asm/cfi.h | 3 +-
arch/riscv/kernel/cfi.c | 2 +-
arch/riscv/net/bpf_jit_comp64.c | 48 +-
arch/s390/net/bpf_jit_comp.c | 59 +-
arch/x86/configs/openeuler_defconfig | 2 +
arch/x86/include/asm/cfi.h | 126 +-
arch/x86/kernel/alternative.c | 87 +-
arch/x86/kernel/cfi.c | 4 +-
arch/x86/net/bpf_jit_comp.c | 261 +-
block/blk-cgroup.c | 4 +-
drivers/hid/bpf/hid_bpf_dispatch.c | 12 +-
drivers/tty/sysrq.c | 1 +
fs/proc/stat.c | 4 +-
include/asm-generic/Kbuild | 1 +
include/asm-generic/cfi.h | 5 +
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/bitops.h | 2 +
include/linux/bpf.h | 130 +-
include/linux/bpf_mem_alloc.h | 3 +
include/linux/bpf_verifier.h | 21 +-
include/linux/btf.h | 105 +
include/linux/btf_ids.h | 21 +-
include/linux/cfi.h | 12 +
include/linux/cgroup.h | 14 +-
include/linux/cleanup.h | 42 +-
include/linux/cpumask.h | 41 +-
include/linux/energy_model.h | 1 -
include/linux/file.h | 20 +
include/linux/filter.h | 2 +-
include/linux/irqdesc.h | 17 +-
include/linux/kernel_stat.h | 8 +
include/linux/module.h | 8 +-
include/linux/sched.h | 8 +-
include/linux/sched/ext.h | 216 +
include/linux/sched/task.h | 8 +-
include/trace/events/sched_ext.h | 32 +
include/uapi/linux/bpf.h | 16 +-
include/uapi/linux/sched.h | 1 +
init/Kconfig | 10 +
init/init_task.c | 12 +
kernel/Kconfig.preempt | 27 +-
kernel/bpf-rvi/common_kfuncs.c | 4 +-
kernel/bpf/Makefile | 8 +-
kernel/bpf/bpf_iter.c | 12 +-
kernel/bpf/bpf_struct_ops.c | 745 +-
kernel/bpf/bpf_struct_ops_types.h | 12 -
kernel/bpf/btf.c | 431 +-
kernel/bpf/cgroup_iter.c | 65 +-
kernel/bpf/core.c | 76 +-
kernel/bpf/cpumask.c | 18 +-
kernel/bpf/dispatcher.c | 7 +-
kernel/bpf/helpers.c | 202 +-
kernel/bpf/map_iter.c | 10 +-
kernel/bpf/memalloc.c | 14 +-
kernel/bpf/syscall.c | 12 +-
kernel/bpf/task_iter.c | 242 +-
kernel/bpf/trampoline.c | 99 +-
kernel/bpf/verifier.c | 317 +-
kernel/cgroup/cgroup.c | 18 +-
kernel/cgroup/cpuset.c | 4 +-
kernel/cgroup/rstat.c | 13 +-
kernel/events/core.c | 2 +-
kernel/fork.c | 17 +-
kernel/irq/Kconfig | 4 +
kernel/irq/internals.h | 2 +-
kernel/irq/irqdesc.c | 144 +-
kernel/irq/proc.c | 5 +-
kernel/module/main.c | 5 +-
kernel/sched/autogroup.c | 4 +-
kernel/sched/bpf_sched.c | 8 +-
kernel/sched/build_policy.c | 13 +
kernel/sched/core.c | 2492 +-----
kernel/sched/cpuacct.c | 4 +-
kernel/sched/cpufreq_schedutil.c | 83 +-
kernel/sched/deadline.c | 175 +-
kernel/sched/debug.c | 3 +
kernel/sched/ext.c | 7155 +++++++++++++++++
kernel/sched/ext.h | 119 +
kernel/sched/ext_idle.c | 755 ++
kernel/sched/ext_idle.h | 39 +
kernel/sched/fair.c | 306 +-
kernel/sched/idle.c | 31 +-
kernel/sched/rt.c | 40 +-
kernel/sched/sched.h | 473 +-
kernel/sched/stop_task.c | 35 +-
kernel/sched/syscalls.c | 1713 ++++
kernel/trace/bpf_trace.c | 12 +-
kernel/trace/trace_probe.c | 2 -
kernel/watchdog.c | 223 +-
lib/Kconfig.debug | 14 +
lib/dump_stack.c | 1 +
lib/rhashtable.c | 12 +-
net/bpf/bpf_dummy_struct_ops.c | 72 +-
net/bpf/test_run.c | 30 +-
net/core/filter.c | 33 +-
net/core/xdp.c | 10 +-
net/ipv4/bpf_tcp_ca.c | 93 +-
net/ipv4/fou_bpf.c | 10 +-
net/ipv4/tcp_bbr.c | 4 +-
net/ipv4/tcp_cong.c | 6 +-
net/ipv4/tcp_cubic.c | 4 +-
net/ipv4/tcp_dctcp.c | 4 +-
net/netfilter/nf_conntrack_bpf.c | 10 +-
net/netfilter/nf_nat_bpf.c | 10 +-
net/socket.c | 8 +-
net/xfrm/xfrm_interface_bpf.c | 10 +-
scripts/Makefile.btf | 33 +
scripts/Makefile.modfinal | 2 +-
scripts/gdb/linux/interrupts.py | 6 +-
scripts/pahole-flags.sh | 30 -
tools/Makefile | 10 +-
.../bpf/bpftool/Documentation/bpftool-gen.rst | 58 +-
tools/bpf/bpftool/gen.c | 253 +-
tools/bpf/resolve_btfids/main.c | 8 +
tools/include/linux/bitops.h | 2 +
tools/include/uapi/linux/bpf.h | 14 +-
tools/include/uapi/linux/sched.h | 1 +
tools/lib/bpf/Build | 2 +-
tools/lib/bpf/bpf.c | 4 +-
tools/lib/bpf/bpf.h | 4 +-
tools/lib/bpf/btf.c | 704 +-
tools/lib/bpf/btf.h | 36 +
tools/lib/bpf/btf_iter.c | 177 +
tools/lib/bpf/btf_relocate.c | 519 ++
tools/lib/bpf/libbpf.c | 97 +-
tools/lib/bpf/libbpf.map | 4 +-
tools/lib/bpf/libbpf_internal.h | 29 +-
tools/lib/bpf/libbpf_probes.c | 1 +
tools/lib/bpf/linker.c | 58 +-
tools/perf/util/probe-finder.c | 4 +-
tools/sched_ext/.gitignore | 2 +
tools/sched_ext/Makefile | 246 +
tools/sched_ext/README.md | 270 +
.../sched_ext/include/bpf-compat/gnu/stubs.h | 11 +
tools/sched_ext/include/scx/common.bpf.h | 647 ++
tools/sched_ext/include/scx/common.h | 81 +
tools/sched_ext/include/scx/compat.bpf.h | 143 +
tools/sched_ext/include/scx/compat.h | 187 +
.../sched_ext/include/scx/enums.autogen.bpf.h | 105 +
tools/sched_ext/include/scx/enums.autogen.h | 41 +
tools/sched_ext/include/scx/enums.bpf.h | 12 +
tools/sched_ext/include/scx/enums.h | 27 +
tools/sched_ext/include/scx/user_exit_info.h | 118 +
tools/sched_ext/scx_central.bpf.c | 356 +
tools/sched_ext/scx_central.c | 145 +
tools/sched_ext/scx_flatcg.bpf.c | 954 +++
tools/sched_ext/scx_flatcg.c | 234 +
tools/sched_ext/scx_flatcg.h | 51 +
tools/sched_ext/scx_qmap.bpf.c | 827 ++
tools/sched_ext/scx_qmap.c | 155 +
tools/sched_ext/scx_show_state.py | 42 +
tools/sched_ext/scx_simple.bpf.c | 151 +
tools/sched_ext/scx_simple.c | 107 +
tools/testing/selftests/Makefile | 9 +-
tools/testing/selftests/bpf/.gitignore | 1 +
.../testing/selftests/bpf/bpf_experimental.h | 96 +
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 160 +-
.../selftests/bpf/bpf_testmod/bpf_testmod.h | 61 +
.../bpf/bpf_testmod/bpf_testmod_kfunc.h | 9 +
.../selftests/bpf/prog_tests/bpf_iter.c | 44 +-
.../selftests/bpf/prog_tests/btf_distill.c | 692 ++
.../selftests/bpf/prog_tests/cgroup_iter.c | 33 +
.../bpf/prog_tests/global_func_dead_code.c | 60 +
.../testing/selftests/bpf/prog_tests/iters.c | 209 +
.../selftests/bpf/prog_tests/kfunc_call.c | 1 +
.../selftests/bpf/prog_tests/rcu_read_lock.c | 6 +
.../selftests/bpf/prog_tests/spin_lock.c | 2 +
.../prog_tests/test_struct_ops_maybe_null.c | 46 +
.../bpf/prog_tests/test_struct_ops_module.c | 86 +
.../prog_tests/test_struct_ops_multi_pages.c | 30 +
.../testing/selftests/bpf/prog_tests/timer.c | 4 +
.../selftests/bpf/prog_tests/verifier.c | 4 +
...f_iter_task_vma.c => bpf_iter_task_vmas.c} | 0
.../{bpf_iter_task.c => bpf_iter_tasks.c} | 0
.../bpf/progs/freplace_dead_global_func.c | 11 +
tools/testing/selftests/bpf/progs/iters_css.c | 72 +
.../selftests/bpf/progs/iters_css_task.c | 102 +
.../testing/selftests/bpf/progs/iters_task.c | 41 +
.../selftests/bpf/progs/iters_task_failure.c | 105 +
.../selftests/bpf/progs/iters_task_vma.c | 43 +
.../selftests/bpf/progs/iters_testmod_seq.c | 50 +
.../selftests/bpf/progs/kfunc_call_test.c | 37 +
.../selftests/bpf/progs/rcu_read_lock.c | 120 +
.../bpf/progs/struct_ops_maybe_null.c | 29 +
.../bpf/progs/struct_ops_maybe_null_fail.c | 24 +
.../selftests/bpf/progs/struct_ops_module.c | 37 +
.../bpf/progs/struct_ops_multi_pages.c | 102 +
.../selftests/bpf/progs/test_global_func12.c | 4 +-
.../selftests/bpf/progs/test_spin_lock.c | 65 +
.../selftests/bpf/progs/test_spin_lock_fail.c | 44 +
tools/testing/selftests/bpf/progs/timer.c | 63 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 232 +
.../bpf/progs/verifier_global_subprogs.c | 101 +
.../selftests/bpf/progs/verifier_spin_lock.c | 2 +-
.../bpf/progs/verifier_subprog_precision.c | 4 +-
tools/testing/selftests/bpf/test_loader.c | 10 +-
tools/testing/selftests/bpf/test_maps.c | 18 +-
tools/testing/selftests/bpf/test_maps.h | 5 +
tools/testing/selftests/sched_ext/.gitignore | 6 +
tools/testing/selftests/sched_ext/Makefile | 211 +
tools/testing/selftests/sched_ext/config | 9 +
.../selftests/sched_ext/create_dsq.bpf.c | 58 +
.../testing/selftests/sched_ext/create_dsq.c | 57 +
.../sched_ext/ddsp_bogus_dsq_fail.bpf.c | 42 +
.../selftests/sched_ext/ddsp_bogus_dsq_fail.c | 60 +
.../sched_ext/ddsp_vtimelocal_fail.bpf.c | 39 +
.../sched_ext/ddsp_vtimelocal_fail.c | 59 +
.../selftests/sched_ext/dsp_local_on.bpf.c | 68 +
.../selftests/sched_ext/dsp_local_on.c | 60 +
.../sched_ext/enq_last_no_enq_fails.bpf.c | 29 +
.../sched_ext/enq_last_no_enq_fails.c | 64 +
.../sched_ext/enq_select_cpu_fails.bpf.c | 43 +
.../sched_ext/enq_select_cpu_fails.c | 61 +
tools/testing/selftests/sched_ext/exit.bpf.c | 86 +
tools/testing/selftests/sched_ext/exit.c | 64 +
tools/testing/selftests/sched_ext/exit_test.h | 20 +
.../testing/selftests/sched_ext/hotplug.bpf.c | 61 +
tools/testing/selftests/sched_ext/hotplug.c | 170 +
.../selftests/sched_ext/hotplug_test.h | 15 +
.../sched_ext/init_enable_count.bpf.c | 53 +
.../selftests/sched_ext/init_enable_count.c | 157 +
.../testing/selftests/sched_ext/maximal.bpf.c | 166 +
tools/testing/selftests/sched_ext/maximal.c | 54 +
.../selftests/sched_ext/maybe_null.bpf.c | 36 +
.../testing/selftests/sched_ext/maybe_null.c | 49 +
.../sched_ext/maybe_null_fail_dsp.bpf.c | 25 +
.../sched_ext/maybe_null_fail_yld.bpf.c | 28 +
.../testing/selftests/sched_ext/minimal.bpf.c | 21 +
tools/testing/selftests/sched_ext/minimal.c | 58 +
.../selftests/sched_ext/prog_run.bpf.c | 33 +
tools/testing/selftests/sched_ext/prog_run.c | 78 +
.../testing/selftests/sched_ext/reload_loop.c | 74 +
tools/testing/selftests/sched_ext/runner.c | 212 +
tools/testing/selftests/sched_ext/scx_test.h | 131 +
.../selftests/sched_ext/select_cpu_dfl.bpf.c | 40 +
.../selftests/sched_ext/select_cpu_dfl.c | 75 +
.../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 89 +
.../sched_ext/select_cpu_dfl_nodispatch.c | 75 +
.../sched_ext/select_cpu_dispatch.bpf.c | 41 +
.../selftests/sched_ext/select_cpu_dispatch.c | 73 +
.../select_cpu_dispatch_bad_dsq.bpf.c | 37 +
.../sched_ext/select_cpu_dispatch_bad_dsq.c | 59 +
.../select_cpu_dispatch_dbl_dsp.bpf.c | 38 +
.../sched_ext/select_cpu_dispatch_dbl_dsp.c | 59 +
.../sched_ext/select_cpu_vtime.bpf.c | 92 +
.../selftests/sched_ext/select_cpu_vtime.c | 62 +
.../selftests/sched_ext/test_example.c | 49 +
tools/testing/selftests/sched_ext/util.c | 71 +
tools/testing/selftests/sched_ext/util.h | 13 +
263 files changed, 27732 insertions(+), 3787 deletions(-)
create mode 100644 Documentation/scheduler/sched-ext.rst
create mode 100644 include/asm-generic/cfi.h
create mode 100644 include/linux/sched/ext.h
create mode 100644 include/trace/events/sched_ext.h
delete mode 100644 kernel/bpf/bpf_struct_ops_types.h
create mode 100644 kernel/sched/ext.c
create mode 100644 kernel/sched/ext.h
create mode 100644 kernel/sched/ext_idle.c
create mode 100644 kernel/sched/ext_idle.h
create mode 100644 kernel/sched/syscalls.c
create mode 100644 scripts/Makefile.btf
delete mode 100755 scripts/pahole-flags.sh
create mode 100644 tools/lib/bpf/btf_iter.c
create mode 100644 tools/lib/bpf/btf_relocate.c
create mode 100644 tools/sched_ext/.gitignore
create mode 100644 tools/sched_ext/Makefile
create mode 100644 tools/sched_ext/README.md
create mode 100644 tools/sched_ext/include/bpf-compat/gnu/stubs.h
create mode 100644 tools/sched_ext/include/scx/common.bpf.h
create mode 100644 tools/sched_ext/include/scx/common.h
create mode 100644 tools/sched_ext/include/scx/compat.bpf.h
create mode 100644 tools/sched_ext/include/scx/compat.h
create mode 100644 tools/sched_ext/include/scx/enums.autogen.bpf.h
create mode 100644 tools/sched_ext/include/scx/enums.autogen.h
create mode 100644 tools/sched_ext/include/scx/enums.bpf.h
create mode 100644 tools/sched_ext/include/scx/enums.h
create mode 100644 tools/sched_ext/include/scx/user_exit_info.h
create mode 100644 tools/sched_ext/scx_central.bpf.c
create mode 100644 tools/sched_ext/scx_central.c
create mode 100644 tools/sched_ext/scx_flatcg.bpf.c
create mode 100644 tools/sched_ext/scx_flatcg.c
create mode 100644 tools/sched_ext/scx_flatcg.h
create mode 100644 tools/sched_ext/scx_qmap.bpf.c
create mode 100644 tools/sched_ext/scx_qmap.c
create mode 100644 tools/sched_ext/scx_show_state.py
create mode 100644 tools/sched_ext/scx_simple.bpf.c
create mode 100644 tools/sched_ext/scx_simple.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/btf_distill.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/global_func_dead_code.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_multi_pages.c
rename tools/testing/selftests/bpf/progs/{bpf_iter_task_vma.c => bpf_iter_task_vmas.c} (100%)
rename tools/testing/selftests/bpf/progs/{bpf_iter_task.c => bpf_iter_tasks.c} (100%)
create mode 100644 tools/testing/selftests/bpf/progs/freplace_dead_global_func.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_css.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_css_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_failure.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_vma.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_multi_pages.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_bits_iter.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
create mode 100644 tools/testing/selftests/sched_ext/.gitignore
create mode 100644 tools/testing/selftests/sched_ext/Makefile
create mode 100644 tools/testing/selftests/sched_ext/config
create mode 100644 tools/testing/selftests/sched_ext/create_dsq.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/create_dsq.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.c
create mode 100644 tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/dsp_local_on.c
create mode 100644 tools/testing/selftests/sched_ext/enq_last_no_enq_fails.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/enq_last_no_enq_fails.c
create mode 100644 tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/enq_select_cpu_fails.c
create mode 100644 tools/testing/selftests/sched_ext/exit.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/exit.c
create mode 100644 tools/testing/selftests/sched_ext/exit_test.h
create mode 100644 tools/testing/selftests/sched_ext/hotplug.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/hotplug.c
create mode 100644 tools/testing/selftests/sched_ext/hotplug_test.h
create mode 100644 tools/testing/selftests/sched_ext/init_enable_count.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/init_enable_count.c
create mode 100644 tools/testing/selftests/sched_ext/maximal.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/maximal.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null_fail_dsp.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null_fail_yld.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/minimal.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/minimal.c
create mode 100644 tools/testing/selftests/sched_ext/prog_run.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/prog_run.c
create mode 100644 tools/testing/selftests/sched_ext/reload_loop.c
create mode 100644 tools/testing/selftests/sched_ext/runner.c
create mode 100644 tools/testing/selftests/sched_ext/scx_test.h
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_vtime.c
create mode 100644 tools/testing/selftests/sched_ext/test_example.c
create mode 100644 tools/testing/selftests/sched_ext/util.c
create mode 100644 tools/testing/selftests/sched_ext/util.h
--
2.34.1
03 Apr '26
From: Daniel Hodges <git(a)danielhodges.dev>
stable inclusion
from stable-v6.6.130
commit 3c5c818c78b03a1725f3dcd566865c77b48dd3a6
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13984
CVE: CVE-2026-23281
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 03cc8f90d0537fcd4985c3319b4fafbf2e3fb1f0 ]
The lbs_free_adapter() function uses timer_delete() (non-synchronous)
for both command_timer and tx_lockup_timer before the structure is
freed. This is incorrect because timer_delete() does not wait for
any running timer callback to complete.
If a timer callback is executing when lbs_free_adapter() is called,
the callback will access freed memory since lbs_cfg_free() frees the
containing structure immediately after lbs_free_adapter() returns.
Both timer callbacks (lbs_cmd_timeout_handler and lbs_tx_lockup_handler)
access priv->driver_lock, priv->cur_cmd, priv->dev, and other fields,
which would all be use-after-free violations.
Use timer_delete_sync() instead to ensure any running timer callback
has completed before returning.
This bug was introduced in commit 8f641d93c38a ("libertas: detect TX
lockups and reset hardware") where del_timer() was used instead of
del_timer_sync() in the cleanup path. The command_timer has had the
same issue since the driver was first written.
Fixes: 8f641d93c38a ("libertas: detect TX lockups and reset hardware")
Fixes: 954ee164f4f4 ("[PATCH] libertas: reorganize and simplify init sequence")
Cc: stable(a)vger.kernel.org
Signed-off-by: Daniel Hodges <git(a)danielhodges.dev>
Link: https://patch.msgid.link/20260206195356.15647-1-git@danielhodges.dev
Signed-off-by: Johannes Berg <johannes.berg(a)intel.com>
[ del_timer() => timer_delete_sync() ]
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Wupeng Ma <mawupeng1(a)huawei.com>
---
drivers/net/wireless/marvell/libertas/main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/wireless/marvell/libertas/main.c b/drivers/net/wireless/marvell/libertas/main.c
index 78e8b5aecec0e..91b9501c6d8cb 100644
--- a/drivers/net/wireless/marvell/libertas/main.c
+++ b/drivers/net/wireless/marvell/libertas/main.c
@@ -881,8 +881,8 @@ static void lbs_free_adapter(struct lbs_private *priv)
{
lbs_free_cmd_buffer(priv);
kfifo_free(&priv->event_fifo);
- del_timer(&priv->command_timer);
- del_timer(&priv->tx_lockup_timer);
+ timer_delete_sync(&priv->command_timer);
+ timer_delete_sync(&priv->tx_lockup_timer);
del_timer(&priv->auto_deepsleep_timer);
}
--
2.43.0
[PATCH OLK-6.6 0/2] ip6_tunnel: fix skb_vlan_inet_prepare() return value handling regression
by Li Xiasong 03 Apr '26
This patchset contains a backport of the upstream change that introduced
the issue, followed by the fix.
Patch 1 is a backport of upstream commit f478b8239d65 ("net: tunnel:
make skb_vlan_inet_prepare() return drop reasons") which changed the
return value semantics of skb_vlan_inet_prepare().
Patch 2 adapts the return value handling in __ip6_tnl_rcv() to match
the new semantics, fixing the regression.
Including the upstream change as patch 1 and applying both patches
together avoids the regression window that would exist if patch 1
were merged on its own.
Li Xiasong (1):
ip6_tunnel: adapt to skb_vlan_inet_prepare() return value change
Menglong Dong (1):
net: tunnel: make skb_vlan_inet_prepare() return drop reasons
drivers/net/bareudp.c | 4 ++--
drivers/net/geneve.c | 4 ++--
include/net/ip_tunnels.h | 13 ++++++++-----
net/ipv6/ip6_tunnel.c | 2 +-
4 files changed, 13 insertions(+), 10 deletions(-)
--
2.34.1
[PATCH OLK-6.6] can: usb: etas_es58x: correctly anchor the urb in the read bulk callback
by Wupeng Ma 03 Apr '26
From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
stable inclusion
from stable-v6.6.130
commit f6e90c113c92e83fc0963d5e60e16b0e8a268981
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13957
CVE: CVE-2026-23324
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 5eaad4f768266f1f17e01232ffe2ef009f8129b7 upstream.
When submitting an urb that uses the anchor pattern, it must be
anchored before submission; otherwise it could be leaked if
usb_kill_anchored_urbs() is called. This logic is done correctly
elsewhere in the driver, except in the read bulk callback, so do the
same here.
Cc: Vincent Mailhol <mailhol(a)kernel.org>
Cc: Marc Kleine-Budde <mkl(a)pengutronix.de>
Cc: stable(a)kernel.org
Assisted-by: gkh_clanker_2000
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Reviewed-by: Vincent Mailhol <mailhol(a)kernel.org>
Tested-by: Vincent Mailhol <mailhol(a)kernel.org>
Link: https://patch.msgid.link/2026022320-poser-stiffly-9d84@gregkh
Fixes: 8537257874e9 ("can: etas_es58x: add core support for ETAS ES58X CAN USB interfaces")
Signed-off-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Wupeng Ma <mawupeng1(a)huawei.com>
---
drivers/net/can/usb/etas_es58x/es58x_core.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
index bb49a2c0a9a5c..3edf06106f9d3 100644
--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
+++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
@@ -1461,12 +1461,18 @@ static void es58x_read_bulk_callback(struct urb *urb)
}
resubmit_urb:
+ usb_anchor_urb(urb, &es58x_dev->rx_urbs);
ret = usb_submit_urb(urb, GFP_ATOMIC);
+ if (!ret)
+ return;
+
+ usb_unanchor_urb(urb);
+
if (ret == -ENODEV) {
for (i = 0; i < es58x_dev->num_can_ch; i++)
if (es58x_dev->netdev[i])
netif_device_detach(es58x_dev->netdev[i]);
- } else if (ret)
+ } else
dev_err_ratelimited(dev,
"Failed resubmitting read bulk urb: %pe\n",
ERR_PTR(ret));
--
2.43.0
Aboorva Devarajan (2):
sched_ext: Documentation: Remove mentions of scx_bpf_switch_all
sched: Pass correct scheduling policy to __setscheduler_class
Alan Maguire (16):
kbuild,bpf: Switch to using --btf_features for pahole v1.26 and later
kbuild, bpf: Use test-ge check for v1.25-only pahole
libbpf: Add btf__distill_base() creating split BTF with distilled base
BTF
selftests/bpf: Test distilled base, split BTF generation
libbpf: Split BTF relocation
selftests/bpf: Extend distilled BTF tests to cover BTF relocation
resolve_btfids: Handle presence of .BTF.base section
libbpf: BTF relocation followup fixing naming, loop logic
module, bpf: Store BTF base pointer in struct module
libbpf: Split field iter code into its own file kernel
libbpf,bpf: Share BTF relocate-related code with kernel
kbuild,bpf: Add module-specific pahole flags for distilled base BTF
selftests/bpf: Add kfunc_call test for simple dtor in bpf_testmod
bpf: fix build when CONFIG_DEBUG_INFO_BTF[_MODULES] is undefined
libbpf: Fix error handling in btf__distill_base()
libbpf: Fix license for btf_relocate.c
Alexander Lobakin (1):
bitops: make BYTES_TO_BITS() treewide-available
Alexei Starovoitov (2):
s390/bpf: Fix indirect trampoline generation
bpf: Introduce "volatile compare" macros
Andrea Righi (33):
sched_ext: fix typo in set_weight() description
sched_ext: add CONFIG_DEBUG_INFO_BTF dependency
sched_ext: Provide a sysfs enable_seq counter
sched_ext: improve WAKE_SYNC behavior for default idle CPU selection
sched_ext: Clarify ops.select_cpu() for single-CPU tasks
sched_ext: Introduce LLC awareness to the default idle selection
policy
sched_ext: Introduce NUMA awareness to the default idle selection
policy
sched_ext: Do not enable LLC/NUMA optimizations when domains overlap
sched_ext: Fix incorrect use of bitwise AND
MAINTAINERS: add self as reviewer for sched_ext
sched_ext: idle: Refresh idle masks during idle-to-idle transitions
sched_ext: Use the NUMA scheduling domain for NUMA optimizations
sched_ext: idle: use assign_cpu() to update the idle cpumask
sched_ext: idle: clarify comments
sched_ext: idle: introduce check_builtin_idle_enabled() helper
sched_ext: idle: small CPU iteration refactoring
sched_ext: update scx_bpf_dsq_insert() doc for SCX_DSQ_LOCAL_ON
sched_ext: Include remaining task time slice in error state dump
sched_ext: Include task weight in the error state dump
selftests/sched_ext: Fix enum resolution
tools/sched_ext: Add helper to check task migration state
sched_ext: selftests/dsp_local_on: Fix selftest on UP systems
sched_ext: Fix lock imbalance in dispatch_to_local_dsq()
selftests/sched_ext: Fix exit selftest hang on UP
sched_ext: Move built-in idle CPU selection policy to a separate file
sched_ext: Track currently locked rq
sched_ext: Make scx_locked_rq() inline
sched_ext: Fix missing rq lock in scx_bpf_cpuperf_set()
sched/ext: Fix invalid task state transitions on class switch
sched_ext: Make scx_kf_allowed_if_unlocked() available outside ext.c
sched_ext: Remove duplicate BTF_ID_FLAGS definitions
sched_ext: Fix rq lock state in hotplug ops
sched_ext: Validate prev_cpu in scx_bpf_select_cpu_dfl()
Andrii Nakryiko (14):
bpf: Emit global subprog name in verifier logs
bpf: Validate global subprogs lazily
selftests/bpf: Add lazy global subprog validation tests
libbpf: Add btf__new_split() API that was declared but not implemented
bpf: move sleepable flag from bpf_prog_aux to bpf_prog
libbpf: Add BTF field iterator
libbpf: Make use of BTF field iterator in BPF linker code
libbpf: Make use of BTF field iterator in BTF handling code
bpftool: Use BTF field iterator in btfgen
libbpf: Remove callback-based type/string BTF field visitor helpers
bpf: extract iterator argument type and name validation logic
bpf: allow passing struct bpf_iter_<type> as kfunc arguments
selftests/bpf: test passing iterator to a kfunc
selftests/bpf: validate eliminated global subprog is not freplaceable
Arnaldo Carvalho de Melo (1):
tools include UAPI: Sync linux/sched.h copy with the kernel sources
Atul Kumar Pant (1):
sched_ext: Fixes typos in comments
Benjamin Tissoires (1):
bpf: introduce in_sleepable() helper
Bitao Hu (4):
genirq: Convert kstat_irqs to a struct
genirq: Provide a snapshot mechanism for interrupt statistics
watchdog/softlockup: Low-overhead detection of interrupt storm
watchdog/softlockup: Report the most frequent interrupts
Björn Töpel (1):
selftests: sched_ext: Add sched_ext as proper selftest target
Breno Leitao (3):
rhashtable: Fix potential deadlock by moving schedule_work outside
lock
sched_ext: Use kvzalloc for large exit_dump allocation
sched/ext: Prevent update_locked_rq() calls with NULL rq
Changwoo Min (12):
sched_ext: Clarify sched_ext_ops table for userland scheduler
sched_ext: add a missing rcu_read_lock/unlock pair at
scx_select_cpu_dfl()
MAINTAINERS: add me as reviewer for sched_ext
sched_ext: Replace rq_lock() to raw_spin_rq_lock() in scx_ops_bypass()
sched_ext: Relocate scx_enabled() related code
sched_ext: Implement scx_bpf_now()
sched_ext: Add scx_bpf_now() for BPF scheduler
sched_ext: Add time helpers for BPF schedulers
sched_ext: Replace bpf_ktime_get_ns() to scx_bpf_now()
sched_ext: Use time helpers in BPF schedulers
sched_ext: Fix incorrect time delta calculation in time_delta()
sched_ext: Add scx_bpf_events() and scx_read_event() for BPF
schedulers
Cheng-Yang Chou (1):
sched_ext: Always use SMP versions in kernel/sched/ext.c
Christian Brauner (1):
file: add take_fd() cleanup helper
Christian Loehle (1):
sched/fair: Remove stale FREQUENCY_UTIL comment
Christophe Leroy (2):
bpf: Remove arch_unprotect_bpf_trampoline()
bpf: Check return from set_memory_rox()
Chuyi Zhou (15):
cgroup: Prepare for using css_task_iter_*() in BPF
bpf: Introduce css_task open-coded iterator kfuncs
bpf: Introduce task open coded iterator kfuncs
bpf: Introduce css open-coded iterator kfuncs
bpf: teach the verifier to enforce css_iter and task_iter in RCU CS
bpf: Let bpf_iter_task_new accept null task ptr
selftests/bpf: rename bpf_iter_task.c to bpf_iter_tasks.c
selftests/bpf: Add tests for open-coded task and css iter
bpf: Relax allowlist for css_task iter
selftests/bpf: Add tests for css_task iter combining with cgroup iter
selftests/bpf: Add test for using css_task iter in sleepable progs
bpf: Let verifier consider {task,cgroup} is trusted in bpf_iter_reg
selftests/bpf: get trusted cgrp from bpf_iter__cgroup directly
sched_ext: Fix the incorrect bpf_list kfunc API in common.bpf.h.
sched_ext: Use SCX_CALL_OP_TASK in task_tick_scx
Colin Ian King (1):
sched_ext: Fix spelling mistake: "intead" -> "instead"
Daniel Xu (3):
bpf: btf: Support flags for BTF_SET8 sets
bpf: btf: Add BTF_KFUNCS_START/END macro pair
bpf: treewide: Annotate BPF kfuncs in BTF
Dave Marchevsky (6):
bpf: Don't explicitly emit BTF for struct btf_iter_num
selftests/bpf: Rename bpf_iter_task_vma.c to bpf_iter_task_vmas.c
bpf: Introduce task_vma open-coded iterator kfuncs
selftests/bpf: Add tests for open-coded task_vma iter
bpf: Add __bpf_kfunc_{start,end}_defs macros
bpf: Add __bpf_hook_{start,end} macros
David Vernet (15):
bpf: Add ability to pin bpf timer to calling CPU
selftests/bpf: Test pinning bpf timer to a core
sched_ext: Implement runnable task stall watchdog
sched_ext: Print sched_ext info when dumping stack
sched_ext: Implement SCX_KICK_WAIT
sched_ext: Implement sched_ext_ops.cpu_acquire/release()
sched_ext: Add selftests
bpf: Load vmlinux btf for any struct_ops map
sched_ext: Make scx_bpf_cpuperf_set() @cpu arg signed
scx: Allow calling sleepable kfuncs from BPF_PROG_TYPE_SYSCALL
scx/selftests: Verify we can call create_dsq from prog_run
sched_ext: Remove unnecessary cpu_relax()
scx: Fix exit selftest to use custom DSQ
scx: Fix raciness in scx_ops_bypass()
scx: Fix maximal BPF selftest prog
Dawei Li (1):
genirq: Deduplicate interrupt descriptor initialization
Devaansh Kumar (1):
sched_ext: selftests: Fix grammar in tests description
Devaansh-Kumar (1):
sched_ext: Documentation: Update instructions for running example
schedulers
Eduard Zingerman (2):
libbpf: Make btf_parse_elf process .BTF.base transparently
selftests/bpf: Check if distilled base inherits source endianness
Geliang Tang (1):
bpf, btf: Check btf for register_bpf_struct_ops
Henry Huang (2):
sched_ext: initialize kit->cursor.flags
sched_ext: keep running prev when prev->scx.slice != 0
Herbert Xu (1):
rhashtable: Fix rhashtable_try_insert test
Honglei Wang (3):
sched_ext: use correct function name in pick_task_scx() warning
message
sched_ext: Add __weak to fix the build errors
sched_ext: switch class when preempted by higher priority scheduler
Hongyan Xia (1):
sched/ext: Add BPF function to fetch rq
Hou Tao (7):
bpf: Free dynamically allocated bits in bpf_iter_bits_destroy()
bpf: Add bpf_mem_alloc_check_size() helper
bpf: Check the validity of nr_words in bpf_iter_bits_new()
bpf: Use __u64 to save the bits in bits iterator
selftests/bpf: Add three test cases for bits_iter
selftests/bpf: Use -4095 as the bad address for bits iterator
selftests/bpf: Export map_update_retriable()
Ihor Solodrai (2):
selftests/sched_ext: add order-only dependency of runner.o on BPFOBJ
selftests/sched_ext: fix build after renames in sched_ext API
Ilpo Järvinen (1):
<linux/cleanup.h>: Allow the passing of both iomem and non-iomem
pointers to no_free_ptr()
Ingo Molnar (3):
sched/syscalls: Split out kernel/sched/syscalls.c from
kernel/sched/core.c
sched/fair: Rename check_preempt_wakeup() to
check_preempt_wakeup_fair()
sched/fair: Rename check_preempt_curr() to wakeup_preempt()
Jake Hillion (2):
sched_ext: create_dsq: Return -EEXIST on duplicate request
sched_ext: Drop kfuncs marked for removal in 6.15
Jiapeng Chong (1):
sched_ext: Fixes incorrect type in bpf_scx_init()
Jiayuan Chen (1):
selftests/bpf: Fixes for test_maps test
Kui-Feng Lee (29):
bpf: refactory struct_ops type initialization to a function.
bpf: get type information with BTF_ID_LIST
bpf, net: introduce bpf_struct_ops_desc.
bpf: add struct_ops_tab to btf.
bpf: make struct_ops_map support btfs other than btf_vmlinux.
bpf: pass btf object id in bpf_map_info.
bpf: lookup struct_ops types from a given module BTF.
bpf: pass attached BTF to the bpf_struct_ops subsystem
bpf: hold module refcnt in bpf_struct_ops map creation and prog
verification.
bpf: validate value_type
bpf, net: switch to dynamic registration
libbpf: Find correct module BTFs for struct_ops maps and progs.
bpf: export btf_ctx_access to modules.
selftests/bpf: test case for register_bpf_struct_ops().
bpf: Fix error checks against bpf_get_btf_vmlinux().
bpf: Remove an unnecessary check.
selftests/bpf: Suppress warning message of an unused variable.
bpf: add btf pointer to struct bpf_ctx_arg_aux.
bpf: Move __kfunc_param_match_suffix() to btf.c.
bpf: Create argument information for nullable arguments.
selftests/bpf: Test PTR_MAYBE_NULL arguments of struct_ops operators.
libbpf: Set btf_value_type_id of struct bpf_map for struct_ops.
libbpf: Convert st_ops->data to shadow type.
bpftool: Generated shadow variables for struct_ops maps.
bpftool: Add an example for struct_ops map and shadow type.
selftests/bpf: Test if shadow types work correctly.
bpf, net: validate struct_ops when updating value.
bpf: struct_ops supports more than one page for trampolines.
selftests/bpf: Test struct_ops maps with a large number of struct_ops
program.
Kumar Kartikeya Dwivedi (4):
bpf: Allow calling static subprogs while holding a bpf_spin_lock
selftests/bpf: Add test for static subprog call in lock cs
bpf: Transfer RCU lock state between subprog calls
selftests/bpf: Add tests for RCU lock transfer between subprogs
Liang Jie (1):
sched_ext: Use sizeof_field for key_len in dsq_hash_params
Luo Gengkun (7):
bpf: Fix kabi-breakage for bpf_func_info_aux
bpf: Fix kabi-breakage for bpf_tramp_image
bpf: Fix kabi for bpf_attr
bpf_verifier: Fix kabi for bpf_verifier_env
bpf: Fix kabi for bpf_ctx_arg_aux
bpf: Fix kabi for bpf_prog_aux and bpf_prog
selftests/bpf: modify test_loader that didn't support running
bpf_prog_type_syscall programs
Manu Bretelle (1):
sched_ext: define missing cfi stubs for sched_ext
Martin KaFai Lau (5):
libbpf: Ensure undefined bpf_attr field stays 0
bpf: Remove unnecessary err < 0 check in
bpf_struct_ops_map_update_elem
bpf: Fix a crash when btf_parse_base() returns an error pointer
bpf: Reject struct_ops registration that uses module ptr and the
module btf_id is missing
bpf: Use kallsyms to find the function name of a struct_ops's stub
function
Masahiro Yamada (1):
kbuild: avoid too many execution of scripts/pahole-flags.sh
Matthieu Baerts (1):
bpf: fix compilation error without CGROUPS
Peter Zijlstra (26):
cfi: Flip headers
x86/cfi,bpf: Fix BPF JIT call
x86/cfi,bpf: Fix bpf_callback_t CFI
x86/cfi,bpf: Fix bpf_struct_ops CFI
cfi: Add CFI_NOSEAL()
bpf: Fix dtor CFI
cleanup: Make no_free_ptr() __must_check
sched: Simplify set_user_nice()
sched: Simplify syscalls
sched: Simplify sched_{set,get}affinity()
sched: Simplify yield_to()
sched: Simplify sched_rr_get_interval()
sched: Simplify sched_move_task()
sched: Misc cleanups
sched/deadline: Move bandwidth accounting into {en,de}queue_dl_entity
sched: Allow sched_class::dequeue_task() to fail
sched: Unify runtime accounting across classes
sched: Use set_next_task(.first) where required
sched/fair: Cleanup pick_task_fair() vs throttle
sched/fair: Cleanup pick_task_fair()'s curr
sched/fair: Unify pick_{,next_}_task_fair()
sched: Fixup set_next_task() implementations
sched: Split up put_prev_task_balance()
sched: Rework pick_next_task()
sched: Combine the last put_prev_task() and the first set_next_task()
sched: Add put_prev_task(.next)
Pu Lehui (8):
riscv, bpf: Fix unpredictable kernel crash about RV64 struct_ops
bpf: Fix kabi breakage in struct module
riscv, bpf: Fix out-of-bounds issue when preparing trampoline image
selftests/bpf: Fix btf leak on new btf alloc failure in btf_distill
test
libbpf: Fix return zero when elf_begin failed
libbpf: Fix incorrect traversal end type ID when marking
BTF_IS_EMBEDDED
selftests/bpf: Add distilled BTF test about marking BTF_IS_EMBEDDED
selftests/bpf: Add file_read_pattern to gitignore
Randy Dunlap (1):
sched_ext: fix kernel-doc warnings
Shizhao Chen (1):
sched_ext: Add option -l in selftest runner to list all available
tests
Song Liu (8):
bpf: Charge modmem for struct_ops trampoline
bpf: Let bpf_prog_pack_free handle any pointer
bpf: Adjust argument names of arch_prepare_bpf_trampoline()
bpf: Add helpers for trampoline image management
bpf, x86: Adjust arch_prepare_bpf_trampoline return value
bpf: Add arch_bpf_trampoline_size()
bpf: Use arch_bpf_trampoline_size
x86, bpf: Use bpf_prog_pack for bpf trampoline
T.J. Mercier (1):
bpf, docs: Fix broken link to renamed bpf_iter_task_vmas.c
Tejun Heo (152):
sched: Restructure sched_class order sanity checks in sched_init()
sched: Allow sched_cgroup_fork() to fail and introduce
sched_cancel_fork()
sched: Add sched_class->reweight_task()
sched: Add sched_class->switching_to() and expose
check_class_changing/changed()
sched: Factor out cgroup weight conversion functions
sched: Factor out update_other_load_avgs() from
__update_blocked_others()
sched: Add normal_policy()
sched_ext: Add boilerplate for extensible scheduler class
sched_ext: Implement BPF extensible scheduler class
sched_ext: Add scx_simple and scx_example_qmap example schedulers
sched_ext: Add sysrq-S which disables the BPF scheduler
sched_ext: Allow BPF schedulers to disallow specific tasks from
joining SCHED_EXT
sched_ext: Print debug dump after an error exit
tools/sched_ext: Add scx_show_state.py
sched_ext: Implement scx_bpf_kick_cpu() and task preemption support
sched_ext: Add a central scheduler which makes all scheduling
decisions on one CPU
sched_ext: Make watchdog handle ops.dispatch() looping stall
sched_ext: Add task state tracking operations
sched_ext: Implement tickless support
sched_ext: Track tasks that are subjects of the in-flight SCX
operation
sched_ext: Implement sched_ext_ops.cpu_online/offline()
sched_ext: Bypass BPF scheduler while PM events are in progress
sched_ext: Implement core-sched support
sched_ext: Add vtime-ordered priority queue to dispatch_q's
sched_ext: Documentation: scheduler: Document extensible scheduler
class
sched, sched_ext: Replace scx_next_task_picked() with
sched_class->switch_class()
cpufreq_schedutil: Refactor sugov_cpu_is_busy()
sched_ext: Add cpuperf support
sched_ext: Drop tools_clean target from the top-level Makefile
sched_ext: Swap argument positions in kcalloc() call to avoid compiler
warning
sched, sched_ext: Simplify dl_prio() case handling in sched_fork()
sched_ext: Account for idle policy when setting p->scx.weight in
scx_ops_enable_task()
sched_ext: Disallow loading BPF scheduler if isolcpus= domain
isolation is in effect
sched_ext: Minor cleanups in kernel/sched/ext.h
sched, sched_ext: Open code for_balance_class_range()
sched, sched_ext: Move some declarations from kernel/sched/ext.h to
sched.h
sched_ext: Take out ->priq and ->flags from scx_dsq_node
sched_ext: Implement DSQ iterator
sched_ext/scx_qmap: Add an example usage of DSQ iterator
sched_ext: Reimplement scx_bpf_reenqueue_local()
sched_ext: Make scx_bpf_reenqueue_local() skip tasks that are being
migrated
sched: Move struct balance_callback definition upward
sched_ext: Unpin and repin rq lock from balance_scx()
sched_ext: s/SCX_RQ_BALANCING/SCX_RQ_IN_BALANCE/ and add
SCX_RQ_IN_WAKEUP
sched_ext: Allow SCX_DSQ_LOCAL_ON for direct dispatches
sched_ext/scx_qmap: Pick idle CPU for direct dispatch on !wakeup
enqueues
sched_ext: Build fix on !CONFIG_STACKTRACE[_SUPPORT]
sched_ext: Allow p->scx.disallow only while loading
sched_ext: Simplify scx_can_stop_tick() invocation in
sched_can_stop_tick()
sched_ext: Add scx_enabled() test to @start_class promotion in
put_prev_task_balance()
sched_ext: Use update_curr_common() in update_curr_scx()
sched_ext: Simplify UP support by enabling sched_class->balance() in
UP
sched_ext: Improve comment on idle_sched_class exception in
scx_task_iter_next_locked()
sched_ext: Make task_can_run_on_remote_rq() use common
task_allowed_on_cpu()
sched_ext: Fix unsafe list iteration in process_ddsp_deferred_locals()
sched_ext: Make scx_rq_online() also test cpu_active() in addition to
SCX_RQ_ONLINE
sched_ext: Improve logging around enable/disable
sched_ext: Don't use double locking to migrate tasks across CPUs
scx_central: Fix smatch checker warning
sched_ext: Add missing cfi stub for ops.tick
sched_ext: Use task_can_run_on_remote_rq() test in
dispatch_to_local_dsq()
sched_ext: Use sched_clock_cpu() instead of rq_clock_task() in
touch_core_sched()
sched_ext: Don't call put_prev_task_scx() before picking the next task
sched_ext: Replace SCX_TASK_BAL_KEEP with SCX_RQ_BAL_KEEP
sched_ext: Unify regular and core-sched pick task paths
sched_ext: Relocate functions in kernel/sched/ext.c
sched_ext: Remove switch_class_scx()
sched_ext: Remove sched_class->switch_class()
sched_ext: TASK_DEAD tasks must be switched out of SCX on ops_disable
sched_ext: TASK_DEAD tasks must be switched into SCX on ops_enable
sched: Expose css_tg()
sched: Make cpu_shares_read_u64() use tg_weight()
sched: Introduce CONFIG_GROUP_SCHED_WEIGHT
sched_ext: Add cgroup support
sched_ext: Add a cgroup scheduler which uses flattened hierarchy
sched_ext: Temporarily work around pick_task_scx() being called
without balance_scx()
sched_ext: Add missing static to scx_has_op[]
sched_ext: Add missing static to scx_dump_data
sched_ext: Rename scx_kfunc_set_sleepable to unlocked and relocate
sched_ext: Refactor consume_remote_task()
sched_ext: Make find_dsq_for_dispatch() handle SCX_DSQ_LOCAL_ON
sched_ext: Fix processs_ddsp_deferred_locals() by unifying DTL_INVALID
handling
sched_ext: Restructure dispatch_to_local_dsq()
sched_ext: Reorder args for consume_local/remote_task()
sched_ext: Move sanity check and dsq_mod_nr() into
task_unlink_from_dsq()
sched_ext: Move consume_local_task() upward
sched_ext: Replace consume_local_task() with
move_local_task_to_local_dsq()
sched_ext: Compact struct bpf_iter_scx_dsq_kern
sched_ext: Implement scx_bpf_dispatch[_vtime]_from_dsq()
scx_qmap: Implement highpri boosting
sched_ext: Synchronize bypass state changes with rq lock
sched_ext: Don't trigger ops.quiescent/runnable() on migrations
sched_ext: Fix build when !CONFIG_STACKTRACE
sched_ext: Build fix for !CONFIG_SMP
sched_ext: Add __COMPAT helpers for features added during v6.12 devel
cycle
tools/sched_ext: Receive misc updates from SCX repo
scx_flatcg: Use a user DSQ for fallback instead of SCX_DSQ_GLOBAL
sched_ext: Allow only user DSQs for scx_bpf_consume(),
scx_bpf_dsq_nr_queued() and bpf_iter_scx_dsq_new()
sched_ext: Relocate find_user_dsq()
sched_ext: Split the global DSQ per NUMA node
sched_ext: Use shorter slice while bypassing
sched_ext: Relocate check_hotplug_seq() call in scx_ops_enable()
sched_ext: Remove SCX_OPS_PREPPING
sched_ext: Initialize in bypass mode
sched_ext: Fix SCX_TASK_INIT -> SCX_TASK_READY transitions in
scx_ops_enable()
sched_ext: Enable scx_ops_init_task() separately
sched_ext: Add scx_cgroup_enabled to gate cgroup operations and fix
scx_tg_online()
sched_ext: Decouple locks in scx_ops_disable_workfn()
sched_ext: Decouple locks in scx_ops_enable()
sched_ext: Improve error reporting during loading
sched_ext: scx_cgroup_exit() may be called without successful
scx_cgroup_init()
sched/core: Make select_task_rq() take the pointer to wake_flags
instead of value
sched/core: Add ENQUEUE_RQ_SELECTED to indicate whether
->select_task_rq() was called
sched_ext, scx_qmap: Add and use SCX_ENQ_CPU_SELECTED
Revert "sched_ext: Use shorter slice while bypassing"
sched_ext: Start schedulers with consistent p->scx.slice values
sched_ext: Move scx_buildin_idle_enabled check to
scx_bpf_select_cpu_dfl()
sched_ext: bypass mode shouldn't depend on ops.select_cpu()
sched_ext: Move scx_tasks_lock handling into scx_task_iter helpers
sched_ext: Don't hold scx_tasks_lock for too long
sched_ext: Make cast_mask() inline
sched_ext: Fix enq_last_no_enq_fails selftest
sched_ext: Add a missing newline at the end of an error message
sched_ext: Update scx_show_state.py to match scx_ops_bypass_depth's
new type
sched_ext: Handle cases where pick_task_scx() is called without
preceding balance_scx()
sched_ext: ops.cpu_acquire() should be called with SCX_KF_REST
sched_ext: Factor out move_task_between_dsqs() from
scx_dispatch_from_dsq()
sched_ext: Rename CFI stubs to names that are recognized by BPF
sched_ext: Replace set_arg_maybe_null() with __nullable CFI stub tags
sched_ext: Avoid live-locking bypass mode switching
sched_ext: Enable the ops breather and eject BPF scheduler on
softlockup
sched_ext: scx_bpf_dispatch_from_dsq_set_*() are allowed from unlocked
context
sched_ext: Rename scx_bpf_dispatch[_vtime]() to
scx_bpf_dsq_insert[_vtime]()
sched_ext: Rename scx_bpf_consume() to scx_bpf_dsq_move_to_local()
sched_ext: Rename scx_bpf_dispatch[_vtime]_from_dsq*() ->
scx_bpf_dsq_move[_vtime]*()
sched_ext: Fix invalid irq restore in scx_ops_bypass()
sched_ext: Fix dsq_local_on selftest
tools/sched_ext: Receive updates from SCX repo
sched_ext: selftests/dsp_local_on: Fix sporadic failures
sched_ext: Fix incorrect autogroup migration detection
sched_ext: Implement auto local dispatching of migration disabled
tasks
sched_ext: Fix migration disabled handling in targeted dispatches
sched_ext: Fix incorrect assumption about migration disabled tasks in
task_can_run_on_remote_rq()
sched_ext: Fix pick_task_scx() picking non-queued tasks when it's
called without balance()
sched_ext: Implement SCX_OPS_ALLOW_QUEUED_WAKEUP
sched_ext: bpf_iter_scx_dsq_new() should always initialize iterator
sched_ext: Make scx_group_set_weight() always update tg->scx.weight
sched_ext, sched/core: Don't call scx_group_set_weight() prematurely
from sched_create_group()
sched_ext: Mark scx_bpf_dsq_move_set_[slice|vtime]() with KF_RCU
sched_ext: Don't kick CPUs running higher classes
sched_ext: Use SCX_TASK_READY test instead of tryget_task_struct()
during class switch
tools/sched_ext: Sync with scx repo
Thomas Gleixner (1):
sched/ext: Remove sched_fork() hack
Thorsten Blum (1):
sched_ext: Use str_enabled_disabled() helper in
update_selcpu_topology()
Tianchen Ding (1):
sched_ext: Use btf_ids to resolve task_struct
Tony Ambardar (1):
libbpf: Ensure new BTF objects inherit input endianness
Vincent Guittot (2):
sched/cpufreq: Rework schedutil governor performance estimation
sched/fair: Fix sched_can_stop_tick() for fair tasks
Vishal Chourasia (2):
    sched_ext: Add __weak markers to BPF helper function declarations
sched_ext: Fix function pointer type mismatches in BPF selftests
Wenyu Huang (1):
sched/doc: Update documentation after renames and synchronize Chinese
version
Yafang Shao (2):
bpf: Add bits iterator
selftests/bpf: Add selftest for bits iter
Yipeng Zou (1):
sched_ext: Allow dequeue_task_scx to fail
Yiwei Lin (1):
sched/fair: Remove unused 'curr' argument from pick_next_entity()
Yu Liao (2):
sched: Put task_group::idle under CONFIG_GROUP_SCHED_WEIGHT
sched: Add dummy version of sched_group_set_idle()
Yury Norov (1):
cpumask: introduce assign_cpu() macro
Zhang Qiao (3):
sched_ext: Remove redundant p->nr_cpus_allowed checker
sched/ext: Fix unmatch trailing comment of CONFIG_EXT_GROUP_SCHED
    sched/ext: Use tg_cgroup() to eliminate duplicate code
Zhao Mengmeng (1):
sched_ext: Replace scx_next_task_picked() with switch_class() in
comment
Zicheng Qu (17):
sched: Fix kabi for reweight_task in struct sched_class
sched/syscalls: Fix kabi for EXPORT_SYMBOL moved from core.c to
syscalls.c
sched: Fix kabi for switching_to in struct sched_class
sched/fair: Fix kabi for check_preempt_curr and wakeup_preempt in
struct sched_class
sched: Fix kabi for dequeue_task in struct sched_class
sched_ext: Fix kabi for scx in struct task_struct
sched_ext: Fix kabi for switch_class in struct sched_class
sched: Fix kabi for exec_max in struct sched_statistics
sched_ext: Fix kabi for balance in struct sched_class
sched_ext: Fix kabi for header in kernel/sched/sched.h
sched: Fix kabi pick_task in struct sched_class
sched: Fix kabi for put_prev_task in struct sched_class
sched_ext: Fix kabi for scx_flags and scx_weight in struct task_group
sched: Fix kabi for int idle in struct task_group
sched: Add __setscheduler_class() for sched_ext
genirq: Fix kabi for kstat_irqs in struct irq_desc
sched_ext: Enable and disable sched_ext configs
Zqiang (1):
sched_ext: Fix unsafe locking in the scx_dump_state()
guanjing (1):
sched_ext: fix application of sizeof to pointer
Documentation/bpf/bpf_iterators.rst | 2 +-
Documentation/bpf/kfuncs.rst | 14 +-
Documentation/scheduler/index.rst | 1 +
Documentation/scheduler/sched-design-CFS.rst | 8 +-
Documentation/scheduler/sched-ext.rst | 325 +
.../zh_CN/scheduler/sched-design-CFS.rst | 8 +-
MAINTAINERS | 16 +-
Makefile | 4 +-
arch/arm64/configs/openeuler_defconfig | 3 +
arch/arm64/kernel/bpf-rvi.c | 4 +-
arch/arm64/net/bpf_jit_comp.c | 55 +-
arch/mips/dec/setup.c | 2 +-
arch/parisc/kernel/smp.c | 2 +-
arch/powerpc/kvm/book3s_hv_rm_xics.c | 2 +-
arch/riscv/include/asm/cfi.h | 3 +-
arch/riscv/kernel/cfi.c | 2 +-
arch/riscv/net/bpf_jit_comp64.c | 48 +-
arch/s390/net/bpf_jit_comp.c | 59 +-
arch/x86/configs/openeuler_defconfig | 2 +
arch/x86/include/asm/cfi.h | 126 +-
arch/x86/kernel/alternative.c | 87 +-
arch/x86/kernel/cfi.c | 4 +-
arch/x86/net/bpf_jit_comp.c | 261 +-
block/blk-cgroup.c | 4 +-
drivers/hid/bpf/hid_bpf_dispatch.c | 12 +-
drivers/tty/sysrq.c | 1 +
fs/proc/stat.c | 4 +-
include/asm-generic/Kbuild | 1 +
include/asm-generic/cfi.h | 5 +
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/bitops.h | 2 +
include/linux/bpf.h | 130 +-
include/linux/bpf_mem_alloc.h | 3 +
include/linux/bpf_verifier.h | 21 +-
include/linux/btf.h | 105 +
include/linux/btf_ids.h | 21 +-
include/linux/cfi.h | 12 +
include/linux/cgroup.h | 14 +-
include/linux/cleanup.h | 42 +-
include/linux/cpumask.h | 41 +-
include/linux/energy_model.h | 1 -
include/linux/file.h | 20 +
include/linux/filter.h | 2 +-
include/linux/irqdesc.h | 17 +-
include/linux/kernel_stat.h | 8 +
include/linux/module.h | 8 +-
include/linux/sched.h | 8 +-
include/linux/sched/ext.h | 216 +
include/linux/sched/task.h | 8 +-
include/trace/events/sched_ext.h | 32 +
include/uapi/linux/bpf.h | 16 +-
include/uapi/linux/sched.h | 1 +
init/Kconfig | 10 +
init/init_task.c | 12 +
kernel/Kconfig.preempt | 27 +-
kernel/bpf-rvi/common_kfuncs.c | 4 +-
kernel/bpf/Makefile | 8 +-
kernel/bpf/bpf_iter.c | 12 +-
kernel/bpf/bpf_struct_ops.c | 745 +-
kernel/bpf/bpf_struct_ops_types.h | 12 -
kernel/bpf/btf.c | 431 +-
kernel/bpf/cgroup_iter.c | 65 +-
kernel/bpf/core.c | 76 +-
kernel/bpf/cpumask.c | 18 +-
kernel/bpf/dispatcher.c | 7 +-
kernel/bpf/helpers.c | 202 +-
kernel/bpf/map_iter.c | 10 +-
kernel/bpf/memalloc.c | 14 +-
kernel/bpf/syscall.c | 12 +-
kernel/bpf/task_iter.c | 242 +-
kernel/bpf/trampoline.c | 99 +-
kernel/bpf/verifier.c | 317 +-
kernel/cgroup/cgroup.c | 18 +-
kernel/cgroup/cpuset.c | 4 +-
kernel/cgroup/rstat.c | 13 +-
kernel/events/core.c | 2 +-
kernel/fork.c | 17 +-
kernel/irq/Kconfig | 4 +
kernel/irq/internals.h | 2 +-
kernel/irq/irqdesc.c | 144 +-
kernel/irq/proc.c | 5 +-
kernel/module/main.c | 5 +-
kernel/sched/autogroup.c | 4 +-
kernel/sched/bpf_sched.c | 8 +-
kernel/sched/build_policy.c | 13 +
kernel/sched/core.c | 2492 +-----
kernel/sched/cpuacct.c | 4 +-
kernel/sched/cpufreq_schedutil.c | 83 +-
kernel/sched/deadline.c | 175 +-
kernel/sched/debug.c | 3 +
kernel/sched/ext.c | 7155 +++++++++++++++++
kernel/sched/ext.h | 119 +
kernel/sched/ext_idle.c | 755 ++
kernel/sched/ext_idle.h | 39 +
kernel/sched/fair.c | 306 +-
kernel/sched/idle.c | 31 +-
kernel/sched/rt.c | 40 +-
kernel/sched/sched.h | 473 +-
kernel/sched/stop_task.c | 35 +-
kernel/sched/syscalls.c | 1713 ++++
kernel/trace/bpf_trace.c | 12 +-
kernel/trace/trace_probe.c | 2 -
kernel/watchdog.c | 223 +-
lib/Kconfig.debug | 14 +
lib/dump_stack.c | 1 +
lib/rhashtable.c | 12 +-
net/bpf/bpf_dummy_struct_ops.c | 72 +-
net/bpf/test_run.c | 30 +-
net/core/filter.c | 33 +-
net/core/xdp.c | 10 +-
net/ipv4/bpf_tcp_ca.c | 93 +-
net/ipv4/fou_bpf.c | 10 +-
net/ipv4/tcp_bbr.c | 4 +-
net/ipv4/tcp_cong.c | 6 +-
net/ipv4/tcp_cubic.c | 4 +-
net/ipv4/tcp_dctcp.c | 4 +-
net/netfilter/nf_conntrack_bpf.c | 10 +-
net/netfilter/nf_nat_bpf.c | 10 +-
net/socket.c | 8 +-
net/xfrm/xfrm_interface_bpf.c | 10 +-
scripts/Makefile.btf | 33 +
scripts/Makefile.modfinal | 2 +-
scripts/gdb/linux/interrupts.py | 6 +-
scripts/pahole-flags.sh | 30 -
tools/Makefile | 10 +-
.../bpf/bpftool/Documentation/bpftool-gen.rst | 58 +-
tools/bpf/bpftool/gen.c | 253 +-
tools/bpf/resolve_btfids/main.c | 8 +
tools/include/linux/bitops.h | 2 +
tools/include/uapi/linux/bpf.h | 14 +-
tools/include/uapi/linux/sched.h | 1 +
tools/lib/bpf/Build | 2 +-
tools/lib/bpf/bpf.c | 4 +-
tools/lib/bpf/bpf.h | 4 +-
tools/lib/bpf/btf.c | 704 +-
tools/lib/bpf/btf.h | 36 +
tools/lib/bpf/btf_iter.c | 177 +
tools/lib/bpf/btf_relocate.c | 519 ++
tools/lib/bpf/libbpf.c | 97 +-
tools/lib/bpf/libbpf.map | 4 +-
tools/lib/bpf/libbpf_internal.h | 29 +-
tools/lib/bpf/libbpf_probes.c | 1 +
tools/lib/bpf/linker.c | 58 +-
tools/perf/util/probe-finder.c | 4 +-
tools/sched_ext/.gitignore | 2 +
tools/sched_ext/Makefile | 246 +
tools/sched_ext/README.md | 270 +
.../sched_ext/include/bpf-compat/gnu/stubs.h | 11 +
tools/sched_ext/include/scx/common.bpf.h | 647 ++
tools/sched_ext/include/scx/common.h | 81 +
tools/sched_ext/include/scx/compat.bpf.h | 143 +
tools/sched_ext/include/scx/compat.h | 187 +
.../sched_ext/include/scx/enums.autogen.bpf.h | 105 +
tools/sched_ext/include/scx/enums.autogen.h | 41 +
tools/sched_ext/include/scx/enums.bpf.h | 12 +
tools/sched_ext/include/scx/enums.h | 27 +
tools/sched_ext/include/scx/user_exit_info.h | 118 +
tools/sched_ext/scx_central.bpf.c | 356 +
tools/sched_ext/scx_central.c | 145 +
tools/sched_ext/scx_flatcg.bpf.c | 954 +++
tools/sched_ext/scx_flatcg.c | 234 +
tools/sched_ext/scx_flatcg.h | 51 +
tools/sched_ext/scx_qmap.bpf.c | 827 ++
tools/sched_ext/scx_qmap.c | 155 +
tools/sched_ext/scx_show_state.py | 42 +
tools/sched_ext/scx_simple.bpf.c | 151 +
tools/sched_ext/scx_simple.c | 107 +
tools/testing/selftests/Makefile | 9 +-
tools/testing/selftests/bpf/.gitignore | 1 +
.../testing/selftests/bpf/bpf_experimental.h | 96 +
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 160 +-
.../selftests/bpf/bpf_testmod/bpf_testmod.h | 61 +
.../bpf/bpf_testmod/bpf_testmod_kfunc.h | 9 +
.../selftests/bpf/prog_tests/bpf_iter.c | 44 +-
.../selftests/bpf/prog_tests/btf_distill.c | 692 ++
.../selftests/bpf/prog_tests/cgroup_iter.c | 33 +
.../bpf/prog_tests/global_func_dead_code.c | 60 +
.../testing/selftests/bpf/prog_tests/iters.c | 209 +
.../selftests/bpf/prog_tests/kfunc_call.c | 1 +
.../selftests/bpf/prog_tests/rcu_read_lock.c | 6 +
.../selftests/bpf/prog_tests/spin_lock.c | 2 +
.../prog_tests/test_struct_ops_maybe_null.c | 46 +
.../bpf/prog_tests/test_struct_ops_module.c | 86 +
.../prog_tests/test_struct_ops_multi_pages.c | 30 +
.../testing/selftests/bpf/prog_tests/timer.c | 4 +
.../selftests/bpf/prog_tests/verifier.c | 4 +
...f_iter_task_vma.c => bpf_iter_task_vmas.c} | 0
.../{bpf_iter_task.c => bpf_iter_tasks.c} | 0
.../bpf/progs/freplace_dead_global_func.c | 11 +
tools/testing/selftests/bpf/progs/iters_css.c | 72 +
.../selftests/bpf/progs/iters_css_task.c | 102 +
.../testing/selftests/bpf/progs/iters_task.c | 41 +
.../selftests/bpf/progs/iters_task_failure.c | 105 +
.../selftests/bpf/progs/iters_task_vma.c | 43 +
.../selftests/bpf/progs/iters_testmod_seq.c | 50 +
.../selftests/bpf/progs/kfunc_call_test.c | 37 +
.../selftests/bpf/progs/rcu_read_lock.c | 120 +
.../bpf/progs/struct_ops_maybe_null.c | 29 +
.../bpf/progs/struct_ops_maybe_null_fail.c | 24 +
.../selftests/bpf/progs/struct_ops_module.c | 37 +
.../bpf/progs/struct_ops_multi_pages.c | 102 +
.../selftests/bpf/progs/test_global_func12.c | 4 +-
.../selftests/bpf/progs/test_spin_lock.c | 65 +
.../selftests/bpf/progs/test_spin_lock_fail.c | 44 +
tools/testing/selftests/bpf/progs/timer.c | 63 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 232 +
.../bpf/progs/verifier_global_subprogs.c | 101 +
.../selftests/bpf/progs/verifier_spin_lock.c | 2 +-
.../bpf/progs/verifier_subprog_precision.c | 4 +-
tools/testing/selftests/bpf/test_loader.c | 10 +-
tools/testing/selftests/bpf/test_maps.c | 18 +-
tools/testing/selftests/bpf/test_maps.h | 5 +
tools/testing/selftests/sched_ext/.gitignore | 6 +
tools/testing/selftests/sched_ext/Makefile | 211 +
tools/testing/selftests/sched_ext/config | 9 +
.../selftests/sched_ext/create_dsq.bpf.c | 58 +
.../testing/selftests/sched_ext/create_dsq.c | 57 +
.../sched_ext/ddsp_bogus_dsq_fail.bpf.c | 42 +
.../selftests/sched_ext/ddsp_bogus_dsq_fail.c | 60 +
.../sched_ext/ddsp_vtimelocal_fail.bpf.c | 39 +
.../sched_ext/ddsp_vtimelocal_fail.c | 59 +
.../selftests/sched_ext/dsp_local_on.bpf.c | 68 +
.../selftests/sched_ext/dsp_local_on.c | 60 +
.../sched_ext/enq_last_no_enq_fails.bpf.c | 29 +
.../sched_ext/enq_last_no_enq_fails.c | 64 +
.../sched_ext/enq_select_cpu_fails.bpf.c | 43 +
.../sched_ext/enq_select_cpu_fails.c | 61 +
tools/testing/selftests/sched_ext/exit.bpf.c | 86 +
tools/testing/selftests/sched_ext/exit.c | 64 +
tools/testing/selftests/sched_ext/exit_test.h | 20 +
.../testing/selftests/sched_ext/hotplug.bpf.c | 61 +
tools/testing/selftests/sched_ext/hotplug.c | 170 +
.../selftests/sched_ext/hotplug_test.h | 15 +
.../sched_ext/init_enable_count.bpf.c | 53 +
.../selftests/sched_ext/init_enable_count.c | 157 +
.../testing/selftests/sched_ext/maximal.bpf.c | 166 +
tools/testing/selftests/sched_ext/maximal.c | 54 +
.../selftests/sched_ext/maybe_null.bpf.c | 36 +
.../testing/selftests/sched_ext/maybe_null.c | 49 +
.../sched_ext/maybe_null_fail_dsp.bpf.c | 25 +
.../sched_ext/maybe_null_fail_yld.bpf.c | 28 +
.../testing/selftests/sched_ext/minimal.bpf.c | 21 +
tools/testing/selftests/sched_ext/minimal.c | 58 +
.../selftests/sched_ext/prog_run.bpf.c | 33 +
tools/testing/selftests/sched_ext/prog_run.c | 78 +
.../testing/selftests/sched_ext/reload_loop.c | 74 +
tools/testing/selftests/sched_ext/runner.c | 212 +
tools/testing/selftests/sched_ext/scx_test.h | 131 +
.../selftests/sched_ext/select_cpu_dfl.bpf.c | 40 +
.../selftests/sched_ext/select_cpu_dfl.c | 75 +
.../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 89 +
.../sched_ext/select_cpu_dfl_nodispatch.c | 75 +
.../sched_ext/select_cpu_dispatch.bpf.c | 41 +
.../selftests/sched_ext/select_cpu_dispatch.c | 73 +
.../select_cpu_dispatch_bad_dsq.bpf.c | 37 +
.../sched_ext/select_cpu_dispatch_bad_dsq.c | 59 +
.../select_cpu_dispatch_dbl_dsp.bpf.c | 38 +
.../sched_ext/select_cpu_dispatch_dbl_dsp.c | 59 +
.../sched_ext/select_cpu_vtime.bpf.c | 92 +
.../selftests/sched_ext/select_cpu_vtime.c | 62 +
.../selftests/sched_ext/test_example.c | 49 +
tools/testing/selftests/sched_ext/util.c | 71 +
tools/testing/selftests/sched_ext/util.h | 13 +
263 files changed, 27732 insertions(+), 3787 deletions(-)
create mode 100644 Documentation/scheduler/sched-ext.rst
create mode 100644 include/asm-generic/cfi.h
create mode 100644 include/linux/sched/ext.h
create mode 100644 include/trace/events/sched_ext.h
delete mode 100644 kernel/bpf/bpf_struct_ops_types.h
create mode 100644 kernel/sched/ext.c
create mode 100644 kernel/sched/ext.h
create mode 100644 kernel/sched/ext_idle.c
create mode 100644 kernel/sched/ext_idle.h
create mode 100644 kernel/sched/syscalls.c
create mode 100644 scripts/Makefile.btf
delete mode 100755 scripts/pahole-flags.sh
create mode 100644 tools/lib/bpf/btf_iter.c
create mode 100644 tools/lib/bpf/btf_relocate.c
create mode 100644 tools/sched_ext/.gitignore
create mode 100644 tools/sched_ext/Makefile
create mode 100644 tools/sched_ext/README.md
create mode 100644 tools/sched_ext/include/bpf-compat/gnu/stubs.h
create mode 100644 tools/sched_ext/include/scx/common.bpf.h
create mode 100644 tools/sched_ext/include/scx/common.h
create mode 100644 tools/sched_ext/include/scx/compat.bpf.h
create mode 100644 tools/sched_ext/include/scx/compat.h
create mode 100644 tools/sched_ext/include/scx/enums.autogen.bpf.h
create mode 100644 tools/sched_ext/include/scx/enums.autogen.h
create mode 100644 tools/sched_ext/include/scx/enums.bpf.h
create mode 100644 tools/sched_ext/include/scx/enums.h
create mode 100644 tools/sched_ext/include/scx/user_exit_info.h
create mode 100644 tools/sched_ext/scx_central.bpf.c
create mode 100644 tools/sched_ext/scx_central.c
create mode 100644 tools/sched_ext/scx_flatcg.bpf.c
create mode 100644 tools/sched_ext/scx_flatcg.c
create mode 100644 tools/sched_ext/scx_flatcg.h
create mode 100644 tools/sched_ext/scx_qmap.bpf.c
create mode 100644 tools/sched_ext/scx_qmap.c
create mode 100644 tools/sched_ext/scx_show_state.py
create mode 100644 tools/sched_ext/scx_simple.bpf.c
create mode 100644 tools/sched_ext/scx_simple.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/btf_distill.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/global_func_dead_code.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_struct_ops_multi_pages.c
rename tools/testing/selftests/bpf/progs/{bpf_iter_task_vma.c => bpf_iter_task_vmas.c} (100%)
rename tools/testing/selftests/bpf/progs/{bpf_iter_task.c => bpf_iter_tasks.c} (100%)
create mode 100644 tools/testing/selftests/bpf/progs/freplace_dead_global_func.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_css.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_css_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_failure.c
create mode 100644 tools/testing/selftests/bpf/progs/iters_task_vma.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_maybe_null_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_module.c
create mode 100644 tools/testing/selftests/bpf/progs/struct_ops_multi_pages.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_bits_iter.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
create mode 100644 tools/testing/selftests/sched_ext/.gitignore
create mode 100644 tools/testing/selftests/sched_ext/Makefile
create mode 100644 tools/testing/selftests/sched_ext/config
create mode 100644 tools/testing/selftests/sched_ext/create_dsq.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/create_dsq.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.c
create mode 100644 tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/dsp_local_on.c
create mode 100644 tools/testing/selftests/sched_ext/enq_last_no_enq_fails.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/enq_last_no_enq_fails.c
create mode 100644 tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/enq_select_cpu_fails.c
create mode 100644 tools/testing/selftests/sched_ext/exit.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/exit.c
create mode 100644 tools/testing/selftests/sched_ext/exit_test.h
create mode 100644 tools/testing/selftests/sched_ext/hotplug.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/hotplug.c
create mode 100644 tools/testing/selftests/sched_ext/hotplug_test.h
create mode 100644 tools/testing/selftests/sched_ext/init_enable_count.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/init_enable_count.c
create mode 100644 tools/testing/selftests/sched_ext/maximal.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/maximal.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null_fail_dsp.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/maybe_null_fail_yld.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/minimal.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/minimal.c
create mode 100644 tools/testing/selftests/sched_ext/prog_run.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/prog_run.c
create mode 100644 tools/testing/selftests/sched_ext/reload_loop.c
create mode 100644 tools/testing/selftests/sched_ext/runner.c
create mode 100644 tools/testing/selftests/sched_ext/scx_test.h
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/select_cpu_vtime.c
create mode 100644 tools/testing/selftests/sched_ext/test_example.c
create mode 100644 tools/testing/selftests/sched_ext/util.c
create mode 100644 tools/testing/selftests/sched_ext/util.h
--
2.34.1
From: Chaitanya Kulkarni <ckulkarnilinux(a)gmail.com>
stable inclusion
from stable-v6.6.124
commit 7c54d3f5ebbc5982daaa004260242dc07ac943ea
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13892
CVE: CVE-2026-23261
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit d1877cc7270302081a315a81a0ee8331f19f95c8 ]
nvme_fabrics creates an NVMe/FC controller in the following path:
nvmf_dev_write()
-> nvmf_create_ctrl()
-> nvme_fc_create_ctrl()
-> nvme_fc_init_ctrl()
nvme_fc_init_ctrl() allocates the admin blk-mq resources right after
nvme_add_ctrl() succeeds. If any of the subsequent steps fail (changing
the controller state, scheduling connect work, etc.), we jump to the
fail_ctrl path, which tears down the controller references but never
frees the admin queue/tag set. The leaked blk-mq allocations match the
kmemleak report seen during blktests nvme/fc.
Check ctrl->ctrl.admin_tagset in the fail_ctrl path and call
nvme_remove_admin_tag_set() when it is set so that all admin queue
allocations are reclaimed whenever controller setup aborts.
Reported-by: Yi Zhang <yi.zhang(a)redhat.com>
Reviewed-by: Justin Tee <justin.tee(a)broadcom.com>
Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux(a)gmail.com>
Signed-off-by: Keith Busch <kbusch(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Zheng Qixing <zhengqixing(a)huawei.com>
---
drivers/nvme/host/fc.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index db339f72ce62..745b3babb849 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -3546,6 +3546,8 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
ctrl->ctrl.opts = NULL;
+ if (ctrl->ctrl.admin_tagset)
+ nvme_remove_admin_tag_set(&ctrl->ctrl);
/* initiate nvme ctrl ref counting teardown */
nvme_uninit_ctrl(&ctrl->ctrl);
--
2.39.2
03 Apr '26
From: Daniel Hodges <git(a)danielhodges.dev>
stable inclusion
from stable-v6.6.130
commit 3c5c818c78b03a1725f3dcd566865c77b48dd3a6
category: bugfix
bugzilla: 13984
CVE: CVE-2026-23281
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 03cc8f90d0537fcd4985c3319b4fafbf2e3fb1f0 ]
The lbs_free_adapter() function uses timer_delete() (non-synchronous)
for both command_timer and tx_lockup_timer before the structure is
freed. This is incorrect because timer_delete() does not wait for
any running timer callback to complete.
If a timer callback is executing when lbs_free_adapter() is called,
the callback will access freed memory since lbs_cfg_free() frees the
containing structure immediately after lbs_free_adapter() returns.
Both timer callbacks (lbs_cmd_timeout_handler and lbs_tx_lockup_handler)
access priv->driver_lock, priv->cur_cmd, priv->dev, and other fields,
which would all be use-after-free violations.
Use timer_delete_sync() instead to ensure any running timer callback
has completed before returning.
This bug was introduced in commit 8f641d93c38a ("libertas: detect TX
lockups and reset hardware") where del_timer() was used instead of
del_timer_sync() in the cleanup path. The command_timer has had the
same issue since the driver was first written.
Fixes: 8f641d93c38a ("libertas: detect TX lockups and reset hardware")
Fixes: 954ee164f4f4 ("[PATCH] libertas: reorganize and simplify init sequence")
Cc: stable(a)vger.kernel.org
Signed-off-by: Daniel Hodges <git(a)danielhodges.dev>
Link: https://patch.msgid.link/20260206195356.15647-1-git@danielhodges.dev
Signed-off-by: Johannes Berg <johannes.berg(a)intel.com>
[ del_timer() => timer_delete_sync() ]
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Wupeng Ma <mawupeng1(a)huawei.com>
---
drivers/net/wireless/marvell/libertas/main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/wireless/marvell/libertas/main.c b/drivers/net/wireless/marvell/libertas/main.c
index 78e8b5aecec0e..91b9501c6d8cb 100644
--- a/drivers/net/wireless/marvell/libertas/main.c
+++ b/drivers/net/wireless/marvell/libertas/main.c
@@ -881,8 +881,8 @@ static void lbs_free_adapter(struct lbs_private *priv)
{
lbs_free_cmd_buffer(priv);
kfifo_free(&priv->event_fifo);
- del_timer(&priv->command_timer);
- del_timer(&priv->tx_lockup_timer);
+ timer_delete_sync(&priv->command_timer);
+ timer_delete_sync(&priv->tx_lockup_timer);
del_timer(&priv->auto_deepsleep_timer);
}
--
2.43.0
[PATCH OLK-6.6] can: usb: etas_es58x: correctly anchor the urb in the read bulk callback
by Wupeng Ma 03 Apr '26
From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
stable inclusion
from stable-v6.6.130
commit f6e90c113c92e83fc0963d5e60e16b0e8a268981
category: bugfix
bugzilla: 13957
CVE: CVE-2026-23324
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 5eaad4f768266f1f17e01232ffe2ef009f8129b7 upstream.
When submitting an urb that uses the anchor pattern, the urb needs to be
anchored before submission; otherwise it could be leaked if
usb_kill_anchored_urbs() is called. This logic is correctly done
elsewhere in the driver, except in the read bulk callback, so do that
here as well.
Cc: Vincent Mailhol <mailhol(a)kernel.org>
Cc: Marc Kleine-Budde <mkl(a)pengutronix.de>
Cc: stable(a)kernel.org
Assisted-by: gkh_clanker_2000
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Reviewed-by: Vincent Mailhol <mailhol(a)kernel.org>
Tested-by: Vincent Mailhol <mailhol(a)kernel.org>
Link: https://patch.msgid.link/2026022320-poser-stiffly-9d84@gregkh
Fixes: 8537257874e9 ("can: etas_es58x: add core support for ETAS ES58X CAN USB interfaces")
Signed-off-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Wupeng Ma <mawupeng1(a)huawei.com>
---
drivers/net/can/usb/etas_es58x/es58x_core.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
index bb49a2c0a9a5c..3edf06106f9d3 100644
--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
+++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
@@ -1461,12 +1461,18 @@ static void es58x_read_bulk_callback(struct urb *urb)
}
resubmit_urb:
+ usb_anchor_urb(urb, &es58x_dev->rx_urbs);
ret = usb_submit_urb(urb, GFP_ATOMIC);
+ if (!ret)
+ return;
+
+ usb_unanchor_urb(urb);
+
if (ret == -ENODEV) {
for (i = 0; i < es58x_dev->num_can_ch; i++)
if (es58x_dev->netdev[i])
netif_device_detach(es58x_dev->netdev[i]);
- } else if (ret)
+ } else
dev_err_ratelimited(dev,
"Failed resubmitting read bulk urb: %pe\n",
ERR_PTR(ret));
--
2.43.0
Hello!
Kernel invites you to a WeLink meeting (auto-recorded) to be held at 2026-04-03 14:00.
Meeting topic: openEuler Kernel SIG biweekly meeting
Organizer: LiaoTao_Wave
Agenda:
1. Progress update
2. Topic collection in progress (new topics can be added directly to the meeting board)
Meeting link: https://meeting.huaweicloud.com:36443/#/j/981797149
Minutes & sign-in: https://etherpad.openeuler.org/p/Kernel-meetings
More information: https://www.openeuler.org/en/
Qinxin Xia (2):
arm-smmu-v3: add HIP09A, HIP09B, HIP10C, HIP10CA for 162100602 errata
ACPI/IORT: Add PMCG platform information for 162001900
drivers/acpi/arm64/iort.c | 4 ++++
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 6 ++++++
2 files changed, 10 insertions(+)
--
2.25.1
Reuse SUBSYS for xcu and freezer to preserve KABI
Liu Kai (5):
xSched/cgroup: reuse SUBSYS for xcu and freezer to preserve KABI
xSched/cgroup: make xcu.stat invisible at root cgroup
cgroup: sync CGROUP_SUBSYS_COUNT limit with upstream to 16
xSched: enable CONFIG_CGROUP_XCU and CONFIG_XCU_SCHED_CFS in arm64/x86
defconfig
xSched: update xSched manual for xcu cmdline enable option
Documentation/scheduler/xsched.md | 6 +-
arch/arm64/configs/openeuler_defconfig | 3 +-
arch/x86/configs/openeuler_defconfig | 3 +-
include/linux/cgroup_subsys.h | 8 +-
include/linux/freezer.h | 24 ++++
kernel/cgroup/cgroup.c | 2 +-
kernel/cgroup/legacy_freezer.c | 25 +++-
kernel/xsched/cgroup.c | 174 +++++++++++++++++++++++--
8 files changed, 217 insertions(+), 28 deletions(-)
--
2.34.1
Eric Dumazet (1):
net: prevent NULL deref in ip[6]tunnel_xmit()
Weiming Shi (1):
net: add xmit recursion limit to tunnel xmit functions
include/net/ip6_tunnel.h | 14 ++++++++++++++
include/net/ip_tunnels.h | 7 +++++++
net/ipv4/ip_tunnel_core.c | 15 +++++++++++++++
3 files changed, 36 insertions(+)
--
2.34.1
From: Eric Dumazet <edumazet(a)google.com>
mainline inclusion
from mainline-v7.0-rc3
commit 165573e41f2f66ef98940cf65f838b2cb575d9d1
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13867
CVE: CVE-2026-23247
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
This reverts 28ee1b746f49 ("secure_seq: downgrade to per-host timestamp offsets")
tcp_tw_recycle went away in 2017.
Zhouyan Deng reported off-path TCP source port leakage via a
SYN cookie side-channel; it can be fixed in multiple ways.
One of them is to bring back TCP ports in TS offset randomization.
As a bonus, we perform a single siphash() computation
to provide both an ISN and a TS offset.
Fixes: 28ee1b746f49 ("secure_seq: downgrade to per-host timestamp offsets")
Reported-by: Zhouyan Deng <dengzhouyan_nwpu(a)163.com>
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu(a)google.com>
Acked-by: Florian Westphal <fw(a)strlen.de>
Link: https://patch.msgid.link/20260302205527.1982836-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Conflicts:
include/net/secure_seq.h
include/net/tcp.h
net/core/secure_seq.c
net/ipv4/syncookies.c
net/ipv4/tcp_input.c
net/ipv4/tcp_ipv4.c
net/ipv6/syncookies.c
net/ipv6/tcp_ipv6.c
[conflicts due to having merged
bf8317ce3910 ("net: use __GENKSYMS__ to revert the kabi change") and not having merged
2a63dd0edf38 ("net: Retire DCCP socket.") and
6dc4c2526f6d ("tcp: use EXPORT_IPV6_MOD[_GPL]()") and
8e7bab6b9652 ("tcp: Factorise cookie-independent fields initialisation in cookie_v[46]_check()") and
cdbab6236605 ("tcp: fix fastopen code vs usec TS") and
449f68f8fffa ("net: Convert proto callbacks from sockaddr to sockaddr_unsized").]
Signed-off-by: Li Xiasong <lixiasong1(a)huawei.com>
---
include/net/secure_seq.h | 45 ++++++++++++++++++----
include/net/tcp.h | 6 ++-
net/core/secure_seq.c | 80 +++++++++++++++-------------------------
net/ipv4/syncookies.c | 11 ++++--
net/ipv4/tcp_input.c | 10 +++--
net/ipv4/tcp_ipv4.c | 37 +++++++++----------
net/ipv6/syncookies.c | 11 ++++--
net/ipv6/tcp_ipv6.c | 37 +++++++++----------
8 files changed, 128 insertions(+), 109 deletions(-)
diff --git a/include/net/secure_seq.h b/include/net/secure_seq.h
index 21e7fa2a1813..e4ad681a2a12 100644
--- a/include/net/secure_seq.h
+++ b/include/net/secure_seq.h
@@ -5,20 +5,51 @@
#include <linux/types.h>
struct net;
+extern struct net init_net;
+
+union tcp_seq_and_ts_off {
+ struct {
+ u32 seq;
+ u32 ts_off;
+ };
+ u64 hash64;
+};
u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport);
u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
__be16 dport);
-u32 secure_tcp_seq(__be32 saddr, __be32 daddr,
- __be16 sport, __be16 dport);
-u32 secure_tcp_ts_off(const struct net *net, __be32 saddr, __be32 daddr);
-u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr,
- __be16 sport, __be16 dport);
-u32 secure_tcpv6_ts_off(const struct net *net,
- const __be32 *saddr, const __be32 *daddr);
u64 secure_dccp_sequence_number(__be32 saddr, __be32 daddr,
__be16 sport, __be16 dport);
u64 secure_dccpv6_sequence_number(__be32 *saddr, __be32 *daddr,
__be16 sport, __be16 dport);
+union tcp_seq_and_ts_off
+secure_tcp_seq_and_ts_off(const struct net *net, __be32 saddr, __be32 daddr,
+ __be16 sport, __be16 dport);
+
+static inline u32 secure_tcp_seq(__be32 saddr, __be32 daddr,
+ __be16 sport, __be16 dport)
+{
+ union tcp_seq_and_ts_off ts;
+
+ ts = secure_tcp_seq_and_ts_off(&init_net, saddr, daddr,
+ sport, dport);
+
+ return ts.seq;
+}
+
+union tcp_seq_and_ts_off
+secure_tcpv6_seq_and_ts_off(const struct net *net, const __be32 *saddr,
+ const __be32 *daddr,
+ __be16 sport, __be16 dport);
+
+static inline u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr,
+ __be16 sport, __be16 dport)
+{
+ union tcp_seq_and_ts_off ts;
+
+ ts = secure_tcpv6_seq_and_ts_off(&init_net, saddr, daddr,
+ sport, dport);
+ return ts.seq;
+}
#endif /* _NET_SECURE_SEQ */
diff --git a/include/net/tcp.h b/include/net/tcp.h
index e8f81924defc..f0a6468a1c34 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -44,6 +44,7 @@
#ifndef __GENKSYMS__
#include <net/xfrm.h>
#endif
+#include <net/secure_seq.h>
#include <linux/seq_file.h>
#include <linux/memcontrol.h>
@@ -2180,8 +2181,9 @@ struct tcp_request_sock_ops {
struct sk_buff *skb,
struct flowi *fl,
struct request_sock *req);
- u32 (*init_seq)(const struct sk_buff *skb);
- u32 (*init_ts_off)(const struct net *net, const struct sk_buff *skb);
+ union tcp_seq_and_ts_off (*init_seq_and_ts_off)(
+ const struct net *net,
+ const struct sk_buff *skb);
int (*send_synack)(const struct sock *sk, struct dst_entry *dst,
struct flowi *fl, struct request_sock *req,
struct tcp_fastopen_cookie *foc,
diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c
index b0ff6153be62..740642aeaf76 100644
--- a/net/core/secure_seq.c
+++ b/net/core/secure_seq.c
@@ -20,7 +20,6 @@
#include <net/tcp.h>
static siphash_aligned_key_t net_secret;
-static siphash_aligned_key_t ts_secret;
#define EPHEMERAL_PORT_SHUFFLE_PERIOD (10 * HZ)
@@ -28,11 +27,6 @@ static __always_inline void net_secret_init(void)
{
net_get_random_once(&net_secret, sizeof(net_secret));
}
-
-static __always_inline void ts_secret_init(void)
-{
- net_get_random_once(&ts_secret, sizeof(ts_secret));
-}
#endif
#ifdef CONFIG_INET
@@ -53,28 +47,9 @@ static u32 seq_scale(u32 seq)
#endif
#if IS_ENABLED(CONFIG_IPV6)
-u32 secure_tcpv6_ts_off(const struct net *net,
- const __be32 *saddr, const __be32 *daddr)
-{
- const struct {
- struct in6_addr saddr;
- struct in6_addr daddr;
- } __aligned(SIPHASH_ALIGNMENT) combined = {
- .saddr = *(struct in6_addr *)saddr,
- .daddr = *(struct in6_addr *)daddr,
- };
-
- if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1)
- return 0;
-
- ts_secret_init();
- return siphash(&combined, offsetofend(typeof(combined), daddr),
- &ts_secret);
-}
-EXPORT_SYMBOL(secure_tcpv6_ts_off);
-
-u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr,
- __be16 sport, __be16 dport)
+union tcp_seq_and_ts_off
+secure_tcpv6_seq_and_ts_off(const struct net *net, const __be32 *saddr,
+ const __be32 *daddr, __be16 sport, __be16 dport)
{
const struct {
struct in6_addr saddr;
@@ -87,14 +62,20 @@ u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr,
.sport = sport,
.dport = dport
};
- u32 hash;
+ union tcp_seq_and_ts_off st;
net_secret_init();
- hash = siphash(&combined, offsetofend(typeof(combined), dport),
- &net_secret);
- return seq_scale(hash);
+
+ st.hash64 = siphash(&combined, offsetofend(typeof(combined), dport),
+ &net_secret);
+
+ if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1)
+ st.ts_off = 0;
+
+ st.seq = seq_scale(st.seq);
+ return st;
}
-EXPORT_SYMBOL(secure_tcpv6_seq);
+EXPORT_SYMBOL(secure_tcpv6_seq_and_ts_off);
u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
__be16 dport)
@@ -118,33 +99,30 @@ EXPORT_SYMBOL(secure_ipv6_port_ephemeral);
#endif
#ifdef CONFIG_INET
-u32 secure_tcp_ts_off(const struct net *net, __be32 saddr, __be32 daddr)
-{
- if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1)
- return 0;
-
- ts_secret_init();
- return siphash_2u32((__force u32)saddr, (__force u32)daddr,
- &ts_secret);
-}
-
/* secure_tcp_seq_and_tsoff(a, b, 0, d) == secure_ipv4_port_ephemeral(a, b, d),
* but fortunately, `sport' cannot be 0 in any circumstances. If this changes,
* it would be easy enough to have the former function use siphash_4u32, passing
* the arguments as separate u32.
*/
-u32 secure_tcp_seq(__be32 saddr, __be32 daddr,
- __be16 sport, __be16 dport)
+union tcp_seq_and_ts_off
+secure_tcp_seq_and_ts_off(const struct net *net, __be32 saddr, __be32 daddr,
+ __be16 sport, __be16 dport)
{
- u32 hash;
+ u32 ports = (__force u32)sport << 16 | (__force u32)dport;
+ union tcp_seq_and_ts_off st;
net_secret_init();
- hash = siphash_3u32((__force u32)saddr, (__force u32)daddr,
- (__force u32)sport << 16 | (__force u32)dport,
- &net_secret);
- return seq_scale(hash);
+
+ st.hash64 = siphash_3u32((__force u32)saddr, (__force u32)daddr,
+ ports, &net_secret);
+
+ if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1)
+ st.ts_off = 0;
+
+ st.seq = seq_scale(st.seq);
+ return st;
}
-EXPORT_SYMBOL_GPL(secure_tcp_seq);
+EXPORT_SYMBOL_GPL(secure_tcp_seq_and_ts_off);
u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
{
diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index 64f098d127ac..b8f7e2b0b4f3 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -353,9 +353,14 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
tcp_parse_options(sock_net(sk), skb, &tcp_opt, 0, NULL);
if (tcp_opt.saw_tstamp && tcp_opt.rcv_tsecr) {
- tsoff = secure_tcp_ts_off(sock_net(sk),
- ip_hdr(skb)->daddr,
- ip_hdr(skb)->saddr);
+ union tcp_seq_and_ts_off st;
+
+ st = secure_tcp_seq_and_ts_off(sock_net(sk),
+ ip_hdr(skb)->daddr,
+ ip_hdr(skb)->saddr,
+ tcp_hdr(skb)->dest,
+ tcp_hdr(skb)->source);
+ tsoff = st.ts_off;
tcp_opt.rcv_tsecr -= tsoff;
}
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 8d6757ec06aa..9ca0a78d5a4b 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -7165,6 +7165,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
struct tcp_sock *tp = tcp_sk(sk);
struct net *net = sock_net(sk);
struct sock *fastopen_sk = NULL;
+ union tcp_seq_and_ts_off st;
struct request_sock *req;
bool want_cookie = false;
struct dst_entry *dst;
@@ -7227,9 +7228,12 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
if (!dst)
goto drop_and_free;
- if (tmp_opt.tstamp_ok)
- tcp_rsk(req)->ts_off = af_ops->init_ts_off(net, skb);
+ if (tmp_opt.tstamp_ok || (!want_cookie && !isn))
+ st = af_ops->init_seq_and_ts_off(net, skb);
+ if (tmp_opt.tstamp_ok) {
+ tcp_rsk(req)->ts_off = st.ts_off;
+ }
if (!want_cookie && !isn) {
int max_syn_backlog = READ_ONCE(net->ipv4.sysctl_max_syn_backlog);
@@ -7250,7 +7254,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
goto drop_and_release;
}
- isn = af_ops->init_seq(skb);
+ isn = st.seq;
}
tcp_ecn_create_request(req, skb, sk, dst);
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 56d00db07d12..ceb34b30218b 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -96,17 +96,14 @@ static DEFINE_PER_CPU(struct sock *, ipv4_tcp_sk);
static DEFINE_MUTEX(tcp_exit_batch_mutex);
-static u32 tcp_v4_init_seq(const struct sk_buff *skb)
+static union tcp_seq_and_ts_off
+tcp_v4_init_seq_and_ts_off(const struct net *net, const struct sk_buff *skb)
{
- return secure_tcp_seq(ip_hdr(skb)->daddr,
- ip_hdr(skb)->saddr,
- tcp_hdr(skb)->dest,
- tcp_hdr(skb)->source);
-}
-
-static u32 tcp_v4_init_ts_off(const struct net *net, const struct sk_buff *skb)
-{
- return secure_tcp_ts_off(net, ip_hdr(skb)->daddr, ip_hdr(skb)->saddr);
+ return secure_tcp_seq_and_ts_off(net,
+ ip_hdr(skb)->daddr,
+ ip_hdr(skb)->saddr,
+ tcp_hdr(skb)->dest,
+ tcp_hdr(skb)->source);
}
int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
@@ -313,15 +310,16 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
rt = NULL;
if (likely(!tp->repair)) {
+ union tcp_seq_and_ts_off st;
+
+ st = secure_tcp_seq_and_ts_off(net,
+ inet->inet_saddr,
+ inet->inet_daddr,
+ inet->inet_sport,
+ usin->sin_port);
if (!tp->write_seq)
- WRITE_ONCE(tp->write_seq,
- secure_tcp_seq(inet->inet_saddr,
- inet->inet_daddr,
- inet->inet_sport,
- usin->sin_port));
- WRITE_ONCE(tp->tsoffset,
- secure_tcp_ts_off(net, inet->inet_saddr,
- inet->inet_daddr));
+ WRITE_ONCE(tp->write_seq, st.seq);
+ WRITE_ONCE(tp->tsoffset, st.ts_off);
}
atomic_set(&inet->inet_id, get_random_u16());
@@ -1535,8 +1533,7 @@ const struct tcp_request_sock_ops tcp_request_sock_ipv4_ops = {
.cookie_init_seq = cookie_v4_init_sequence,
#endif
.route_req = tcp_v4_route_req,
- .init_seq = tcp_v4_init_seq,
- .init_ts_off = tcp_v4_init_ts_off,
+ .init_seq_and_ts_off = tcp_v4_init_seq_and_ts_off,
.send_synack = tcp_v4_send_synack,
};
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index a6cefee15005..bdbd3fd20448 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -161,9 +161,14 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
tcp_parse_options(sock_net(sk), skb, &tcp_opt, 0, NULL);
if (tcp_opt.saw_tstamp && tcp_opt.rcv_tsecr) {
- tsoff = secure_tcpv6_ts_off(sock_net(sk),
- ipv6_hdr(skb)->daddr.s6_addr32,
- ipv6_hdr(skb)->saddr.s6_addr32);
+ union tcp_seq_and_ts_off st;
+
+ st = secure_tcpv6_seq_and_ts_off(sock_net(sk),
+ ipv6_hdr(skb)->daddr.s6_addr32,
+ ipv6_hdr(skb)->saddr.s6_addr32,
+ tcp_hdr(skb)->dest,
+ tcp_hdr(skb)->source);
+ tsoff = st.ts_off;
tcp_opt.rcv_tsecr -= tsoff;
}
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index f285e52b8b85..793e644fa750 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -107,18 +107,14 @@ static void inet6_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
}
}
-static u32 tcp_v6_init_seq(const struct sk_buff *skb)
+static union tcp_seq_and_ts_off
+tcp_v6_init_seq_and_ts_off(const struct net *net, const struct sk_buff *skb)
{
- return secure_tcpv6_seq(ipv6_hdr(skb)->daddr.s6_addr32,
- ipv6_hdr(skb)->saddr.s6_addr32,
- tcp_hdr(skb)->dest,
- tcp_hdr(skb)->source);
-}
-
-static u32 tcp_v6_init_ts_off(const struct net *net, const struct sk_buff *skb)
-{
- return secure_tcpv6_ts_off(net, ipv6_hdr(skb)->daddr.s6_addr32,
- ipv6_hdr(skb)->saddr.s6_addr32);
+ return secure_tcpv6_seq_and_ts_off(net,
+ ipv6_hdr(skb)->daddr.s6_addr32,
+ ipv6_hdr(skb)->saddr.s6_addr32,
+ tcp_hdr(skb)->dest,
+ tcp_hdr(skb)->source);
}
static int tcp_v6_pre_connect(struct sock *sk, struct sockaddr *uaddr,
@@ -318,14 +314,16 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
sk_set_txhash(sk);
if (likely(!tp->repair)) {
+ union tcp_seq_and_ts_off st;
+
+ st = secure_tcpv6_seq_and_ts_off(net,
+ np->saddr.s6_addr32,
+ sk->sk_v6_daddr.s6_addr32,
+ inet->inet_sport,
+ inet->inet_dport);
if (!tp->write_seq)
- WRITE_ONCE(tp->write_seq,
- secure_tcpv6_seq(np->saddr.s6_addr32,
- sk->sk_v6_daddr.s6_addr32,
- inet->inet_sport,
- inet->inet_dport));
- tp->tsoffset = secure_tcpv6_ts_off(net, np->saddr.s6_addr32,
- sk->sk_v6_daddr.s6_addr32);
+ WRITE_ONCE(tp->write_seq, st.seq);
+ tp->tsoffset = st.ts_off;
}
if (tcp_fastopen_defer_connect(sk, &err))
@@ -831,8 +829,7 @@ const struct tcp_request_sock_ops tcp_request_sock_ipv6_ops = {
.cookie_init_seq = cookie_v6_init_sequence,
#endif
.route_req = tcp_v6_route_req,
- .init_seq = tcp_v6_init_seq,
- .init_ts_off = tcp_v6_init_ts_off,
+ .init_seq_and_ts_off = tcp_v6_init_seq_and_ts_off,
.send_synack = tcp_v6_send_synack,
};
--
2.34.1
From: Paolo Abeni <pabeni(a)redhat.com>
stable inclusion
from stable-v6.6.124
commit 9d40a85138568696387ef04cd004c64612a70874
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13895
CVE: CVE-2026-23254
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 5c2c3c38be396257a6a2e55bd601a12bb9781507 ]
The UDP GRO complete stage assumes that all the packets inserted into
the RX path have the `encapsulation` flag zeroed. That assumption does
not hold: a few H/W NICs can set the flag when offloading the checksum
for UDP encapsulated traffic, the tun driver can inject GSO packets with
UDP encapsulation, and the problematic layout can also be created via
a veth based setup.
Due to the above, in the problematic scenarios, udp4_gro_complete() uses
the wrong network offset (inner instead of outer) to compute the outer
UDP header pseudo checksum, leading to csum validation errors later on
in packet processing.
Address the issue always clearing the encapsulation flag at GRO completion
time. Such flag will be set again as needed for encapsulated packets by
udp_gro_complete().
Fixes: 5ef31ea5d053 ("net: gro: fix udp bad offset in socket lookup by adding {inner_}network_offset to napi_gro_cb")
Reviewed-by: Willem de Bruijn <willemb(a)google.com>
Signed-off-by: Paolo Abeni <pabeni(a)redhat.com>
Reviewed-by: Eric Dumazet <edumazet(a)google.com>
Link: https://patch.msgid.link/562638dbebb3b15424220e26a180274b387e2a88.177003208…
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Li Xiasong <lixiasong1(a)huawei.com>
---
net/core/gro.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/core/gro.c b/net/core/gro.c
index 2f72cce0b219..98f47ff5837b 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -242,6 +242,8 @@ static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb)
goto out;
}
+ /* NICs can feed encapsulated packets into GRO */
+ skb->encapsulation = 0;
rcu_read_lock();
list_for_each_entry_rcu(ptype, head, list) {
if (ptype->type != type || !ptype->callbacks.gro_complete)
--
2.34.1
[PATCH OLK-6.6] net: annotate data-races around sk->sk_{data_ready,write_space}
by Li Xiasong 02 Apr '26
From: Eric Dumazet <edumazet(a)google.com>
mainline inclusion
from mainline-v7.0-rc3
commit 2ef2b20cf4e04ac8a6ba68493f8780776ff84300
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13979
CVE: CVE-2026-23302
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
skmsg (and probably other layers) are changing these pointers
while other cpus might read them concurrently.
Add corresponding READ_ONCE()/WRITE_ONCE() annotations
for UDP, TCP and AF_UNIX.
Fixes: 604326b41a6f ("bpf, sockmap: convert to generic sk_msg interface")
Reported-by: syzbot+87f770387a9e5dc6b79b(a)syzkaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/699ee9fc.050a0220.1cd54b.0009.GAE@google.com/
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Cc: Daniel Borkmann <daniel(a)iogearbox.net>
Cc: John Fastabend <john.fastabend(a)gmail.com>
Cc: Jakub Sitnicki <jakub(a)cloudflare.com>
Cc: Willem de Bruijn <willemdebruijn.kernel(a)gmail.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu(a)google.com>
Link: https://patch.msgid.link/20260225131547.1085509-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Conflicts:
net/ipv4/udp.c
net/ipv4/tcp.c
net/ipv4/tcp_input.c
net/ipv4/tcp_minisocks.c
net/unix/af_unix.c
[conflicts due to not merging
bd61848900bf ("net: devmem: Implement TX path") and
95b9a87c6a6b ("tcp: record last received ipv6 flowlabel") and
f0db2bca0cf9 ("tcp: rework {__,}tcp_ecn_check_ce() -> tcp_data_ecn_check()") and
2bd99aef1b19 ("tcp: accept bare FIN packets under memory pressure") and
b98256959305 ("tcp: make the dropreason really work when calling tcp_rcv_state_process()") and
85cb0757d7e1 ("net: Convert proto_ops connect() callbacks to use sockaddr_unsized") and
f4e1fb04c123 ("af_unix: Use cached value for SOCK_STREAM in unix_inq_len().") and
b650bf0977d3 ("udp: remove busylock and add per NUMA queues").]
Signed-off-by: Li Xiasong <lixiasong1(a)huawei.com>
---
net/core/skmsg.c | 14 +++++++-------
net/ipv4/tcp.c | 4 ++--
net/ipv4/tcp_bpf.c | 2 +-
net/ipv4/tcp_input.c | 14 ++++++++------
net/ipv4/tcp_minisocks.c | 2 +-
net/ipv4/udp.c | 3 ++-
net/ipv4/udp_bpf.c | 2 +-
net/unix/af_unix.c | 8 ++++----
8 files changed, 26 insertions(+), 23 deletions(-)
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 3429c6e2ce21..e3696d510eec 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1174,8 +1174,8 @@ void sk_psock_start_strp(struct sock *sk, struct sk_psock *psock)
return;
psock->saved_data_ready = sk->sk_data_ready;
- sk->sk_data_ready = sk_psock_strp_data_ready;
- sk->sk_write_space = sk_psock_write_space;
+ WRITE_ONCE(sk->sk_data_ready, sk_psock_strp_data_ready);
+ WRITE_ONCE(sk->sk_write_space, sk_psock_write_space);
}
void sk_psock_stop_strp(struct sock *sk, struct sk_psock *psock)
@@ -1185,8 +1185,8 @@ void sk_psock_stop_strp(struct sock *sk, struct sk_psock *psock)
if (!psock->saved_data_ready)
return;
- sk->sk_data_ready = psock->saved_data_ready;
- psock->saved_data_ready = NULL;
+ WRITE_ONCE(sk->sk_data_ready, psock->saved_data_ready);
+ WRITE_ONCE(psock->saved_data_ready, NULL);
strp_stop(&psock->strp);
}
@@ -1265,8 +1265,8 @@ void sk_psock_start_verdict(struct sock *sk, struct sk_psock *psock)
return;
psock->saved_data_ready = sk->sk_data_ready;
- sk->sk_data_ready = sk_psock_verdict_data_ready;
- sk->sk_write_space = sk_psock_write_space;
+ WRITE_ONCE(sk->sk_data_ready, sk_psock_verdict_data_ready);
+ WRITE_ONCE(sk->sk_write_space, sk_psock_write_space);
}
void sk_psock_stop_verdict(struct sock *sk, struct sk_psock *psock)
@@ -1277,6 +1277,6 @@ void sk_psock_stop_verdict(struct sock *sk, struct sk_psock *psock)
if (!psock->saved_data_ready)
return;
- sk->sk_data_ready = psock->saved_data_ready;
+ WRITE_ONCE(sk->sk_data_ready, psock->saved_data_ready);
psock->saved_data_ready = NULL;
}
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index bf23aa827031..1fe89e742f75 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1342,7 +1342,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
err = sk_stream_error(sk, flags, err);
/* make sure we wake any epoll edge trigger waiter */
if (unlikely(tcp_rtx_and_write_queues_empty(sk) && err == -EAGAIN)) {
- sk->sk_write_space(sk);
+ READ_ONCE(sk->sk_write_space)(sk);
tcp_chrono_stop(sk, TCP_CHRONO_SNDBUF_LIMITED);
}
return err;
@@ -3705,7 +3705,7 @@ int do_tcp_setsockopt(struct sock *sk, int level, int optname,
break;
case TCP_NOTSENT_LOWAT:
WRITE_ONCE(tp->notsent_lowat, val);
- sk->sk_write_space(sk);
+ READ_ONCE(sk->sk_write_space)(sk);
break;
case TCP_INQ:
if (val > 1 || val < 0)
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 0e56f3baf98a..9ceb6e0c8b00 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -668,7 +668,7 @@ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
WRITE_ONCE(sk->sk_prot->unhash, psock->saved_unhash);
tcp_update_ulp(sk, psock->sk_proto, psock->saved_write_space);
} else {
- sk->sk_write_space = psock->saved_write_space;
+ WRITE_ONCE(sk->sk_write_space, psock->saved_write_space);
/* Pairs with lockless read in sk_clone_lock() */
sock_replace_proto(sk, psock->sk_proto);
}
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 8d6757ec06aa..a10e7da1a612 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4985,7 +4985,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
- sk->sk_data_ready(sk);
+ READ_ONCE(sk->sk_data_ready)(sk);
tcp_drop_reason(sk, skb, SKB_DROP_REASON_PROTO_MEM);
return;
}
@@ -5192,7 +5192,7 @@ int tcp_send_rcvq(struct sock *sk, struct msghdr *msg, size_t size)
void tcp_data_ready(struct sock *sk)
{
if (tcp_epollin_ready(sk, sk->sk_rcvlowat) || sock_flag(sk, SOCK_DONE))
- sk->sk_data_ready(sk);
+ READ_ONCE(sk->sk_data_ready)(sk);
}
static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
@@ -5238,7 +5238,7 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
inet_csk(sk)->icsk_ack.pending |=
(ICSK_ACK_NOMEM | ICSK_ACK_NOW);
inet_csk_schedule_ack(sk);
- sk->sk_data_ready(sk);
+ READ_ONCE(sk->sk_data_ready)(sk);
if (skb_queue_len(&sk->sk_receive_queue)) {
reason = SKB_DROP_REASON_PROTO_MEM;
@@ -5675,7 +5675,9 @@ static void tcp_new_space(struct sock *sk)
tp->snd_cwnd_stamp = tcp_jiffies32;
}
- INDIRECT_CALL_1(sk->sk_write_space, sk_stream_write_space, sk);
+ INDIRECT_CALL_1(READ_ONCE(sk->sk_write_space),
+ sk_stream_write_space,
+ sk);
}
/* Caller made space either from:
@@ -5873,7 +5875,7 @@ static void tcp_urg(struct sock *sk, struct sk_buff *skb, const struct tcphdr *t
BUG();
WRITE_ONCE(tp->urg_data, TCP_URG_VALID | tmp);
if (!sock_flag(sk, SOCK_DEAD))
- sk->sk_data_ready(sk);
+ READ_ONCE(sk->sk_data_ready)(sk);
}
}
}
@@ -7279,7 +7281,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
sock_put(fastopen_sk);
goto drop_and_free;
}
- sk->sk_data_ready(sk);
+ READ_ONCE(sk->sk_data_ready)(sk);
bh_unlock_sock(fastopen_sk);
sock_put(fastopen_sk);
} else {
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 770b1ffb2e22..8d72532143bf 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -900,7 +900,7 @@ int tcp_child_process(struct sock *parent, struct sock *child,
ret = tcp_rcv_state_process(child, skb);
/* Wakeup parent, send SIGIO */
if (state == TCP_SYN_RECV && child->sk_state != state)
- parent->sk_data_ready(parent);
+ READ_ONCE(parent->sk_data_ready)(parent);
} else {
/* Alas, it is possible again, because we do lookup
* in main socket hash table and lock on listening
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 4c1f82f6777f..40100fed5c67 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1618,7 +1618,8 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
spin_unlock(&list->lock);
if (!sock_flag(sk, SOCK_DEAD))
- INDIRECT_CALL_1(sk->sk_data_ready, sock_def_readable, sk);
+ INDIRECT_CALL_1(READ_ONCE(sk->sk_data_ready),
+ sock_def_readable, sk);
busylock_release(busy);
return 0;
diff --git a/net/ipv4/udp_bpf.c b/net/ipv4/udp_bpf.c
index 0735d820e413..44271ba1adec 100644
--- a/net/ipv4/udp_bpf.c
+++ b/net/ipv4/udp_bpf.c
@@ -143,7 +143,7 @@ int udp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
int family = sk->sk_family == AF_INET ? UDP_BPF_IPV4 : UDP_BPF_IPV6;
if (restore) {
- sk->sk_write_space = psock->saved_write_space;
+ WRITE_ONCE(sk->sk_write_space, psock->saved_write_space);
sock_replace_proto(sk, psock->sk_proto);
return 0;
}
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index abf9fe35e620..51f444ecebf6 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1678,7 +1678,7 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
__skb_queue_tail(&other->sk_receive_queue, skb);
spin_unlock(&other->sk_receive_queue.lock);
unix_state_unlock(other);
- other->sk_data_ready(other);
+ READ_ONCE(other->sk_data_ready)(other);
sock_put(other);
return 0;
@@ -2138,7 +2138,7 @@ static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
scm_stat_add(other, skb);
skb_queue_tail(&other->sk_receive_queue, skb);
unix_state_unlock(other);
- other->sk_data_ready(other);
+ READ_ONCE(other->sk_data_ready)(other);
sock_put(other);
scm_destroy(&scm);
return len;
@@ -2206,7 +2206,7 @@ static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other
sk_send_sigurg(other);
unix_state_unlock(other);
- other->sk_data_ready(other);
+ READ_ONCE(other->sk_data_ready)(other);
return err;
}
@@ -2317,7 +2317,7 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
scm_stat_add(other, skb);
skb_queue_tail(&other->sk_receive_queue, skb);
unix_state_unlock(other);
- other->sk_data_ready(other);
+ READ_ONCE(other->sk_data_ready)(other);
sent += size;
}
--
2.34.1
From: Eric Dumazet <edumazet(a)google.com>
mainline inclusion
from mainline-v6.19
commit f613e8b4afea0cd17c7168e8b00e25bc8d33175d
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13893
CVE: CVE-2026-23255
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
Yin Fengwei reported an RCU stall in ptype_seq_show() and provided
a patch.
The real issue is that ptype_seq_next() and ptype_seq_show() violate
RCU rules.
ptype_seq_show() runs under rcu_read_lock(), and reads pt->dev
to get device name without any barrier.
At the same time, concurrent writers can remove a packet_type structure
(which is correctly freed after an RCU grace period) and clear pt->dev
without an RCU grace period.
Define ptype_iter_state to carry a dev pointer alongside seq_net_private:
struct ptype_iter_state {
struct seq_net_private p;
struct net_device *dev; // added in this patch
};
We need to record the device pointer in ptype_get_idx() and
ptype_seq_next() so that ptype_seq_show() is safe against
concurrent pt->dev changes.
We also need to add full RCU protection in ptype_seq_next().
(Missing READ_ONCE() when reading list.next values)
Many thanks to Dong Chenchen for providing a repro.
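The snapshot approach described above can be sketched in plain userspace C (a hypothetical miniature, not the kernel code; `minidev`, `miniptype`, and `miniiter` are invented names standing in for `net_device`, `packet_type`, and `ptype_iter_state`): the iterator records the device pointer once at traversal time, so the show step never dereferences `pt->dev` again after a concurrent writer may have cleared it.

```c
#include <assert.h>	/* for the usage check below */
#include <stddef.h>
#include <string.h>

/* Invented miniature of the pattern; not kernel API. */
struct minidev { const char *name; };
struct miniptype { struct minidev *dev; };

struct miniiter {
	struct minidev *dev;	/* snapshot taken while iterating */
};

/* next step: record the device that owns this entry */
static void miniiter_next(struct miniiter *it, struct miniptype *pt)
{
	it->dev = pt->dev;	/* single read of the shared pointer */
}

/* show step: consult only the snapshot, never pt->dev again */
static const char *miniiter_show(const struct miniiter *it)
{
	return it->dev ? it->dev->name : "";
}
```

Even if a writer clears `pt->dev` between the two steps, the show step still produces a consistent name for the duration of the read side, which is what caching `iter->dev` buys the real code.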
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Fixes: 1d10f8a1f40b ("net-procfs: show net devices bound packet types")
Fixes: c353e8983e0d ("net: introduce per netns packet chains")
Reported-by: Yin Fengwei <fengwei_yin(a)linux.alibaba.com>
Reported-by: Dong Chenchen <dongchenchen2(a)huawei.com>
Closes: https://lore.kernel.org/netdev/CANn89iKRRKPnWjJmb-_3a=sq+9h6DvTQM4DBZHT5ZRG…
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Reviewed-by: Willem de Bruijn <willemb(a)google.com>
Tested-by: Yin Fengwei <fengwei_yin(a)linux.alibaba.com>
Link: https://patch.msgid.link/20260202205217.2881198-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Conflicts:
net/core/net-procfs.c
[conflicts due to not having merged c353e8983e0d ("net: introduce per netns packet chains").]
Signed-off-by: Li Xiasong <lixiasong1(a)huawei.com>
---
net/core/net-procfs.c | 48 +++++++++++++++++++++++++++++--------------
1 file changed, 33 insertions(+), 15 deletions(-)
diff --git a/net/core/net-procfs.c b/net/core/net-procfs.c
index 09f7ed1a04e8..caad20464a4d 100644
--- a/net/core/net-procfs.c
+++ b/net/core/net-procfs.c
@@ -200,8 +200,14 @@ static const struct seq_operations softnet_seq_ops = {
.show = softnet_seq_show,
};
+struct ptype_iter_state {
+ struct seq_net_private p;
+ struct net_device *dev;
+};
+
static void *ptype_get_idx(struct seq_file *seq, loff_t pos)
{
+ struct ptype_iter_state *iter = seq->private;
struct list_head *ptype_list = NULL;
struct packet_type *pt = NULL;
struct net_device *dev;
@@ -211,12 +217,16 @@ static void *ptype_get_idx(struct seq_file *seq, loff_t pos)
for_each_netdev_rcu(seq_file_net(seq), dev) {
ptype_list = &dev->ptype_all;
list_for_each_entry_rcu(pt, ptype_list, list) {
- if (i == pos)
+ if (i == pos) {
+ iter->dev = dev;
return pt;
+ }
++i;
}
}
+ iter->dev = NULL;
+
list_for_each_entry_rcu(pt, &ptype_all, list) {
if (i == pos)
return pt;
@@ -242,6 +252,7 @@ static void *ptype_seq_start(struct seq_file *seq, loff_t *pos)
static void *ptype_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
+ struct ptype_iter_state *iter = seq->private;
struct net_device *dev;
struct packet_type *pt;
struct list_head *nxt;
@@ -252,20 +263,22 @@ static void *ptype_seq_next(struct seq_file *seq, void *v, loff_t *pos)
return ptype_get_idx(seq, 0);
pt = v;
- nxt = pt->list.next;
- if (pt->dev) {
- if (nxt != &pt->dev->ptype_all)
+ nxt = READ_ONCE(pt->list.next);
+ dev = iter->dev;
+ if (dev) {
+ if (nxt != &dev->ptype_all)
goto found;
- dev = pt->dev;
for_each_netdev_continue_rcu(seq_file_net(seq), dev) {
- if (!list_empty(&dev->ptype_all)) {
- nxt = dev->ptype_all.next;
+ nxt = READ_ONCE(dev->ptype_all.next);
+ if (nxt != &dev->ptype_all) {
+ iter->dev = dev;
goto found;
}
}
- nxt = ptype_all.next;
+ iter->dev = NULL;
+ nxt = READ_ONCE(ptype_all.next);
goto ptype_all;
}
@@ -274,14 +287,14 @@ static void *ptype_seq_next(struct seq_file *seq, void *v, loff_t *pos)
if (nxt != &ptype_all)
goto found;
hash = 0;
- nxt = ptype_base[0].next;
+ nxt = READ_ONCE(ptype_base[0].next);
} else
hash = ntohs(pt->type) & PTYPE_HASH_MASK;
while (nxt == &ptype_base[hash]) {
if (++hash >= PTYPE_HASH_SIZE)
return NULL;
- nxt = ptype_base[hash].next;
+ nxt = READ_ONCE(ptype_base[hash].next);
}
found:
return list_entry(nxt, struct packet_type, list);
@@ -295,19 +308,24 @@ static void ptype_seq_stop(struct seq_file *seq, void *v)
static int ptype_seq_show(struct seq_file *seq, void *v)
{
+ struct ptype_iter_state *iter = seq->private;
struct packet_type *pt = v;
+ struct net_device *dev;
- if (v == SEQ_START_TOKEN)
+ if (v == SEQ_START_TOKEN) {
seq_puts(seq, "Type Device Function\n");
- else if ((!pt->af_packet_net || net_eq(pt->af_packet_net, seq_file_net(seq))) &&
- (!pt->dev || net_eq(dev_net(pt->dev), seq_file_net(seq)))) {
+ return 0;
+ }
+ dev = iter->dev;
+ if ((!pt->af_packet_net || net_eq(pt->af_packet_net, seq_file_net(seq))) &&
+ (!dev || net_eq(dev_net(dev), seq_file_net(seq)))) {
if (pt->type == htons(ETH_P_ALL))
seq_puts(seq, "ALL ");
else
seq_printf(seq, "%04x", ntohs(pt->type));
seq_printf(seq, " %-8s %ps\n",
- pt->dev ? pt->dev->name : "", pt->func);
+ dev ? dev->name : "", pt->func);
}
return 0;
@@ -331,7 +349,7 @@ static int __net_init dev_proc_net_init(struct net *net)
&softnet_seq_ops))
goto out_dev;
if (!proc_create_net("ptype", 0444, net->proc_net, &ptype_seq_ops,
- sizeof(struct seq_net_private)))
+ sizeof(struct ptype_iter_state)))
goto out_softnet;
if (wext_proc_init(net))
--
2.34.1
From: Kuniyuki Iwashima <kuniyu(a)google.com>
mainline inclusion
from mainline-v7.0-rc5
commit e5b31d988a41549037b8d8721a3c3cae893d8670
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14025
CVE: CVE-2026-23394
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
Igor Ushakov reported that GC purged the receive queue of
an alive socket due to a race with MSG_PEEK with a nice repro.
This is the exact same issue previously fixed by commit
cbcf01128d0a ("af_unix: fix garbage collect vs MSG_PEEK").
After GC was replaced with the current algorithm, the cited
commit removed the locking dance in unix_peek_fds() and
reintroduced the same issue.
The problem is that MSG_PEEK bumps a file refcount without
interacting with GC.
Consider an SCC containing sk-A and sk-B, where sk-A is
close()d but can be recv()ed via sk-B.
The bad thing happens if sk-A is recv()ed with MSG_PEEK from
sk-B and sk-B is close()d while GC is checking unix_vertex_dead()
for sk-A and sk-B.
GC thread User thread
--------- -----------
unix_vertex_dead(sk-A)
-> true <------.
\
`------ recv(sk-B, MSG_PEEK)
invalidate !! -> sk-A's file refcount : 1 -> 2
close(sk-B)
-> sk-B's file refcount : 2 -> 1
unix_vertex_dead(sk-B)
-> true
Initially, sk-A's file refcount is 1 by the inflight fd in sk-B
recvq. GC thinks sk-A is dead because the file refcount is the
same as the number of its inflight fds.
However, sk-A's file refcount is bumped silently by MSG_PEEK,
which invalidates the previous evaluation.
At this moment, sk-B's file refcount is 2; one by the open fd,
and one by the inflight fd in sk-A. The subsequent close()
releases one refcount by the former.
Finally, GC incorrectly concludes that both sk-A and sk-B are dead.
One option is to restore the locking dance in unix_peek_fds(),
but we can resolve this more elegantly thanks to the new algorithm.
The point is that the issue does not occur without the subsequent
close() and we actually do not need to synchronise MSG_PEEK with
the dead SCC detection.
When the issue occurs, close() and GC touch the same file refcount.
If GC sees the refcount being decremented by close(), it can just
give up garbage-collecting the SCC.
Therefore, we only need to signal the race during MSG_PEEK with
a proper memory barrier to make it visible to the GC.
Let's use seqcount_t to notify GC when MSG_PEEK occurs and let
it defer the SCC to the next run.
This way no locking is needed on the MSG_PEEK side, and we can
avoid imposing a penalty on every MSG_PEEK unnecessarily.
Note that we can retry within unix_scc_dead() if MSG_PEEK is
detected, but we do not do so to avoid hung task splat from
abusive MSG_PEEK calls.
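The seqcount signalling above can be approximated with C11 atomics (a userspace sketch, not the kernel implementation; `peek_seq`, `signal_peek()`, and the `scc_check_*` helpers are invented stand-ins for `unix_peek_seq`, `raw_write_seqcount_barrier()`, and `read_seqcount_begin()`/`read_seqcount_retry()`):

```c
#include <assert.h>	/* for the usage check below */
#include <stdatomic.h>
#include <stdbool.h>

/* Invented stand-in for the kernel's unix_peek_seq seqcount. */
static atomic_uint peek_seq;

/* MSG_PEEK side: bump the counter so a concurrent GC pass notices. */
static void signal_peek(void)
{
	atomic_fetch_add_explicit(&peek_seq, 1, memory_order_release);
}

/* GC side: sample before evaluating an SCC... */
static unsigned int scc_check_begin(void)
{
	return atomic_load_explicit(&peek_seq, memory_order_acquire);
}

/* ...and give up (defer to the next run) if a peek intervened. */
static bool scc_check_retry(unsigned int seq)
{
	return atomic_load_explicit(&peek_seq, memory_order_acquire) != seq;
}
```

When `scc_check_retry()` fires, this sketch's GC would simply treat the SCC as not dead and move on, mirroring the "defer to the next round" choice above rather than retrying, which abusive MSG_PEEK calls could starve.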
Fixes: 118f457da9ed ("af_unix: Remove lock dance in unix_peek_fds().")
Reported-by: Igor Ushakov <sysroot314(a)gmail.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu(a)google.com>
Link: https://patch.msgid.link/20260311054043.1231316-1-kuniyu@google.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Conflicts:
net/unix/af_unix.h
include/net/af_unix.h
net/unix/garbage.c
[conflicts due to not having merged 58b47c713711 ("af_unix: Count cyclic SCC."),
da8fc7a39be8 ("af_unix: Don't trigger GC from close() if unnecessary.") and
84960bf24031 ("af_unix: Move internal definitions to net/unix/.").]
Signed-off-by: Li Xiasong <lixiasong1(a)huawei.com>
---
include/net/af_unix.h | 1 +
net/unix/af_unix.c | 2 ++
net/unix/garbage.c | 79 ++++++++++++++++++++++++++++---------------
3 files changed, 54 insertions(+), 28 deletions(-)
diff --git a/include/net/af_unix.h b/include/net/af_unix.h
index b6eedf7650da..d2fa6d9f1e97 100644
--- a/include/net/af_unix.h
+++ b/include/net/af_unix.h
@@ -23,6 +23,7 @@ void unix_del_edges(struct scm_fp_list *fpl);
void unix_update_edges(struct unix_sock *receiver);
int unix_prepare_fpl(struct scm_fp_list *fpl);
void unix_destroy_fpl(struct scm_fp_list *fpl);
+void unix_peek_fpl(struct scm_fp_list *fpl);
void unix_gc(void);
void wait_for_unix_gc(struct scm_fp_list *fpl);
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index abf9fe35e620..4407d4fedb02 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1848,6 +1848,8 @@ static void unix_detach_fds(struct scm_cookie *scm, struct sk_buff *skb)
static void unix_peek_fds(struct scm_cookie *scm, struct sk_buff *skb)
{
scm->fp = scm_fp_dup(UNIXCB(skb).fp);
+
+ unix_peek_fpl(scm->fp);
}
static void unix_destruct_scm(struct sk_buff *skb)
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 66fd606c43f4..1cdb54c61619 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -306,6 +306,25 @@ void unix_destroy_fpl(struct scm_fp_list *fpl)
unix_free_vertices(fpl);
}
+static bool gc_in_progress;
+static seqcount_t unix_peek_seq = SEQCNT_ZERO(unix_peek_seq);
+
+void unix_peek_fpl(struct scm_fp_list *fpl)
+{
+ static DEFINE_SPINLOCK(unix_peek_lock);
+
+ if (!fpl || !fpl->count_unix)
+ return;
+
+ if (!READ_ONCE(gc_in_progress))
+ return;
+
+ /* Invalidate the final refcnt check in unix_vertex_dead(). */
+ spin_lock(&unix_peek_lock);
+ raw_write_seqcount_barrier(&unix_peek_seq);
+ spin_unlock(&unix_peek_lock);
+}
+
static bool unix_vertex_dead(struct unix_vertex *vertex)
{
struct unix_edge *edge;
@@ -339,6 +358,36 @@ static bool unix_vertex_dead(struct unix_vertex *vertex)
return true;
}
+static LIST_HEAD(unix_visited_vertices);
+static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
+
+static bool unix_scc_dead(struct list_head *scc, bool fast)
+{
+ struct unix_vertex *vertex;
+ bool scc_dead = true;
+ unsigned int seq;
+
+ seq = read_seqcount_begin(&unix_peek_seq);
+
+ list_for_each_entry_reverse(vertex, scc, scc_entry) {
+ /* Don't restart DFS from this vertex. */
+ list_move_tail(&vertex->entry, &unix_visited_vertices);
+
+ /* Mark vertex as off-stack for __unix_walk_scc(). */
+ if (!fast)
+ vertex->index = unix_vertex_grouped_index;
+
+ if (scc_dead)
+ scc_dead = unix_vertex_dead(vertex);
+ }
+
+ /* If MSG_PEEK intervened, defer this SCC to the next round. */
+ if (read_seqcount_retry(&unix_peek_seq, seq))
+ return false;
+
+ return scc_dead;
+}
+
static void unix_collect_skb(struct list_head *scc, struct sk_buff_head *hitlist)
{
struct unix_vertex *vertex;
@@ -392,9 +441,6 @@ static bool unix_scc_cyclic(struct list_head *scc)
return false;
}
-static LIST_HEAD(unix_visited_vertices);
-static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2;
-
static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_index,
struct sk_buff_head *hitlist)
{
@@ -460,9 +506,7 @@ static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_inde
}
if (vertex->index == vertex->scc_index) {
- struct unix_vertex *v;
struct list_head scc;
- bool scc_dead = true;
/* SCC finalised.
*
@@ -471,18 +515,7 @@ static void __unix_walk_scc(struct unix_vertex *vertex, unsigned long *last_inde
*/
__list_cut_position(&scc, &vertex_stack, &vertex->scc_entry);
- list_for_each_entry_reverse(v, &scc, scc_entry) {
- /* Don't restart DFS from this vertex in unix_walk_scc(). */
- list_move_tail(&v->entry, &unix_visited_vertices);
-
- /* Mark vertex as off-stack. */
- v->index = unix_vertex_grouped_index;
-
- if (scc_dead)
- scc_dead = unix_vertex_dead(v);
- }
-
- if (scc_dead) {
+ if (unix_scc_dead(&scc, false)) {
unix_collect_skb(&scc, hitlist);
} else {
if (unix_vertex_max_scc_index < vertex->scc_index)
@@ -530,19 +563,11 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
while (!list_empty(&unix_unvisited_vertices)) {
struct unix_vertex *vertex;
struct list_head scc;
- bool scc_dead = true;
vertex = list_first_entry(&unix_unvisited_vertices, typeof(*vertex), entry);
list_add(&scc, &vertex->scc_entry);
- list_for_each_entry_reverse(vertex, &scc, scc_entry) {
- list_move_tail(&vertex->entry, &unix_visited_vertices);
-
- if (scc_dead)
- scc_dead = unix_vertex_dead(vertex);
- }
-
- if (scc_dead)
+ if (unix_scc_dead(&scc, true))
unix_collect_skb(&scc, hitlist);
else if (!unix_graph_maybe_cyclic)
unix_graph_maybe_cyclic = unix_scc_cyclic(&scc);
@@ -553,8 +578,6 @@ static void unix_walk_scc_fast(struct sk_buff_head *hitlist)
list_replace_init(&unix_visited_vertices, &unix_unvisited_vertices);
}
-static bool gc_in_progress;
-
static void __unix_gc(struct work_struct *work)
{
struct sk_buff_head hitlist;
--
2.34.1
[PATCH OLK-6.6 0/2] ip6_tunnel: fix skb_vlan_inet_prepare() return value handling regression
by Li Xiasong 02 Apr '26
This patchset contains a backport of the upstream change that introduced
the issue, followed by the fix.
Patch 1 is a backport of upstream commit f478b8239d65 ("net: tunnel:
make skb_vlan_inet_prepare() return drop reasons") which changed the
return value semantics of skb_vlan_inet_prepare().
Patch 2 adapts the return value handling in __ip6_tnl_rcv() to match
the new semantics, fixing the regression.
Including the upstream change as patch 1 ensures that applying both
patches together does not introduce the issue that would occur if only
patch 1 were merged.
Li Xiasong (1):
ip6_tunnel: adapt to skb_vlan_inet_prepare() return value change
Menglong Dong (1):
net: tunnel: make skb_vlan_inet_prepare() return drop reasons
drivers/net/bareudp.c | 4 ++--
drivers/net/geneve.c | 4 ++--
include/net/ip_tunnels.h | 13 ++++++++-----
net/ipv6/ip6_tunnel.c | 2 +-
4 files changed, 13 insertions(+), 10 deletions(-)
--
2.34.1
[PATCH OLK-5.10] KVM: x86/mmu: Drop/zap existing present SPTE even when creating an MMIO SPTE
by Zhang Kunbo 02 Apr '26
From: Sean Christopherson <seanjc(a)google.com>
mainline inclusion
from mainline-v7.0-rc6
commit aad885e774966e97b675dfe928da164214a71605
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14045
CVE: CVE-2026-23401
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
When installing an emulated MMIO SPTE, do so *after* dropping/zapping the
existing SPTE (if it's shadow-present). While commit a54aa15c6bda3 was
right about it being impossible to convert a shadow-present SPTE to an
MMIO SPTE due to a _guest_ write, it failed to account for writes to guest
memory that are outside the scope of KVM.
E.g. if host userspace modifies a shadowed gPTE to switch from a memslot
to emulated MMIO and then the guest hits a relevant page fault, KVM will
install the MMIO SPTE without first zapping the shadow-present SPTE.
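The reordering can be illustrated with a toy model (all names invented; this is not the KVM code): any stale present SPTE is dropped first, and only then is the emulated MMIO SPTE installed, so a host-side gPTE change from memslot to MMIO cannot leave the old mapping behind.

```c
#include <assert.h>	/* for the usage check below */
#include <stdbool.h>

/* Toy SPTE encoding; illustrative only. */
typedef unsigned long long spte_t;
#define SPTE_PRESENT	0x1ULL
#define SPTE_MMIO	0x2ULL

static bool is_present(spte_t s)
{
	return s & SPTE_PRESENT;
}

/* Fixed ordering: zap a stale present SPTE before the MMIO early-out. */
static int toy_set_spte(spte_t *sptep, bool noslot_pfn)
{
	if (is_present(*sptep))
		*sptep = 0;		/* drop/zap the stale mapping */

	if (noslot_pfn) {
		*sptep = SPTE_MMIO;	/* emulated MMIO SPTE */
		return 1;		/* stands in for RET_PF_EMULATE */
	}

	*sptep = SPTE_PRESENT;		/* normal mapping */
	return 0;
}
```

The buggy ordering performed the `noslot_pfn` early-out before the `is_present()` check, which is precisely what let a shadow-present SPTE survive into `mark_mmio_spte()` and trip the warning below.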
------------[ cut here ]------------
is_shadow_present_pte(*sptep)
WARNING: arch/x86/kvm/mmu/mmu.c:484 at mark_mmio_spte+0xb2/0xc0 [kvm], CPU#0: vmx_ept_stale_r/4292
Modules linked in: kvm_intel kvm irqbypass
CPU: 0 UID: 1000 PID: 4292 Comm: vmx_ept_stale_r Not tainted 7.0.0-rc2-eafebd2d2ab0-sink-vm #319 PREEMPT
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:mark_mmio_spte+0xb2/0xc0 [kvm]
Call Trace:
<TASK>
mmu_set_spte+0x237/0x440 [kvm]
ept_page_fault+0x535/0x7f0 [kvm]
kvm_mmu_do_page_fault+0xee/0x1f0 [kvm]
kvm_mmu_page_fault+0x8d/0x620 [kvm]
vmx_handle_exit+0x18c/0x5a0 [kvm_intel]
kvm_arch_vcpu_ioctl_run+0xc55/0x1c20 [kvm]
kvm_vcpu_ioctl+0x2d5/0x980 [kvm]
__x64_sys_ioctl+0x8a/0xd0
do_syscall_64+0xb5/0x730
entry_SYSCALL_64_after_hwframe+0x4b/0x53
RIP: 0033:0x47fa3f
</TASK>
---[ end trace 0000000000000000 ]---
Reported-by: Alexander Bulekov <bkov(a)amazon.com>
Debugged-by: Alexander Bulekov <bkov(a)amazon.com>
Suggested-by: Fred Griffoul <fgriffo(a)amazon.co.uk>
Fixes: a54aa15c6bda3 ("KVM: x86/mmu: Handle MMIO SPTEs directly in mmu_set_spte()")
Cc: stable(a)vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc(a)google.com>
Conflicts:
arch/x86/kvm/mmu/mmu.c
[1075d41 not merged, and context conflicts]
Signed-off-by: Zhang Kunbo <zhangkunbo(a)huawei.com>
---
arch/x86/kvm/mmu/mmu.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0fee502a5f29..6ee54414295b 100755
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2820,11 +2820,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
*sptep, write_fault, gfn);
- if (unlikely(is_noslot_pfn(pfn))) {
- mark_mmio_spte(vcpu, sptep, gfn, pte_access);
- return RET_PF_EMULATE;
- }
-
if (is_shadow_present_pte(*sptep)) {
/*
* If we overwrite a PTE page pointer with a 2MB PMD, unlink
@@ -2846,6 +2841,13 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
was_rmapped = 1;
}
+ if (unlikely(is_noslot_pfn(pfn))) {
+ mark_mmio_spte(vcpu, sptep, gfn, pte_access);
+ if (flush)
+ kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
+ return RET_PF_EMULATE;
+ }
+
wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
true, host_writable, &spte);
--
2.34.1
[PATCH OLK-6.6] KVM: x86/mmu: Drop/zap existing present SPTE even when creating an MMIO SPTE
by Zhang Kunbo 02 Apr '26
From: Sean Christopherson <seanjc(a)google.com>
mainline inclusion
from mainline-v7.0-rc6
commit aad885e774966e97b675dfe928da164214a71605
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14045
CVE: CVE-2026-23401
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
When installing an emulated MMIO SPTE, do so *after* dropping/zapping the
existing SPTE (if it's shadow-present). While commit a54aa15c6bda3 was
right about it being impossible to convert a shadow-present SPTE to an
MMIO SPTE due to a _guest_ write, it failed to account for writes to guest
memory that are outside the scope of KVM.
E.g. if host userspace modifies a shadowed gPTE to switch from a memslot
to emulated MMIO and then the guest hits a relevant page fault, KVM will
install the MMIO SPTE without first zapping the shadow-present SPTE.
------------[ cut here ]------------
is_shadow_present_pte(*sptep)
WARNING: arch/x86/kvm/mmu/mmu.c:484 at mark_mmio_spte+0xb2/0xc0 [kvm], CPU#0: vmx_ept_stale_r/4292
Modules linked in: kvm_intel kvm irqbypass
CPU: 0 UID: 1000 PID: 4292 Comm: vmx_ept_stale_r Not tainted 7.0.0-rc2-eafebd2d2ab0-sink-vm #319 PREEMPT
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:mark_mmio_spte+0xb2/0xc0 [kvm]
Call Trace:
<TASK>
mmu_set_spte+0x237/0x440 [kvm]
ept_page_fault+0x535/0x7f0 [kvm]
kvm_mmu_do_page_fault+0xee/0x1f0 [kvm]
kvm_mmu_page_fault+0x8d/0x620 [kvm]
vmx_handle_exit+0x18c/0x5a0 [kvm_intel]
kvm_arch_vcpu_ioctl_run+0xc55/0x1c20 [kvm]
kvm_vcpu_ioctl+0x2d5/0x980 [kvm]
__x64_sys_ioctl+0x8a/0xd0
do_syscall_64+0xb5/0x730
entry_SYSCALL_64_after_hwframe+0x4b/0x53
RIP: 0033:0x47fa3f
</TASK>
---[ end trace 0000000000000000 ]---
Reported-by: Alexander Bulekov <bkov(a)amazon.com>
Debugged-by: Alexander Bulekov <bkov(a)amazon.com>
Suggested-by: Fred Griffoul <fgriffo(a)amazon.co.uk>
Fixes: a54aa15c6bda ("KVM: x86/mmu: Handle MMIO SPTEs directly in mmu_set_spte()")
Cc: stable(a)vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc(a)google.com>
Conflicts:
arch/x86/kvm/mmu/mmu.c
[Context conflicts.]
Signed-off-by: Zhang Kunbo <zhangkunbo(a)huawei.com>
---
arch/x86/kvm/mmu/mmu.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c54c8385b16d..2837f83b807d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2914,12 +2914,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
bool prefetch = !fault || fault->prefetch;
bool write_fault = fault && fault->write;
- if (unlikely(is_noslot_pfn(pfn))) {
- vcpu->stat.pf_mmio_spte_created++;
- mark_mmio_spte(vcpu, sptep, gfn, pte_access);
- return RET_PF_EMULATE;
- }
-
if (is_shadow_present_pte(*sptep)) {
/*
* If we overwrite a PTE page pointer with a 2MB PMD, unlink
@@ -2939,6 +2933,14 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
was_rmapped = 1;
}
+ if (unlikely(is_noslot_pfn(pfn))) {
+ vcpu->stat.pf_mmio_spte_created++;
+ mark_mmio_spte(vcpu, sptep, gfn, pte_access);
+ if (flush)
+ kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
+ return RET_PF_EMULATE;
+ }
+
wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
true, host_writable, &spte);
--
2.34.1
[PATCH OLK-6.6] KVM: x86/mmu: Drop/zap existing present SPTE even when creating an MMIO SPTE
by Zhang Kunbo 02 Apr '26
From: Sean Christopherson <seanjc(a)google.com>
stable inclusion
from stable-v7.0-rc6
commit aad885e774966e97b675dfe928da164214a71605
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14045
CVE: CVE-2026-23401
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
When installing an emulated MMIO SPTE, do so *after* dropping/zapping the
existing SPTE (if it's shadow-present). While commit a54aa15c6bda3 was
right about it being impossible to convert a shadow-present SPTE to an
MMIO SPTE due to a _guest_ write, it failed to account for writes to guest
memory that are outside the scope of KVM.
E.g. if host userspace modifies a shadowed gPTE to switch from a memslot
to emulated MMIO and then the guest hits a relevant page fault, KVM will
install the MMIO SPTE without first zapping the shadow-present SPTE.
------------[ cut here ]------------
is_shadow_present_pte(*sptep)
WARNING: arch/x86/kvm/mmu/mmu.c:484 at mark_mmio_spte+0xb2/0xc0 [kvm], CPU#0: vmx_ept_stale_r/4292
Modules linked in: kvm_intel kvm irqbypass
CPU: 0 UID: 1000 PID: 4292 Comm: vmx_ept_stale_r Not tainted 7.0.0-rc2-eafebd2d2ab0-sink-vm #319 PREEMPT
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:mark_mmio_spte+0xb2/0xc0 [kvm]
Call Trace:
<TASK>
mmu_set_spte+0x237/0x440 [kvm]
ept_page_fault+0x535/0x7f0 [kvm]
kvm_mmu_do_page_fault+0xee/0x1f0 [kvm]
kvm_mmu_page_fault+0x8d/0x620 [kvm]
vmx_handle_exit+0x18c/0x5a0 [kvm_intel]
kvm_arch_vcpu_ioctl_run+0xc55/0x1c20 [kvm]
kvm_vcpu_ioctl+0x2d5/0x980 [kvm]
__x64_sys_ioctl+0x8a/0xd0
do_syscall_64+0xb5/0x730
entry_SYSCALL_64_after_hwframe+0x4b/0x53
RIP: 0033:0x47fa3f
</TASK>
---[ end trace 0000000000000000 ]---
Reported-by: Alexander Bulekov <bkov(a)amazon.com>
Debugged-by: Alexander Bulekov <bkov(a)amazon.com>
Suggested-by: Fred Griffoul <fgriffo(a)amazon.co.uk>
Fixes: a54aa15c6bda ("KVM: x86/mmu: Handle MMIO SPTEs directly in mmu_set_spte()")
Cc: stable(a)vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc(a)google.com>
Conflicts:
arch/x86/kvm/mmu/mmu.c
[Context conflicts.]
Signed-off-by: Zhang Kunbo <zhangkunbo(a)huawei.com>
---
arch/x86/kvm/mmu/mmu.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c54c8385b16d..2837f83b807d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2914,12 +2914,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
bool prefetch = !fault || fault->prefetch;
bool write_fault = fault && fault->write;
- if (unlikely(is_noslot_pfn(pfn))) {
- vcpu->stat.pf_mmio_spte_created++;
- mark_mmio_spte(vcpu, sptep, gfn, pte_access);
- return RET_PF_EMULATE;
- }
-
if (is_shadow_present_pte(*sptep)) {
/*
* If we overwrite a PTE page pointer with a 2MB PMD, unlink
@@ -2939,6 +2933,14 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
was_rmapped = 1;
}
+ if (unlikely(is_noslot_pfn(pfn))) {
+ vcpu->stat.pf_mmio_spte_created++;
+ mark_mmio_spte(vcpu, sptep, gfn, pte_access);
+ if (flush)
+ kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
+ return RET_PF_EMULATE;
+ }
+
wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
true, host_writable, &spte);
--
2.34.1
[PATCH OLK-6.6] KVM: x86/mmu: Drop/zap existing present SPTE even when creating an MMIO SPTE
by Zhang Kunbo 02 Apr '26
From: Sean Christopherson <seanjc(a)google.com>
stable inclusion
from stable-v7.0-rc6
commit aad885e774966e97b675dfe928da164214a71605
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14045
CVE: CVE-2026-23401
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
When installing an emulated MMIO SPTE, do so *after* dropping/zapping the
existing SPTE (if it's shadow-present). While commit a54aa15c6bda3 was
right about it being impossible to convert a shadow-present SPTE to an
MMIO SPTE due to a _guest_ write, it failed to account for writes to guest
memory that are outside the scope of KVM.
E.g. if host userspace modifies a shadowed gPTE to switch from a memslot
to emulated MMIO and then the guest hits a relevant page fault, KVM will
install the MMIO SPTE without first zapping the shadow-present SPTE.
------------[ cut here ]------------
is_shadow_present_pte(*sptep)
WARNING: arch/x86/kvm/mmu/mmu.c:484 at mark_mmio_spte+0xb2/0xc0 [kvm], CPU#0: vmx_ept_stale_r/4292
Modules linked in: kvm_intel kvm irqbypass
CPU: 0 UID: 1000 PID: 4292 Comm: vmx_ept_stale_r Not tainted 7.0.0-rc2-eafebd2d2ab0-sink-vm #319 PREEMPT
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:mark_mmio_spte+0xb2/0xc0 [kvm]
Call Trace:
<TASK>
mmu_set_spte+0x237/0x440 [kvm]
ept_page_fault+0x535/0x7f0 [kvm]
kvm_mmu_do_page_fault+0xee/0x1f0 [kvm]
kvm_mmu_page_fault+0x8d/0x620 [kvm]
vmx_handle_exit+0x18c/0x5a0 [kvm_intel]
kvm_arch_vcpu_ioctl_run+0xc55/0x1c20 [kvm]
kvm_vcpu_ioctl+0x2d5/0x980 [kvm]
__x64_sys_ioctl+0x8a/0xd0
do_syscall_64+0xb5/0x730
entry_SYSCALL_64_after_hwframe+0x4b/0x53
RIP: 0033:0x47fa3f
</TASK>
---[ end trace 0000000000000000 ]---
Reported-by: Alexander Bulekov <bkov(a)amazon.com>
Debugged-by: Alexander Bulekov <bkov(a)amazon.com>
Suggested-by: Fred Griffoul <fgriffo(a)amazon.co.uk>
Fixes: a54aa15c6bda3 ("KVM: x86/mmu: Handle MMIO SPTEs directly in mmu_set_spte()")
Cc: stable(a)vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc(a)google.com>
Conflicts:
arch/x86/kvm/mmu/mmu.c
[Context conflicts.]
Signed-off-by: Zhang Kunbo <zhangkunbo(a)huawei.com>
---
arch/x86/kvm/mmu/mmu.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c54c8385b16d..2837f83b807d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2914,12 +2914,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
bool prefetch = !fault || fault->prefetch;
bool write_fault = fault && fault->write;
- if (unlikely(is_noslot_pfn(pfn))) {
- vcpu->stat.pf_mmio_spte_created++;
- mark_mmio_spte(vcpu, sptep, gfn, pte_access);
- return RET_PF_EMULATE;
- }
-
if (is_shadow_present_pte(*sptep)) {
/*
* If we overwrite a PTE page pointer with a 2MB PMD, unlink
@@ -2939,6 +2933,14 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
was_rmapped = 1;
}
+ if (unlikely(is_noslot_pfn(pfn))) {
+ vcpu->stat.pf_mmio_spte_created++;
+ mark_mmio_spte(vcpu, sptep, gfn, pte_access);
+ if (flush)
+ kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
+ return RET_PF_EMULATE;
+ }
+
wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
true, host_writable, &spte);
--
2.34.1
[PATCH OLK-5.10] KVM: x86/mmu: Drop/zap existing present SPTE even when creating an MMIO SPTE
by Zhang Kunbo 02 Apr '26
From: Sean Christopherson <seanjc(a)google.com>
stable inclusion
from stable-v6.6.130
commit 3990f352bb0adc8688d0949a9c13e3110570eb61
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13978
CVE: CVE-2026-23303
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
When installing an emulated MMIO SPTE, do so *after* dropping/zapping the
existing SPTE (if it's shadow-present). While commit a54aa15c6bda3 was
right about it being impossible to convert a shadow-present SPTE to an
MMIO SPTE due to a _guest_ write, it failed to account for writes to guest
memory that are outside the scope of KVM.
E.g. if host userspace modifies a shadowed gPTE to switch from a memslot
to emulated MMIO and then the guest hits a relevant page fault, KVM will
install the MMIO SPTE without first zapping the shadow-present SPTE.
------------[ cut here ]------------
is_shadow_present_pte(*sptep)
WARNING: arch/x86/kvm/mmu/mmu.c:484 at mark_mmio_spte+0xb2/0xc0 [kvm], CPU#0: vmx_ept_stale_r/4292
Modules linked in: kvm_intel kvm irqbypass
CPU: 0 UID: 1000 PID: 4292 Comm: vmx_ept_stale_r Not tainted 7.0.0-rc2-eafebd2d2ab0-sink-vm #319 PREEMPT
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:mark_mmio_spte+0xb2/0xc0 [kvm]
Call Trace:
<TASK>
mmu_set_spte+0x237/0x440 [kvm]
ept_page_fault+0x535/0x7f0 [kvm]
kvm_mmu_do_page_fault+0xee/0x1f0 [kvm]
kvm_mmu_page_fault+0x8d/0x620 [kvm]
vmx_handle_exit+0x18c/0x5a0 [kvm_intel]
kvm_arch_vcpu_ioctl_run+0xc55/0x1c20 [kvm]
kvm_vcpu_ioctl+0x2d5/0x980 [kvm]
__x64_sys_ioctl+0x8a/0xd0
do_syscall_64+0xb5/0x730
entry_SYSCALL_64_after_hwframe+0x4b/0x53
RIP: 0033:0x47fa3f
</TASK>
---[ end trace 0000000000000000 ]---
Reported-by: Alexander Bulekov <bkov(a)amazon.com>
Debugged-by: Alexander Bulekov <bkov(a)amazon.com>
Suggested-by: Fred Griffoul <fgriffo(a)amazon.co.uk>
Fixes: a54aa15c6bda3 ("KVM: x86/mmu: Handle MMIO SPTEs directly in mmu_set_spte()")
Cc: stable(a)vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc(a)google.com>
Conflicts:
arch/x86/kvm/mmu/mmu.c
[1075d41 not merged, and context conflicts]
Signed-off-by: Zhang Kunbo <zhangkunbo(a)huawei.com>
---
arch/x86/kvm/mmu/mmu.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0fee502a5f29..8d6f331d3a8f 100755
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2820,11 +2820,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
*sptep, write_fault, gfn);
- if (unlikely(is_noslot_pfn(pfn))) {
- mark_mmio_spte(vcpu, sptep, gfn, pte_access);
- return RET_PF_EMULATE;
- }
-
if (is_shadow_present_pte(*sptep)) {
/*
* If we overwrite a PTE page pointer with a 2MB PMD, unlink
@@ -2846,6 +2841,14 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
was_rmapped = 1;
}
+ if (unlikely(is_noslot_pfn(pfn))) {
+ vcpu->stat.pf_mmio_spte_created++;
+ mark_mmio_spte(vcpu, sptep, gfn, pte_access);
+ if (flush)
+ kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
+ return RET_PF_EMULATE;
+ }
+
wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
true, host_writable, &spte);
--
2.34.1
From: Zilin Guan <zilin(a)seu.edu.cn>
mainline inclusion
from mainline-v7.0-rc3
commit fe868b499d16f55bbeea89992edb98043c9de416
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14020/
CVE: CVE-2026-23389
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
In ice_set_ringparam, tx_rings and xdp_rings are allocated before
rx_rings. If the allocation of rx_rings fails, the code jumps to
the done label, leaking both tx_rings and xdp_rings. Furthermore, if
the setup of an individual Rx ring fails during the loop, the code jumps
to the free_tx label which releases tx_rings but leaks xdp_rings.
Fix this by introducing a free_xdp label and updating the error paths to
ensure both xdp_rings and tx_rings are properly freed if rx_rings
allocation or setup fails.
Compile tested only. Issue found using a prototype static analysis tool
and code review.
Fixes: fcea6f3da546 ("ice: Add stats and ethtool support")
Fixes: efc2214b6047 ("ice: Add support for XDP")
Signed-off-by: Zilin Guan <zilin(a)seu.edu.cn>
Reviewed-by: Paul Menzel <pmenzel(a)molgen.mpg.de>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov(a)intel.com>
Tested-by: Rinitha S <sx.rinitha(a)intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen(a)intel.com>
Conflicts:
drivers/net/ethernet/intel/ice/ice_ethtool.c
[ Context conflicts due to different memory allocation function used.
Also, ice_for_each_xdp_txq is introduced in unmerged commit
2faf63b650bb ("ice: make use of ice_for_each_* macros"), use normal
for loops here instead. ]
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
drivers/net/ethernet/intel/ice/ice_ethtool.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 9659668279dc..0d6965b5deb4 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -2865,7 +2865,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
rx_rings = kcalloc(vsi->num_rxq, sizeof(*rx_rings), GFP_KERNEL);
if (!rx_rings) {
err = -ENOMEM;
- goto done;
+ goto free_xdp;
}
ice_for_each_rxq(vsi, i) {
@@ -2894,7 +2894,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
}
kfree(rx_rings);
err = -ENOMEM;
- goto free_tx;
+ goto free_xdp;
}
}
@@ -2945,6 +2945,14 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
}
goto done;
+free_xdp:
+ if (xdp_rings) {
+ for (i = 0; i < vsi->num_xdp_txq; i++) {
+ ice_free_tx_ring(vsi->xdp_rings[i]);
+ }
+ kfree(xdp_rings);
+ }
+
free_tx:
/* error cleanup if the Rx allocations failed after getting Tx */
if (tx_rings) {
--
2.34.1
From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
mainline inclusion
from mainline-v7.0-rc2
commit c58b6c29a4c9b8125e8ad3bca0637e00b71e2693
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13916
CVE: CVE-2026-23365
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The kalmia driver should validate that the device it is probing has the
proper number and types of USB endpoints it is expecting before it binds
to it. If a malicious device were to not have the same urbs the driver
will crash later on when it blindly accesses these endpoints.
Cc: stable <stable(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Reviewed-by: Simon Horman <horms(a)kernel.org>
Fixes: d40261236e8e ("net/usb: Add Samsung Kalmia driver for Samsung GT-B3730")
Link: https://patch.msgid.link/2026022326-shack-headstone-ef6f@gregkh
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Qi Xi <xiqi2(a)huawei.com>
---
drivers/net/usb/kalmia.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/usb/kalmia.c b/drivers/net/usb/kalmia.c
index 613fc6910f14..ee9c48f7f68f 100644
--- a/drivers/net/usb/kalmia.c
+++ b/drivers/net/usb/kalmia.c
@@ -132,11 +132,18 @@ kalmia_bind(struct usbnet *dev, struct usb_interface *intf)
{
int status;
u8 ethernet_addr[ETH_ALEN];
+ static const u8 ep_addr[] = {
+ 1 | USB_DIR_IN,
+ 2 | USB_DIR_OUT,
+ 0};
/* Don't bind to AT command interface */
if (intf->cur_altsetting->desc.bInterfaceClass != USB_CLASS_VENDOR_SPEC)
return -EINVAL;
+ if (!usb_check_bulk_endpoints(intf, ep_addr))
+ return -ENODEV;
+
dev->in = usb_rcvbulkpipe(dev->udev, 0x81 & USB_ENDPOINT_NUMBER_MASK);
dev->out = usb_sndbulkpipe(dev->udev, 0x02 & USB_ENDPOINT_NUMBER_MASK);
dev->status = NULL;
--
2.33.0
Kuniyuki Iwashima (2):
tcp: Clear tcp_sk(sk)->fastopen_rsk in tcp_disconnect().
tcp: Don't call reqsk_fastopen_remove() in tcp_conn_request().
net/ipv4/tcp.c | 5 +++++
net/ipv4/tcp_input.c | 1 -
2 files changed, 5 insertions(+), 1 deletion(-)
--
2.9.5
From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
mainline inclusion
from mainline-v7.0-rc2
commit 11de1d3ae5565ed22ef1f89d73d8f2d00322c699
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13993
CVE: CVE-2026-23290
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The pegasus driver should validate that the device it is probing has the
proper number and types of USB endpoints it is expecting before it binds
to it. If a malicious device were to not have the same urbs the driver
will crash later on when it blindly accesses these endpoints.
Cc: Petko Manolov <petkan(a)nucleusys.com>
Cc: stable <stable(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Link: https://patch.msgid.link/2026022347-legibly-attest-cc5c@gregkh
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Conflicts:
drivers/net/usb/pegasus.c
[Context conflicts due to unmerged commit 23a64c514631 ("net: usb: pegasus:
use new tasklet API").]
Signed-off-by: Ze Zuo <zuoze1(a)huawei.com>
---
change since v1:
-- add some comments.
drivers/net/usb/pegasus.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
index 138279bbb544..99a8702c1df7 100644
--- a/drivers/net/usb/pegasus.c
+++ b/drivers/net/usb/pegasus.c
@@ -828,8 +828,19 @@ static void unlink_all_urbs(pegasus_t *pegasus)
static int alloc_urbs(pegasus_t *pegasus)
{
+ static const u8 bulk_ep_addr[] = {
+ 1 | USB_DIR_IN,
+ 2 | USB_DIR_OUT,
+ 0};
+ static const u8 int_ep_addr[] = {
+ 3 | USB_DIR_IN,
+ 0};
int res = -ENOMEM;
+ if (!usb_check_bulk_endpoints(pegasus->intf, bulk_ep_addr) ||
+ !usb_check_int_endpoints(pegasus->intf, int_ep_addr))
+ return -ENODEV;
+
pegasus->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
if (!pegasus->rx_urb) {
return res;
@@ -1170,6 +1181,7 @@ static int pegasus_probe(struct usb_interface *intf,
pegasus = netdev_priv(net);
pegasus->dev_index = dev_index;
+ pegasus->intf = intf;
res = alloc_urbs(pegasus);
if (res < 0) {
@@ -1181,7 +1193,6 @@ static int pegasus_probe(struct usb_interface *intf,
INIT_DELAYED_WORK(&pegasus->carrier_check, check_carrier);
- pegasus->intf = intf;
pegasus->usb = dev;
pegasus->net = net;
--
2.25.1
From: Kaushlendra Kumar <kaushlendra.kumar(a)intel.com>
stable inclusion
from stable-v6.6.124
commit d61171cf097156030142643942c217759a9cc806
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8792
CVE: CVE-2026-23260
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit f3f380ce6b3d5c9805c7e0b3d5bc28d9ec41e2e8 ]
regcache_maple_write() allocates a new block ('entry') to merge
adjacent ranges and then stores it with mas_store_gfp().
When mas_store_gfp() fails, the new 'entry' remains allocated and
is never freed, leaking memory.
Free 'entry' on the failure path; on success continue freeing the
replaced neighbor blocks ('lower', 'upper').
Signed-off-by: Kaushlendra Kumar <kaushlendra.kumar(a)intel.com>
Link: https://patch.msgid.link/20260105031820.260119-1-kaushlendra.kumar@intel.com
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yuan can <yuancan(a)huawei.com>
---
drivers/base/regmap/regcache-maple.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/base/regmap/regcache-maple.c b/drivers/base/regmap/regcache-maple.c
index fb5761a5ef6e..86de71ce2c19 100644
--- a/drivers/base/regmap/regcache-maple.c
+++ b/drivers/base/regmap/regcache-maple.c
@@ -96,12 +96,13 @@ static int regcache_maple_write(struct regmap *map, unsigned int reg,
mas_unlock(&mas);
- if (ret == 0) {
- kfree(lower);
- kfree(upper);
+ if (ret) {
+ kfree(entry);
+ return ret;
}
-
- return ret;
+ kfree(lower);
+ kfree(upper);
+ return 0;
}
static int regcache_maple_drop(struct regmap *map, unsigned int min,
--
2.43.0
[PATCH OLK-5.10] ALSA: usb-audio: Use correct version for UAC3 header validation
by Lin Ruifeng 02 Apr '26
From: Jun Seo <jun.seo.93(a)proton.me>
stable inclusion
from stable-v6.1.167
commit 0dcd1ed96c03459cf14706885c9dd3c1fd8bd29f
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13963
CVE: CVE-2026-23318
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 54f9d645a5453d0bfece0c465d34aaf072ea99fa upstream.
The entry of the validators table for UAC3 AC header descriptor is
defined with the wrong protocol version UAC_VERSION_2, while it should
have been UAC_VERSION_3. This results in the validator never matching
for actual UAC3 devices (protocol == UAC_VERSION_3), causing their
header descriptors to bypass validation entirely. A malicious USB
device presenting a truncated UAC3 header could exploit this to cause
out-of-bounds reads when the driver later accesses unvalidated
descriptor fields.
The bug was introduced in the same commit as the recently fixed UAC3
feature unit sub-type typo, and appears to be from the same copy-paste
error when the UAC3 section was created from the UAC2 section.
Fixes: 57f8770620e9 ("ALSA: usb-audio: More validations of descriptor units")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Jun Seo <jun.seo.93(a)proton.me>
Link: https://patch.msgid.link/20260226010820.36529-1-jun.seo.93@proton.me
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Lin Ruifeng <linruifeng4(a)huawei.com>
---
sound/usb/validate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sound/usb/validate.c b/sound/usb/validate.c
index 4f4e8e87a14c..61eb3f39e092 100644
--- a/sound/usb/validate.c
+++ b/sound/usb/validate.c
@@ -277,7 +277,7 @@ static const struct usb_desc_validator audio_validators[] = {
/* UAC_VERSION_2, UAC2_SAMPLE_RATE_CONVERTER: not implemented yet */
/* UAC3 */
- FIXED(UAC_VERSION_2, UAC_HEADER, struct uac3_ac_header_descriptor),
+ FIXED(UAC_VERSION_3, UAC_HEADER, struct uac3_ac_header_descriptor),
FIXED(UAC_VERSION_3, UAC_INPUT_TERMINAL,
struct uac3_input_terminal_descriptor),
FIXED(UAC_VERSION_3, UAC_OUTPUT_TERMINAL,
--
2.43.0
[PATCH OLK-5.10] bus: fsl-mc: fix use-after-free in driver_override_show()
by Lin Ruifeng 02 Apr '26
From: Gui-Dong Han <hanguidong02(a)gmail.com>
stable inclusion
from stable-v6.18.11
commit 1d6bd6183e723a7b256ff34bbb5b498b5f4f2ec0
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13796
CVE: CVE-2026-23221
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 148891e95014b5dc5878acefa57f1940c281c431 upstream.
The driver_override_show() function reads the driver_override string
without holding the device_lock. However, driver_override_store() uses
driver_set_override(), which modifies and frees the string while holding
the device_lock.
This can result in a concurrent use-after-free if the string is freed
by the store function while being read by the show function.
Fix this by holding the device_lock around the read operation.
Fixes: 1f86a00c1159 ("bus/fsl-mc: add support for 'driver_override' in the mc-bus")
Cc: stable(a)vger.kernel.org
Signed-off-by: Gui-Dong Han <hanguidong02(a)gmail.com>
Reviewed-by: Ioana Ciornei <ioana.ciornei(a)nxp.com>
Link: https://lore.kernel.org/r/20251202174438.12658-1-hanguidong02@gmail.com
Signed-off-by: Christophe Leroy (CS GROUP) <chleroy(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
drivers/bus/fsl-mc/fsl-mc-bus.c
[Context Conflicts]
Signed-off-by: Lin Ruifeng <linruifeng4(a)huawei.com>
---
drivers/bus/fsl-mc/fsl-mc-bus.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
index 0481b8a321b6..59389ad93595 100644
--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
+++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
@@ -178,12 +178,14 @@ static ssize_t driver_override_store(struct device *dev,
if (cp)
*cp = '\0';
+ device_lock(dev);
if (strlen(driver_override)) {
mc_dev->driver_override = driver_override;
} else {
kfree(driver_override);
mc_dev->driver_override = NULL;
}
+ device_unlock(dev);
kfree(old);
@@ -194,8 +196,12 @@ static ssize_t driver_override_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
+ ssize_t len;
- return snprintf(buf, PAGE_SIZE, "%s\n", mc_dev->driver_override);
+ device_lock(dev);
+ len = snprintf(buf, PAGE_SIZE, "%s\n", mc_dev->driver_override);
+ device_unlock(dev);
+ return len;
}
static DEVICE_ATTR_RW(driver_override);
--
2.43.0
*** fix CVE-2026-23208 ***
Edward Adam Davis (1):
ALSA: usb-audio: Prevent excessive number of frames
Takashi Iwai (1):
ALSA: usb-audio: Use the right limit for PCM OOB check
sound/usb/pcm.c | 3 +++
1 file changed, 3 insertions(+)
--
2.43.0
Reuse SUBSYS for xcu and freezer to preserve KABI
Liu Kai (5):
xSched/cgroup: reuse SUBSYS for xcu and freezer to preserve KABI
xSched/cgroup: make xcu.stat invisible at root cgroup
cgroup: sync CGROUP_SUBSYS_COUNT limit with upstream to 16
xSched: enable CONFIG_CGROUP_XCU and CONFIG_XCU_SCHED_CFS in arm64/x86
defconfig
xSched: update xSched manual for xcu cmdline enable option
Documentation/scheduler/xsched.md | 6 +-
arch/arm64/configs/openeuler_defconfig | 3 +-
arch/x86/configs/openeuler_defconfig | 3 +-
include/linux/cgroup_subsys.h | 8 +-
include/linux/freezer.h | 24 ++++
kernel/cgroup/cgroup.c | 2 +-
kernel/cgroup/legacy_freezer.c | 25 ++--
kernel/xsched/cgroup.c | 166 +++++++++++++++++++++++--
8 files changed, 209 insertions(+), 28 deletions(-)
--
2.34.1
From: Deepanshu Kartikey <kartikey406(a)gmail.com>
stable inclusion
from stable-v6.12.78
commit 08de46a75f91a6661bc1ce0a93614f4bc313c581
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14006
CVE: CVE-2026-23375
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit dd085fe9a8ebfc5d10314c60452db38d2b75e609 upstream.
file_thp_enabled() incorrectly allows THP for files on anonymous inodes
(e.g. guest_memfd and secretmem). These files are created via
alloc_file_pseudo(), which does not call get_write_access() and leaves
inode->i_writecount at 0. Combined with S_ISREG(inode->i_mode) being
true, they appear as read-only regular files when
CONFIG_READ_ONLY_THP_FOR_FS is enabled, making them eligible for THP
collapse.
Anonymous inodes can never pass the inode_is_open_for_write() check
since their i_writecount is never incremented through the normal VFS
open path. The right thing to do is to exclude them from THP eligibility
altogether, since CONFIG_READ_ONLY_THP_FOR_FS was designed for real
filesystem files (e.g. shared libraries), not for pseudo-filesystem
inodes.
For guest_memfd, this allows khugepaged and MADV_COLLAPSE to create
large folios in the page cache via the collapse path, but the
guest_memfd fault handler does not support large folios. This triggers
WARN_ON_ONCE(folio_test_large(folio)) in kvm_gmem_fault_user_mapping().
For secretmem, collapse_file() tries to copy page contents through the
direct map, but secretmem pages are removed from the direct map. This
can result in a kernel crash:
BUG: unable to handle page fault for address: ffff88810284d000
RIP: 0010:memcpy_orig+0x16/0x130
Call Trace:
collapse_file
hpage_collapse_scan_file
madvise_collapse
Secretmem is not affected by the crash on upstream as the memory failure
recovery handles the failed copy gracefully, but it still triggers
confusing false memory failure reports:
Memory failure: 0x106d96f: recovery action for clean unevictable
LRU page: Recovered
Check IS_ANON_FILE(inode) in file_thp_enabled() to deny THP for all
anonymous inode files.
Link: https://syzkaller.appspot.com/bug?extid=33a04338019ac7e43a44
Link: https://lore.kernel.org/linux-mm/CAEvNRgHegcz3ro35ixkDw39ES8=U6rs6S7iP0gkR9…
Link: https://lkml.kernel.org/r/20260214001535.435626-1-kartikey406@gmail.com
Fixes: 7fbb5e188248 ("mm: remove VM_EXEC requirement for THP eligibility")
Signed-off-by: Deepanshu Kartikey <Kartikey406(a)gmail.com>
Reported-by: syzbot+33a04338019ac7e43a44(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=33a04338019ac7e43a44
Tested-by: syzbot+33a04338019ac7e43a44(a)syzkaller.appspotmail.com
Tested-by: Lance Yang <lance.yang(a)linux.dev>
Acked-by: David Hildenbrand (Arm) <david(a)kernel.org>
Reviewed-by: Barry Song <baohua(a)kernel.org>
Reviewed-by: Ackerley Tng <ackerleytng(a)google.com>
Tested-by: Ackerley Tng <ackerleytng(a)google.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes(a)oracle.com>
Cc: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Cc: Dev Jain <dev.jain(a)arm.com>
Cc: Fangrui Song <i(a)maskray.me>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Nico Pache <npache(a)redhat.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Yang Shi <shy828301(a)gmail.com>
Cc: Zi Yan <ziy(a)nvidia.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
[ Ackerley: we don't have IS_ANON_FILE() yet. As guest_memfd does
not apply yet, simply check for secretmem explicitly. ]
Signed-off-by: Ackerley Tng <ackerleytng(a)google.com>
Reviewed-by: David Hildenbrand (Arm) <david(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Ze Zuo <zuoze1(a)huawei.com>
---
include/linux/huge_mm.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 328b1fbb134c..86565da790e0 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -7,6 +7,7 @@
#include <linux/fs.h> /* only for vma_is_dax() */
#include <linux/kobject.h>
+#include <linux/secretmem.h>
vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
@@ -270,6 +271,9 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
inode = vma->vm_file->f_inode;
+ if (secretmem_mapping(inode->i_mapping))
+ return false;
+
return (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS)) &&
!inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
}
--
2.25.1
From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
stable inclusion
from stable-v6.6.130
commit 7f8505c7ce3f186ef9d2495f3c0bd6ad6fce999f
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13993
CVE: CVE-2026-23290
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 11de1d3ae5565ed22ef1f89d73d8f2d00322c699 upstream.
The pegasus driver should validate that the device it is probing has the
proper number and types of USB endpoints it is expecting before it binds
to it. If a malicious device were to not have the same urbs the driver
will crash later on when it blindly accesses these endpoints.
Cc: Petko Manolov <petkan(a)nucleusys.com>
Cc: stable <stable(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Link: https://patch.msgid.link/2026022347-legibly-attest-cc5c@gregkh
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Ze Zuo <zuoze1(a)huawei.com>
---
drivers/net/usb/pegasus.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
index c514483134f0..7cc949460edc 100644
--- a/drivers/net/usb/pegasus.c
+++ b/drivers/net/usb/pegasus.c
@@ -804,8 +804,19 @@ static void unlink_all_urbs(pegasus_t *pegasus)
static int alloc_urbs(pegasus_t *pegasus)
{
+ static const u8 bulk_ep_addr[] = {
+ 1 | USB_DIR_IN,
+ 2 | USB_DIR_OUT,
+ 0};
+ static const u8 int_ep_addr[] = {
+ 3 | USB_DIR_IN,
+ 0};
int res = -ENOMEM;
+ if (!usb_check_bulk_endpoints(pegasus->intf, bulk_ep_addr) ||
+ !usb_check_int_endpoints(pegasus->intf, int_ep_addr))
+ return -ENODEV;
+
pegasus->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
if (!pegasus->rx_urb) {
return res;
@@ -1146,6 +1157,7 @@ static int pegasus_probe(struct usb_interface *intf,
pegasus = netdev_priv(net);
pegasus->dev_index = dev_index;
+ pegasus->intf = intf;
res = alloc_urbs(pegasus);
if (res < 0) {
@@ -1157,7 +1169,6 @@ static int pegasus_probe(struct usb_interface *intf,
INIT_DELAYED_WORK(&pegasus->carrier_check, check_carrier);
- pegasus->intf = intf;
pegasus->usb = dev;
pegasus->net = net;
--
2.25.1
From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
mainline inclusion
from mainline-v7.0-rc2
commit 11de1d3ae5565ed22ef1f89d73d8f2d00322c699
category: bugfix
bugzilla: CVE-2026-23290
CVE: CVE-2026-23290
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The pegasus driver should validate that the device it is probing has the
proper number and types of USB endpoints it is expecting before it binds
to it. If a malicious device were to not have the same urbs the driver
will crash later on when it blindly accesses these endpoints.
Cc: Petko Manolov <petkan(a)nucleusys.com>
Cc: stable <stable(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Link: https://patch.msgid.link/2026022347-legibly-attest-cc5c@gregkh
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Ze Zuo <zuoze1(a)huawei.com>
---
drivers/net/usb/pegasus.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
index 138279bbb544..99a8702c1df7 100644
--- a/drivers/net/usb/pegasus.c
+++ b/drivers/net/usb/pegasus.c
@@ -828,8 +828,19 @@ static void unlink_all_urbs(pegasus_t *pegasus)
static int alloc_urbs(pegasus_t *pegasus)
{
+ static const u8 bulk_ep_addr[] = {
+ 1 | USB_DIR_IN,
+ 2 | USB_DIR_OUT,
+ 0};
+ static const u8 int_ep_addr[] = {
+ 3 | USB_DIR_IN,
+ 0};
int res = -ENOMEM;
+ if (!usb_check_bulk_endpoints(pegasus->intf, bulk_ep_addr) ||
+ !usb_check_int_endpoints(pegasus->intf, int_ep_addr))
+ return -ENODEV;
+
pegasus->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
if (!pegasus->rx_urb) {
return res;
@@ -1170,6 +1181,7 @@ static int pegasus_probe(struct usb_interface *intf,
pegasus = netdev_priv(net);
pegasus->dev_index = dev_index;
+ pegasus->intf = intf;
res = alloc_urbs(pegasus);
if (res < 0) {
@@ -1181,7 +1193,6 @@ static int pegasus_probe(struct usb_interface *intf,
INIT_DELAYED_WORK(&pegasus->carrier_check, check_carrier);
- pegasus->intf = intf;
pegasus->usb = dev;
pegasus->net = net;
--
2.25.1
[PATCH OLK-5.10 0/1] cpufreq: CPPC: Use `ktime` to replace `jiffies` to get the system time in cppc_get_perf_ctrs_sample()
by Lifeng Zheng 02 Apr '26
From: Hongye Lin <linhongye(a)h-partners.com>
driver inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8860
----------------------------------------------------------------------
Lifeng Zheng (1):
cpufreq: CPPC: Use `ktime` to replace `jiffies` to get the system time
in cppc_get_perf_ctrs_sample()
drivers/cpufreq/cppc_cpufreq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--
2.33.0
[PATCH OLK-6.6 0/1] cpufreq: CPPC: Use `ktime` to replace `jiffies` to get the system time in cppc_get_perf_ctrs_pair()
by Lifeng Zheng 01 Apr '26
From: Hongye Lin <linhongye(a)h-partners.com>
driver inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8860
----------------------------------------------------------------------
Lifeng Zheng (1):
cpufreq: CPPC: Use `ktime` to replace `jiffies` to get the system time
in cppc_get_perf_ctrs_pair()
drivers/cpufreq/cppc_cpufreq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--
2.33.0
[PATCH OLK-6.6] xSched/cgroup: utilize xcu cmdline to dynamically switch between xcu and freezer subsys
by Liu Kai 01 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8424
----------------------------------------
To support both cgroup v1 and v2 while adhering to the
CGROUP_SUBSYS_COUNT limit (16), this patch introduces a mechanism to
share the same SUBSYS(xcu) slot between the 'xcu' and 'freezer'
subsystems.
Since 'xcu' is a cgroup v2-only controller and 'freezer' is a cgroup
v1-only controller, they are mutually exclusive at runtime. We introduce
a new kernel command line parameter, "xcu", to control this behavior
dynamically.
This approach allows us to enable both CONFIG_CGROUP_XCU and
CONFIG_CGROUP_FREEZER simultaneously without exceeding the subsystem
limit.
The behavior based on the "xcu" cmdline parameter is as follows:
1. xcu=disable, cgroup v1:
- The legacy 'freezer' subsystem is active and functional.
- The 'xcu' subsystem remains dormant.
2. xcu=enable, cgroup v1:
- The 'freezer' subsystem is effectively disabled/blocked.
- (Note: 'xcu' is not usable in v1 mode as it is v2-only).
3. xcu=disable, cgroup v2:
- The 'xcu' subsystem is not enabled in the hierarchy.
4. xcu=enable, cgroup v2:
- The 'xcu' subsystem is active and usable.
- The 'freezer' logic is bypassed.
This ensures backward compatibility for v1 users while enabling the new
functionality for v2, all within the constraints of the kernel subsystem
limit.
Fixes: 43bbefc53356 ("xsched: Add XCU control group implementation and its backend in xsched CFS")
Signed-off-by: Liu Kai <liukai284(a)huawei.com>
---
Documentation/scheduler/xsched.md | 6 +-
arch/arm64/configs/openeuler_defconfig | 3 +-
arch/x86/configs/openeuler_defconfig | 3 +-
include/linux/cgroup_subsys.h | 8 +-
include/linux/freezer.h | 24 ++++
kernel/cgroup/cgroup.c | 2 +-
kernel/cgroup/legacy_freezer.c | 25 ++--
kernel/xsched/cgroup.c | 166 +++++++++++++++++++++++--
8 files changed, 209 insertions(+), 28 deletions(-)
diff --git a/Documentation/scheduler/xsched.md b/Documentation/scheduler/xsched.md
index 11dc0c964e0a..c5e643ab35f0 100644
--- a/Documentation/scheduler/xsched.md
+++ b/Documentation/scheduler/xsched.md
@@ -64,11 +64,11 @@ CONFIG_CGROUP_XCU=y
# Edit the kernel boot configuration file as appropriate
vim /etc/grub2-efi.cfg
-# Add cmdline options for the XSched kernel: disable driver signature verification, enable cgroup-v2
-module.sig_enforce=0 systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all
+# Add cmdline options for the XSched kernel: disable driver signature verification, enable cgroup-v2, and enable the xcu cgroup subsystem
+module.sig_enforce=0 systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all xcu=enable
```
-After saving the boot file, reboot into the new kernel
+After saving the boot file, reboot into the new kernel. **Note: the xcu subsystem only supports cgroup-v2**
### 1.3 Rebuild the driver
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index fc581adb563b..622d44e6d9ff 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -101,7 +101,8 @@ CONFIG_XCU_SCHEDULER=y
CONFIG_XCU_VSTREAM=y
CONFIG_XSCHED_NR_CUS=128
CONFIG_XCU_SCHED_RT=y
-# CONFIG_XCU_SCHED_CFS is not set
+CONFIG_XCU_SCHED_CFS=y
+CONFIG_CGROUP_XCU=y
#
# CPU/Task time and stats accounting
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index d493dbf6b8a1..e66724b15bb4 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -121,7 +121,8 @@ CONFIG_XCU_SCHEDULER=y
CONFIG_XCU_VSTREAM=y
CONFIG_XSCHED_NR_CUS=128
CONFIG_XCU_SCHED_RT=y
-# CONFIG_XCU_SCHED_CFS is not set
+CONFIG_XCU_SCHED_CFS=y
+CONFIG_CGROUP_XCU=y
#
# CPU/Task time and stats accounting
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index e65ae90946c2..9ee14c9cab33 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -33,7 +33,9 @@ SUBSYS(memory)
SUBSYS(devices)
#endif
-#if IS_ENABLED(CONFIG_CGROUP_FREEZER)
+#if IS_ENABLED(CONFIG_CGROUP_XCU)
+SUBSYS(xcu)
+#elif IS_ENABLED(CONFIG_CGROUP_FREEZER)
SUBSYS(freezer)
#endif
@@ -61,10 +63,6 @@ SUBSYS(pids)
SUBSYS(rdma)
#endif
-#if IS_ENABLED(CONFIG_CGROUP_XCU)
-SUBSYS(xcu)
-#endif
-
#if IS_ENABLED(CONFIG_CGROUP_MISC)
SUBSYS(misc)
#endif
diff --git a/include/linux/freezer.h b/include/linux/freezer.h
index b303472255be..0c7a6da03d43 100644
--- a/include/linux/freezer.h
+++ b/include/linux/freezer.h
@@ -10,6 +10,10 @@
#include <linux/atomic.h>
#include <linux/jump_label.h>
+#ifdef CONFIG_CGROUP_XCU
+#include <linux/cgroup-defs.h>
+#endif
+
#ifdef CONFIG_FREEZER
DECLARE_STATIC_KEY_FALSE(freezer_active);
@@ -87,4 +91,24 @@ static inline void set_freezable(void) {}
#endif /* !CONFIG_FREEZER */
+/*
+ * When CONFIG_CGROUP_XCU is enabled, freezer_cgrp_subsys and xcu_cgrp_subsys
+ * share the same set of cgroup_subsys hook functions. Consequently, the hooks for
+ * freezer_cgrp_subsys must be exposed externally to allow linkage with the XCU
+ * cgroup_subsys.
+ *
+ */
+#ifdef CONFIG_CGROUP_XCU
+#define freezer_cgrp_id xcu_cgrp_id
+
+extern struct cftype files[];
+struct cgroup_subsys_state *
+freezer_css_alloc(struct cgroup_subsys_state *parent_css);
+int freezer_css_online(struct cgroup_subsys_state *css);
+void freezer_css_offline(struct cgroup_subsys_state *css);
+void freezer_css_free(struct cgroup_subsys_state *css);
+void freezer_attach(struct cgroup_taskset *tset);
+void freezer_fork(struct task_struct *task);
+#endif /* CONFIG_CGROUP_XCU */
+
#endif /* FREEZER_H_INCLUDED */
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 17521bc192ee..04301432e84a 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -6256,7 +6256,7 @@ int __init cgroup_init(void)
struct cgroup_subsys *ss;
int ssid;
- BUILD_BUG_ON(CGROUP_SUBSYS_COUNT > 17);
+ BUILD_BUG_ON(CGROUP_SUBSYS_COUNT > 16);
BUG_ON(cgroup_init_cftypes(NULL, cgroup_base_files));
BUG_ON(cgroup_init_cftypes(NULL, cgroup_psi_files));
BUG_ON(cgroup_init_cftypes(NULL, cgroup1_base_files));
diff --git a/kernel/cgroup/legacy_freezer.c b/kernel/cgroup/legacy_freezer.c
index bee2f9ea5e4a..9ef242b73947 100644
--- a/kernel/cgroup/legacy_freezer.c
+++ b/kernel/cgroup/legacy_freezer.c
@@ -24,6 +24,17 @@
#include <linux/mutex.h>
#include <linux/cpu.h>
+/*
+ * The STATIC macro is used to handle this conditional visibility:
+ * - Enabled: interfaces are defined as non-static (exported).
+ * - Disabled: interfaces remain static (file-local).
+ */
+#ifdef CONFIG_CGROUP_XCU
+#define STATIC
+#else
+#define STATIC static
+#endif
+
/*
* A cgroup is freezing if any FREEZING flags are set. FREEZING_SELF is
* set if "FROZEN" is written to freezer.state cgroupfs file, and cleared
@@ -83,7 +94,7 @@ static const char *freezer_state_strs(unsigned int state)
return "THAWED";
};
-static struct cgroup_subsys_state *
+STATIC struct cgroup_subsys_state *
freezer_css_alloc(struct cgroup_subsys_state *parent_css)
{
struct freezer *freezer;
@@ -103,7 +114,7 @@ freezer_css_alloc(struct cgroup_subsys_state *parent_css)
* parent's freezing state while holding both parent's and our
* freezer->lock.
*/
-static int freezer_css_online(struct cgroup_subsys_state *css)
+STATIC int freezer_css_online(struct cgroup_subsys_state *css)
{
struct freezer *freezer = css_freezer(css);
struct freezer *parent = parent_freezer(freezer);
@@ -130,7 +141,7 @@ static int freezer_css_online(struct cgroup_subsys_state *css)
* @css is going away. Mark it dead and decrement system_freezing_count if
* it was holding one.
*/
-static void freezer_css_offline(struct cgroup_subsys_state *css)
+STATIC void freezer_css_offline(struct cgroup_subsys_state *css)
{
struct freezer *freezer = css_freezer(css);
@@ -146,7 +157,7 @@ static void freezer_css_offline(struct cgroup_subsys_state *css)
cpus_read_unlock();
}
-static void freezer_css_free(struct cgroup_subsys_state *css)
+STATIC void freezer_css_free(struct cgroup_subsys_state *css)
{
kfree(css_freezer(css));
}
@@ -160,7 +171,7 @@ static void freezer_css_free(struct cgroup_subsys_state *css)
* @freezer->lock. freezer_attach() makes the new tasks conform to the
* current state and all following state changes can see the new tasks.
*/
-static void freezer_attach(struct cgroup_taskset *tset)
+STATIC void freezer_attach(struct cgroup_taskset *tset)
{
struct task_struct *task;
struct cgroup_subsys_state *new_css;
@@ -205,7 +216,7 @@ static void freezer_attach(struct cgroup_taskset *tset)
* to do anything as freezer_attach() will put @task into the appropriate
* state.
*/
-static void freezer_fork(struct task_struct *task)
+STATIC void freezer_fork(struct task_struct *task)
{
struct freezer *freezer;
@@ -449,7 +460,7 @@ static u64 freezer_parent_freezing_read(struct cgroup_subsys_state *css,
return (bool)(freezer->state & CGROUP_FREEZING_PARENT);
}
-static struct cftype files[] = {
+STATIC struct cftype files[] = {
{
.name = "state",
.flags = CFTYPE_NOT_ON_ROOT,
diff --git a/kernel/xsched/cgroup.c b/kernel/xsched/cgroup.c
index 73f044475939..8a85faaa8dc4 100644
--- a/kernel/xsched/cgroup.c
+++ b/kernel/xsched/cgroup.c
@@ -21,6 +21,10 @@
#include <linux/xsched.h>
#include <linux/delay.h>
+#ifdef CONFIG_CGROUP_FREEZER
+#include <linux/freezer.h>
+#endif
+
static struct xsched_group root_xsched_group;
struct xsched_group *root_xcg = &root_xsched_group;
@@ -39,6 +43,61 @@ static const char xcu_sched_name[XSCHED_TYPE_NUM][SCHED_CLASS_MAX_LENGTH] = {
[XSCHED_TYPE_CFS] = "cfs"
};
+/*
+ * xcu_mode:
+ * 0 = disable (freezer cgroup)
+ * 1 = enable (xcu cgroup)
+ */
+static int xcu_mode;
+
+/**
+ * Parse the "xcu=" kernel command line parameter.
+ *
+ * Usage:
+ * xcu=enable → enable xcu_cgrp_subsys
+ * Otherwise → enable freezer_cgrp_subsys
+ *
+ * Returns:
+ * 1 (handled), 0 (not handled)
+ */
+static int __init xcu_setup(char *str)
+{
+ if (!str)
+ return 0;
+
+ if (strcmp(str, "enable") == 0)
+ xcu_mode = 1;
+
+ return 1;
+}
+__setup("xcu=", xcu_setup);
+
+static bool xcu_cgroup_enabled(void)
+{
+ return xcu_mode;
+}
+
+/**
+ * xcu_cgroup_check_compat - Verify XCU mode matches the cgroup hierarchy version.
+ *
+ * Checks if the current xcu_mode aligns with the cgroup subsystem's default
+ * hierarchy status.
+ *
+ * IMPORTANT: cgroup_subsys_on_dfl() only returns a valid version indicator
+ * after the cgroup filesystem has been mounted at the root node. Calling
+ * this function prior to mount may yield incorrect results.
+ *
+ * Return: true if compatible, false otherwise (with a warning logged).
+ */
+static bool xcu_cgroup_check_compat(void)
+{
+ if (xcu_mode != cgroup_subsys_on_dfl(xcu_cgrp_subsys)) {
+ XSCHED_WARN("XCU cgrp is incompatible with the cgroup version\n");
+ return false;
+ }
+ return true;
+}
+
static int xcu_cg_set_file_show(struct xsched_group *xg, int sched_class)
{
if (!xg) {
@@ -742,6 +801,7 @@ static struct cftype xcu_cg_files[] = {
},
{
.name = "stat",
+ .flags = CFTYPE_NOT_ON_ROOT,
.seq_show = xcu_stat,
},
{
@@ -753,17 +813,103 @@ static struct cftype xcu_cg_files[] = {
{} /* terminate */
};
+static struct cgroup_subsys_state *
+xcu_freezer_compat_css_alloc(struct cgroup_subsys_state *parent_css)
+{
+ /* Skip allocation if XCU cmdline mismatches the cgroup version. */
+ if (parent_css && !xcu_cgroup_check_compat())
+ return ERR_PTR(-EPERM);
+
+ if (xcu_cgroup_enabled())
+ return xcu_css_alloc(parent_css);
+
+#ifdef CONFIG_CGROUP_FREEZER
+ return freezer_css_alloc(parent_css);
+#else /* CONFIG_CGROUP_FREEZER=n xcu=disable cgroup=v1 */
+ if (!parent_css)
+ return &root_xsched_group.css;
+ else
+ return ERR_PTR(-EPERM);
+#endif
+}
+
+static int xcu_freezer_compat_css_online(struct cgroup_subsys_state *css)
+{
+ if (xcu_cgroup_enabled())
+ return xcu_css_online(css);
+
+#ifdef CONFIG_CGROUP_FREEZER
+ return freezer_css_online(css);
+#else
+ return 0;
+#endif
+}
+
+static void xcu_freezer_compat_css_offline(struct cgroup_subsys_state *css)
+{
+ if (xcu_cgroup_enabled())
+ return xcu_css_offline(css);
+
+#ifdef CONFIG_CGROUP_FREEZER
+ return freezer_css_offline(css);
+#endif
+}
+
+static void xcu_freezer_compat_css_released(struct cgroup_subsys_state *css)
+{
+ if (xcu_cgroup_enabled())
+ return xcu_css_released(css);
+}
+
+static void xcu_freezer_compat_css_free(struct cgroup_subsys_state *css)
+{
+ if (xcu_cgroup_enabled())
+ return xcu_css_free(css);
+
+#ifdef CONFIG_CGROUP_FREEZER
+ return freezer_css_free(css);
+#endif
+}
+
+static int xcu_freezer_compat_can_attach(struct cgroup_taskset *tset)
+{
+ if (xcu_cgroup_enabled())
+ return xcu_can_attach(tset);
+
+ return 0;
+}
+
+static void xcu_freezer_compat_cancel_attach(struct cgroup_taskset *tset)
+{
+ if (xcu_cgroup_enabled())
+ return xcu_cancel_attach(tset);
+}
+
+static void xcu_freezer_compat_attach(struct cgroup_taskset *tset)
+{
+ if (xcu_cgroup_enabled())
+ return xcu_attach(tset);
+
+#ifdef CONFIG_CGROUP_FREEZER
+ return freezer_attach(tset);
+#endif
+}
+
struct cgroup_subsys xcu_cgrp_subsys = {
- .css_alloc = xcu_css_alloc,
- .css_online = xcu_css_online,
- .css_offline = xcu_css_offline,
- .css_released = xcu_css_released,
- .css_free = xcu_css_free,
- .can_attach = xcu_can_attach,
- .cancel_attach = xcu_cancel_attach,
- .attach = xcu_attach,
+ .css_alloc = xcu_freezer_compat_css_alloc,
+ .css_online = xcu_freezer_compat_css_online,
+ .css_offline = xcu_freezer_compat_css_offline,
+ .css_released = xcu_freezer_compat_css_released,
+ .css_free = xcu_freezer_compat_css_free,
+ .can_attach = xcu_freezer_compat_can_attach,
+ .cancel_attach = xcu_freezer_compat_cancel_attach,
+ .attach = xcu_freezer_compat_attach,
.dfl_cftypes = xcu_cg_files,
+#ifdef CONFIG_CGROUP_FREEZER
+ .fork = freezer_fork,
+ .legacy_cftypes = files,
+ .legacy_name = "freezer",
+#else
.legacy_cftypes = xcu_cg_files,
- .early_init = false,
- .threaded = true
+#endif
};
--
2.34.1
[PATCH openEuler-1.0-LTS] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
by Jinjiang Tu 01 Apr '26
mainline inclusion
from mainline-v7.0-rc6
commit 4c5e7f0fcd592801c9cc18f29f80fbee84eb8669
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8836
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
On an arm64 server, we found that the folio obtained from a migration entry
isn't locked in softleaf_to_folio(). This issue triggers when mTHP splitting
and zap_nonpresent_ptes() race, and the root cause is a missing memory
barrier in softleaf_to_folio(). The race is as follows:
CPU0                                       CPU1
deferred_split_scan()                      zap_nonpresent_ptes()
  lock folio
  split_folio()
    unmap_folio()
      change ptes to migration entries
    __split_folio_to_order()               softleaf_to_folio()
      set flags (including PG_locked)        folio = pfn_folio(softleaf_to_pfn(entry))
        for tail pages
      smp_wmb()                              VM_WARN_ON_ONCE(!folio_test_locked(folio))
      prep_compound_page() for tail pages
In __split_folio_to_order(), smp_wmb() guarantees that the page flags of the
tail pages are visible before a tail page becomes non-compound. That
smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(), which
is missing. As a result, if zap_nonpresent_ptes() accesses a migration entry
that stores a tail pfn, softleaf_to_folio() may see the updated
compound_head of the tail page before page->flags.
This issue will trigger VM_WARN_ON_ONCE() in pfn_swap_entry_folio()
because of the race between folio split and zap_nonpresent_ptes()
leading to a folio incorrectly undergoing modification without a folio
lock being held.
This is a BUG_ON() before commit 93976a20345b ("mm: eliminate further
swapops predicates"), which was merged in v6.19-rc1.
To fix it, add missing smp_rmb() if the softleaf entry is migration entry
in softleaf_to_folio() and softleaf_to_page().
[tujinjiang(a)huawei.com: update function name and comments]
Link: https://lkml.kernel.org/r/20260321075214.3305564-1-tujinjiang@huawei.com
Link: https://lkml.kernel.org/r/20260319012541.4158561-1-tujinjiang@huawei.com
Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
Acked-by: David Hildenbrand (Arm) <david(a)kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs(a)kernel.org>
Cc: Barry Song <baohua(a)kernel.org>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Nanyong Sun <sunnanyong(a)huawei.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Vlastimil Babka <vbabka(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Conflicts:
include/linux/leafops.h
include/linux/swapops.h
[migration entry hasn't been renamed to softleaf entry.]
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
include/linux/swapops.h | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 22af9d8a84ae..c742e778e024 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -205,14 +205,28 @@ static inline unsigned long migration_entry_to_pfn(swp_entry_t entry)
return swp_offset(entry);
}
-static inline struct page *migration_entry_to_page(swp_entry_t entry)
+static inline void migration_entry_sync_page(struct page *head)
{
- struct page *p = pfn_to_page(swp_offset(entry));
+ /*
+ * Ensure we do not race with split, which might alter tail pages into new
+ * head pages and thus result in observing an unlocked page.
+ * This matches the write barrier in __split_huge_page_tail().
+ */
+ smp_rmb();
+
/*
* Any use of migration entries may only occur while the
* corresponding page is locked
*/
- BUG_ON(!PageLocked(compound_head(p)));
+ BUG_ON(!PageLocked(head));
+}
+
+static inline struct page *migration_entry_to_page(swp_entry_t entry)
+{
+ struct page *p = pfn_to_page(swp_offset(entry));
+
+ migration_entry_sync_page(compound_head(p));
+
return p;
}
--
2.43.0
[PATCH OLK-5.10] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
by Jinjiang Tu 01 Apr '26
mainline inclusion
from mainline-v7.0-rc6
commit 4c5e7f0fcd592801c9cc18f29f80fbee84eb8669
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8836
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
On an arm64 server, we found that the folio obtained from a migration entry
isn't locked in softleaf_to_folio(). This issue triggers when mTHP splitting
and zap_nonpresent_ptes() race, and the root cause is a missing memory
barrier in softleaf_to_folio(). The race is as follows:
CPU0                                       CPU1
deferred_split_scan()                      zap_nonpresent_ptes()
  lock folio
  split_folio()
    unmap_folio()
      change ptes to migration entries
    __split_folio_to_order()               softleaf_to_folio()
      set flags (including PG_locked)        folio = pfn_folio(softleaf_to_pfn(entry))
        for tail pages
      smp_wmb()                              VM_WARN_ON_ONCE(!folio_test_locked(folio))
      prep_compound_page() for tail pages
In __split_folio_to_order(), smp_wmb() guarantees that the page flags of the
tail pages are visible before a tail page becomes non-compound. That
smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(), which
is missing. As a result, if zap_nonpresent_ptes() accesses a migration entry
that stores a tail pfn, softleaf_to_folio() may see the updated
compound_head of the tail page before page->flags.
To fix it, add missing smp_rmb() if the softleaf entry is migration entry
in softleaf_to_folio() and softleaf_to_page().
[tujinjiang(a)huawei.com: update function name and comments]
Link: https://lkml.kernel.org/r/20260321075214.3305564-1-tujinjiang@huawei.com
Link: https://lkml.kernel.org/r/20260319012541.4158561-1-tujinjiang@huawei.com
Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
Acked-by: David Hildenbrand (Arm) <david(a)kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs(a)kernel.org>
Cc: Barry Song <baohua(a)kernel.org>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Nanyong Sun <sunnanyong(a)huawei.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Vlastimil Babka <vbabka(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Conflicts:
include/linux/leafops.h
include/linux/swapops.h
mm/filemap.c
[migration entry hasn't been renamed to softleaf entry. Add new helper
migration_entry_to_compound_page().]
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
include/linux/swapops.h | 31 ++++++++++++++++++++++++++++---
mm/filemap.c | 2 +-
2 files changed, 29 insertions(+), 4 deletions(-)
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index e749c4c86b26..ed33367fb6a6 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -194,17 +194,42 @@ static inline unsigned long migration_entry_to_pfn(swp_entry_t entry)
return swp_offset(entry);
}
-static inline struct page *migration_entry_to_page(swp_entry_t entry)
+static inline void migration_entry_sync_page(struct page *head)
{
- struct page *p = pfn_to_page(swp_offset(entry));
+ /*
+ * Ensure we do not race with split, which might alter tail pages into new
+ * head pages and thus result in observing an unlocked page.
+ * This matches the write barrier in __split_huge_page_tail().
+ */
+ smp_rmb();
+
/*
* Any use of migration entries may only occur while the
* corresponding page is locked
*/
- BUG_ON(!PageLocked(compound_head(p)));
+ BUG_ON(!PageLocked(head));
+}
+
+static inline struct page *migration_entry_to_page(swp_entry_t entry)
+{
+ struct page *p = pfn_to_page(swp_offset(entry));
+
+ migration_entry_sync_page(compound_head(p));
+
return p;
}
+static inline struct page *migration_entry_to_compound_page(swp_entry_t entry)
+{
+ struct page *p = pfn_to_page(swp_offset(entry));
+ struct page *head;
+
+ head = compound_head(p);
+ migration_entry_sync_page(head);
+
+ return head;
+}
+
static inline void make_migration_entry_read(swp_entry_t *entry)
{
*entry = swp_entry(SWP_MIGRATION_READ, swp_offset(*entry));
diff --git a/mm/filemap.c b/mm/filemap.c
index 18e304ce6229..c2932db70212 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1334,7 +1334,7 @@ void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
bool delayacct = false;
unsigned long pflags = 0;
wait_queue_head_t *q;
- struct page *page = compound_head(migration_entry_to_page(entry));
+ struct page *page = migration_entry_to_compound_page(entry);
q = page_waitqueue(page);
if (!PageUptodate(page) && PageWorkingset(page)) {
--
2.43.0
[PATCH OLK-6.6] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
by Jinjiang Tu 01 Apr '26
mainline inclusion
from mainline-v7.0-rc6
commit 4c5e7f0fcd592801c9cc18f29f80fbee84eb8669
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8836
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
On an arm64 server, we found that the folio obtained from a migration entry
isn't locked in softleaf_to_folio(). This issue triggers when mTHP splitting
and zap_nonpresent_ptes() race, and the root cause is a missing memory
barrier in softleaf_to_folio(). The race is as follows:
CPU0                                       CPU1
deferred_split_scan()                      zap_nonpresent_ptes()
  lock folio
  split_folio()
    unmap_folio()
      change ptes to migration entries
    __split_folio_to_order()               softleaf_to_folio()
      set flags (including PG_locked)        folio = pfn_folio(softleaf_to_pfn(entry))
        for tail pages
      smp_wmb()                              VM_WARN_ON_ONCE(!folio_test_locked(folio))
      prep_compound_page() for tail pages
In __split_folio_to_order(), smp_wmb() guarantees that the page flags of the
tail pages are visible before a tail page becomes non-compound. That
smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(), which
is missing. As a result, if zap_nonpresent_ptes() accesses a migration entry
that stores a tail pfn, softleaf_to_folio() may see the updated
compound_head of the tail page before page->flags.
This issue will trigger VM_WARN_ON_ONCE() in pfn_swap_entry_folio()
because of the race between folio split and zap_nonpresent_ptes()
leading to a folio incorrectly undergoing modification without a folio
lock being held.
This is a BUG_ON() before commit 93976a20345b ("mm: eliminate further
swapops predicates"), which was merged in v6.19-rc1.
To fix it, add missing smp_rmb() if the softleaf entry is migration entry
in softleaf_to_folio() and softleaf_to_page().
[tujinjiang(a)huawei.com: update function name and comments]
Link: https://lkml.kernel.org/r/20260321075214.3305564-1-tujinjiang@huawei.com
Link: https://lkml.kernel.org/r/20260319012541.4158561-1-tujinjiang@huawei.com
Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
Acked-by: David Hildenbrand (Arm) <david(a)kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs(a)kernel.org>
Cc: Barry Song <baohua(a)kernel.org>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Nanyong Sun <sunnanyong(a)huawei.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Vlastimil Babka <vbabka(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Conflicts:
include/linux/leafops.h
include/linux/swapops.h
[migration entry hasn't been renamed to softleaf entry.]
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
include/linux/swapops.h | 26 ++++++++++++++++++--------
1 file changed, 18 insertions(+), 8 deletions(-)
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index b32d696242b6..7bb5937a3f3c 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -500,15 +500,28 @@ static inline int is_userswap_entry(swp_entry_t entry)
}
#endif
-static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+static inline void migration_entry_sync(struct folio *folio)
{
- struct page *p = pfn_to_page(swp_offset_pfn(entry));
+ /*
+ * Ensure we do not race with split, which might alter tail pages into new
+ * folios and thus result in observing an unlocked folio.
+ * This matches the write barrier in __split_folio_to_order().
+ */
+ smp_rmb();
/*
* Any use of migration entries may only occur while the
* corresponding page is locked
*/
- BUG_ON(is_migration_entry(entry) && !PageLocked(p));
+ BUG_ON(!folio_test_locked(folio));
+}
+
+static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+{
+ struct page *p = pfn_to_page(swp_offset_pfn(entry));
+
+ if (is_migration_entry(entry))
+ migration_entry_sync(page_folio(p));
return p;
}
@@ -517,11 +530,8 @@ static inline struct folio *pfn_swap_entry_folio(swp_entry_t entry)
{
struct folio *folio = pfn_folio(swp_offset_pfn(entry));
- /*
- * Any use of migration entries may only occur while the
- * corresponding folio is locked
- */
- BUG_ON(is_migration_entry(entry) && !folio_test_locked(folio));
+ if (is_migration_entry(entry))
+ migration_entry_sync(folio);
return folio;
}
--
2.43.0
[PATCH openEuler-1.0-LTS] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
by Jinjiang Tu 31 Mar '26
mainline inclusion
from mainline-v7.0-rc6
commit 4c5e7f0fcd592801c9cc18f29f80fbee84eb8669
category: bugfix
bugzilla: https://gitcode.com/openeuler/kernel/issues/8836
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
On an arm64 server, we found that the folio obtained from a migration entry
isn't locked in softleaf_to_folio(). This issue triggers when mTHP splitting
and zap_nonpresent_ptes() race, and the root cause is a missing memory
barrier in softleaf_to_folio(). The race is as follows:
CPU0                                       CPU1
deferred_split_scan()                      zap_nonpresent_ptes()
  lock folio
  split_folio()
    unmap_folio()
      change ptes to migration entries
    __split_folio_to_order()               softleaf_to_folio()
      set flags (including PG_locked)        folio = pfn_folio(softleaf_to_pfn(entry))
        for tail pages
      smp_wmb()                              VM_WARN_ON_ONCE(!folio_test_locked(folio))
      prep_compound_page() for tail pages
In __split_folio_to_order(), smp_wmb() guarantees that the page flags of the
tail pages are visible before a tail page becomes non-compound. That
smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(), which
is missing. As a result, if zap_nonpresent_ptes() accesses a migration entry
that stores a tail pfn, softleaf_to_folio() may see the updated
compound_head of the tail page before page->flags.
This issue will trigger VM_WARN_ON_ONCE() in pfn_swap_entry_folio()
because of the race between folio split and zap_nonpresent_ptes()
leading to a folio incorrectly undergoing modification without a folio
lock being held.
This is a BUG_ON() before commit 93976a20345b ("mm: eliminate further
swapops predicates"), which was merged in v6.19-rc1.
To fix it, add missing smp_rmb() if the softleaf entry is migration entry
in softleaf_to_folio() and softleaf_to_page().
[tujinjiang(a)huawei.com: update function name and comments]
Link: https://lkml.kernel.org/r/20260321075214.3305564-1-tujinjiang@huawei.com
Link: https://lkml.kernel.org/r/20260319012541.4158561-1-tujinjiang@huawei.com
Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
Acked-by: David Hildenbrand (Arm) <david(a)kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs(a)kernel.org>
Cc: Barry Song <baohua(a)kernel.org>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Nanyong Sun <sunnanyong(a)huawei.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Vlastimil Babka <vbabka(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Conflicts:
include/linux/leafops.h
include/linux/swapops.h
[migration entry hasn't been renamed to softleaf entry.]
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
include/linux/swapops.h | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 22af9d8a84ae..c742e778e024 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -205,14 +205,28 @@ static inline unsigned long migration_entry_to_pfn(swp_entry_t entry)
return swp_offset(entry);
}
-static inline struct page *migration_entry_to_page(swp_entry_t entry)
+static inline void migration_entry_sync_page(struct page *head)
{
- struct page *p = pfn_to_page(swp_offset(entry));
+ /*
+ * Ensure we do not race with split, which might alter tail pages into new
+ * head pages and thus result in observing an unlocked page.
+ * This matches the write barrier in __split_huge_page_tail().
+ */
+ smp_rmb();
+
/*
* Any use of migration entries may only occur while the
* corresponding page is locked
*/
- BUG_ON(!PageLocked(compound_head(p)));
+ BUG_ON(!PageLocked(head));
+}
+
+static inline struct page *migration_entry_to_page(swp_entry_t entry)
+{
+ struct page *p = pfn_to_page(swp_offset(entry));
+
+ migration_entry_sync_page(compound_head(p));
+
return p;
}
--
2.43.0
[PATCH OLK-5.10] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
by Jinjiang Tu 31 Mar '26
mainline inclusion
from mainline-v7.0-rc6
commit 4c5e7f0fcd592801c9cc18f29f80fbee84eb8669
category: bugfix
bugzilla: https://gitcode.com/openeuler/kernel/issues/8836
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
On an arm64 server, we found that the folio obtained from a migration entry
isn't locked in softleaf_to_folio(). This issue triggers when mTHP splitting
and zap_nonpresent_ptes() race, and the root cause is a missing memory
barrier in softleaf_to_folio(). The race is as follows:
CPU0                                       CPU1
deferred_split_scan()                      zap_nonpresent_ptes()
  lock folio
  split_folio()
    unmap_folio()
      change ptes to migration entries
    __split_folio_to_order()               softleaf_to_folio()
      set flags (including PG_locked)        folio = pfn_folio(softleaf_to_pfn(entry))
        for tail pages
      smp_wmb()                              VM_WARN_ON_ONCE(!folio_test_locked(folio))
      prep_compound_page() for tail pages
In __split_folio_to_order(), smp_wmb() guarantees that the page flags of the
tail pages are visible before a tail page becomes non-compound. That
smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(), which
is missing. As a result, if zap_nonpresent_ptes() accesses a migration entry
that stores a tail pfn, softleaf_to_folio() may see the updated
compound_head of the tail page before page->flags.
To fix it, add missing smp_rmb() if the softleaf entry is migration entry
in softleaf_to_folio() and softleaf_to_page().
[tujinjiang(a)huawei.com: update function name and comments]
Link: https://lkml.kernel.org/r/20260321075214.3305564-1-tujinjiang@huawei.com
Link: https://lkml.kernel.org/r/20260319012541.4158561-1-tujinjiang@huawei.com
Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
Acked-by: David Hildenbrand (Arm) <david(a)kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs(a)kernel.org>
Cc: Barry Song <baohua(a)kernel.org>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Nanyong Sun <sunnanyong(a)huawei.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Vlastimil Babka <vbabka(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Conflicts:
include/linux/leafops.h
include/linux/swapops.h
mm/filemap.c
[migration entry hasn't been renamed to softleaf entry. Add new helper
migration_entry_to_compound_page().]
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
include/linux/swapops.h | 31 ++++++++++++++++++++++++++++---
mm/filemap.c | 2 +-
2 files changed, 29 insertions(+), 4 deletions(-)
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index e749c4c86b26..ed33367fb6a6 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -194,17 +194,42 @@ static inline unsigned long migration_entry_to_pfn(swp_entry_t entry)
return swp_offset(entry);
}
-static inline struct page *migration_entry_to_page(swp_entry_t entry)
+static inline void migration_entry_sync_page(struct page *head)
{
- struct page *p = pfn_to_page(swp_offset(entry));
+ /*
+ * Ensure we do not race with split, which might alter tail pages into new
+ * head pages and thus result in observing an unlocked page.
+ * This matches the write barrier in __split_huge_page_tail().
+ */
+ smp_rmb();
+
/*
* Any use of migration entries may only occur while the
* corresponding page is locked
*/
- BUG_ON(!PageLocked(compound_head(p)));
+ BUG_ON(!PageLocked(head));
+}
+
+static inline struct page *migration_entry_to_page(swp_entry_t entry)
+{
+ struct page *p = pfn_to_page(swp_offset(entry));
+
+ migration_entry_sync_page(compound_head(p));
+
return p;
}
+static inline struct page *migration_entry_to_compound_page(swp_entry_t entry)
+{
+ struct page *p = pfn_to_page(swp_offset(entry));
+ struct page *head;
+
+ head = compound_head(p);
+ migration_entry_sync_page(head);
+
+ return head;
+}
+
static inline void make_migration_entry_read(swp_entry_t *entry)
{
*entry = swp_entry(SWP_MIGRATION_READ, swp_offset(*entry));
diff --git a/mm/filemap.c b/mm/filemap.c
index 18e304ce6229..c2932db70212 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1334,7 +1334,7 @@ void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
bool delayacct = false;
unsigned long pflags = 0;
wait_queue_head_t *q;
- struct page *page = compound_head(migration_entry_to_page(entry));
+ struct page *page = migration_entry_to_compound_page(entry);
q = page_waitqueue(page);
if (!PageUptodate(page) && PageWorkingset(page)) {
--
2.43.0
[PATCH OLK-6.6] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
by Jinjiang Tu 31 Mar '26
mainline inclusion
from mainline-v7.0-rc6
commit 4c5e7f0fcd592801c9cc18f29f80fbee84eb8669
category: bugfix
bugzilla: https://gitcode.com/openeuler/kernel/issues/8836
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
On an arm64 server, we found that the folio obtained from a migration entry
isn't locked in softleaf_to_folio(). This issue triggers when mTHP splitting
and zap_nonpresent_ptes() race, and the root cause is a missing memory
barrier in softleaf_to_folio(). The race is as follows:
CPU0                                       CPU1
deferred_split_scan()                      zap_nonpresent_ptes()
  lock folio
  split_folio()
    unmap_folio()
      change ptes to migration entries
    __split_folio_to_order()               softleaf_to_folio()
      set flags (including PG_locked)        folio = pfn_folio(softleaf_to_pfn(entry))
        for tail pages
      smp_wmb()                              VM_WARN_ON_ONCE(!folio_test_locked(folio))
      prep_compound_page() for tail pages
In __split_folio_to_order(), smp_wmb() guarantees that the page flags of the
tail pages are visible before a tail page becomes non-compound. That
smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(), which
is missing. As a result, if zap_nonpresent_ptes() accesses a migration entry
that stores a tail pfn, softleaf_to_folio() may see the updated
compound_head of the tail page before page->flags.
This issue will trigger VM_WARN_ON_ONCE() in pfn_swap_entry_folio()
because of the race between folio split and zap_nonpresent_ptes()
leading to a folio incorrectly undergoing modification without a folio
lock being held.
This was a BUG_ON() before commit 93976a20345b ("mm: eliminate further
swapops predicates"), which was merged in v6.19-rc1.
To fix it, add the missing smp_rmb() when the softleaf entry is a
migration entry in softleaf_to_folio() and softleaf_to_page().
[tujinjiang(a)huawei.com: update function name and comments]
Link: https://lkml.kernel.org/r/20260321075214.3305564-1-tujinjiang@huawei.com
Link: https://lkml.kernel.org/r/20260319012541.4158561-1-tujinjiang@huawei.com
Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
Acked-by: David Hildenbrand (Arm) <david(a)kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs(a)kernel.org>
Cc: Barry Song <baohua(a)kernel.org>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Nanyong Sun <sunnanyong(a)huawei.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Vlastimil Babka <vbabka(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Conflicts:
include/linux/leafops.h
include/linux/swapops.h
[migration entry hasn't been renamed to softleaf entry.]
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
include/linux/swapops.h | 26 ++++++++++++++++++--------
1 file changed, 18 insertions(+), 8 deletions(-)
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index b32d696242b6..7bb5937a3f3c 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -500,15 +500,28 @@ static inline int is_userswap_entry(swp_entry_t entry)
}
#endif
-static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+static inline void migration_entry_sync(struct folio *folio)
{
- struct page *p = pfn_to_page(swp_offset_pfn(entry));
+ /*
+ * Ensure we do not race with split, which might alter tail pages into new
+ * folios and thus result in observing an unlocked folio.
+ * This matches the write barrier in __split_folio_to_order().
+ */
+ smp_rmb();
/*
* Any use of migration entries may only occur while the
* corresponding page is locked
*/
- BUG_ON(is_migration_entry(entry) && !PageLocked(p));
+ BUG_ON(!folio_test_locked(folio));
+}
+
+static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+{
+ struct page *p = pfn_to_page(swp_offset_pfn(entry));
+
+ if (is_migration_entry(entry))
+ migration_entry_sync(page_folio(p));
return p;
}
@@ -517,11 +530,8 @@ static inline struct folio *pfn_swap_entry_folio(swp_entry_t entry)
{
struct folio *folio = pfn_folio(swp_offset_pfn(entry));
- /*
- * Any use of migration entries may only occur while the
- * corresponding folio is locked
- */
- BUG_ON(is_migration_entry(entry) && !folio_test_locked(folio));
+ if (is_migration_entry(entry))
+ migration_entry_sync(folio);
return folio;
}
--
2.43.0
[PATCH OLK-6.6 v4 0/2] kvm: arm64: Transition from CPU Type to MIDR Register for Virtualization Feature Detection
by liqiqi 31 Mar '26
Currently, there are two methods for determining whether a chip supports
specific virtualization features:
1. Reading the chip's CPU type from BIOS
2. Reading the value of the MIDR register
The issue with the first method is that each time a new chip is introduced,
the new CPU type must be defined, which leads to poor code portability and
maintainability.
Therefore, the second method has been adopted to replace the first. This
approach eliminates the dependency on CPU type by using the MIDR register.
liqiqi (2):
kvm: arm64: Add MIDR definitions and use MIDR to determine whether
features are supported
kvm: arm64: Remove cpu_type definition and its related interfaces
arch/arm64/include/asm/cache.h | 2 +-
arch/arm64/include/asm/cputype.h | 8 +-
arch/arm64/kernel/cpu_errata.c | 4 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/proton-pack.c | 4 +-
arch/arm64/kvm/arm.c | 1 -
arch/arm64/kvm/hisilicon/hisi_virt.c | 111 +++--------------------
arch/arm64/kvm/hisilicon/hisi_virt.h | 12 ---
drivers/perf/hisilicon/hisi_uncore_pmu.c | 2 +-
tools/arch/arm64/include/asm/cputype.h | 4 +-
10 files changed, 26 insertions(+), 124 deletions(-)
--
2.43.0
From: Omar Sandoval <osandov(a)fb.com>
mainline inclusion
from mainline-v6.7-rc1
commit f63a5b3769ad7659da4c0420751d78958ab97675
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14036
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
We've been seeing XFS errors like the following:
XFS: Internal error i != 1 at line 3526 of file fs/xfs/libxfs/xfs_btree.c. Caller xfs_btree_insert+0x1ec/0x280
...
Call Trace:
xfs_corruption_error+0x94/0xa0
xfs_btree_insert+0x221/0x280
xfs_alloc_fixup_trees+0x104/0x3e0
xfs_alloc_ag_vextent_size+0x667/0x820
xfs_alloc_fix_freelist+0x5d9/0x750
xfs_free_extent_fix_freelist+0x65/0xa0
__xfs_free_extent+0x57/0x180
...
This is the XFS_IS_CORRUPT() check in xfs_btree_insert() when
xfs_btree_insrec() fails.
After converting this into a panic and dissecting the core dump, I found
that xfs_btree_insrec() is failing because it's trying to split a leaf
node in the cntbt when the AG free list is empty. In particular, it's
failing to get a block from the AGFL _while trying to refill the AGFL_.
If a single operation splits every level of the bnobt and the cntbt (and
the rmapbt if it is enabled) at once, the free list will be empty. Then,
when the next operation tries to refill the free list, it allocates
space. If the allocation does not use a full extent, it will need to
insert records for the remaining space in the bnobt and cntbt. And if
those new records go in full leaves, the leaves (and potentially more
nodes up to the old root) need to be split.
Fix it by accounting for the additional splits that may be required to
refill the free list in the calculation for the minimum free list size.
P.S. As far as I can tell, this bug has existed for a long time -- maybe
back to xfs-history commit afdf80ae7405 ("Add XFS_AG_MAXLEVELS macros
...") in April 1994! It requires a very unlucky sequence of events, and
in fact we didn't hit it until a particular sparse mmap workload updated
from 5.12 to 5.19. But this bug existed in 5.12, so it must've been
exposed by some other change in allocation or writeback patterns. It's
also much less likely to be hit with the rmapbt enabled, since that
increases the minimum free list size and is unlikely to split at the
same time as the bnobt and cntbt.
Reviewed-by: "Darrick J. Wong" <djwong(a)kernel.org>
Reviewed-by: Dave Chinner <dchinner(a)redhat.com>
Signed-off-by: Omar Sandoval <osandov(a)fb.com>
Signed-off-by: Chandan Babu R <chandanbabu(a)kernel.org>
Conflicts:
fs/xfs/libxfs/xfs_alloc.c
[Context conflicts]
Signed-off-by: Long Li <leo.lilong(a)huawei.com>
---
fs/xfs/libxfs/xfs_alloc.c | 27 ++++++++++++++++++++++++---
1 file changed, 24 insertions(+), 3 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index 23c0e666d2f4..15dce9276d45 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -2328,16 +2328,37 @@ xfs_alloc_min_freelist(
ASSERT(mp->m_ag_maxlevels > 0);
+ /*
+ * For a btree shorter than the maximum height, the worst case is that
+ * every level gets split and a new level is added, then while inserting
+ * another entry to refill the AGFL, every level under the old root gets
+ * split again. This is:
+ *
+ * (full height split reservation) + (AGFL refill split height)
+ * = (current height + 1) + (current height - 1)
+ * = (new height) + (new height - 2)
+ * = 2 * new height - 2
+ *
+ * For a btree of maximum height, the worst case is that every level
+ * under the root gets split, then while inserting another entry to
+ * refill the AGFL, every level under the root gets split again. This is
+ * also:
+ *
+ * 2 * (current height - 1)
+ * = 2 * (new height - 1)
+ * = 2 * new height - 2
+ */
+
/* space needed by-bno freespace btree */
min_free = min_t(unsigned int, levels[XFS_BTNUM_BNOi] + 1,
- mp->m_ag_maxlevels);
+ mp->m_ag_maxlevels) * 2 - 2;
/* space needed by-size freespace btree */
min_free += min_t(unsigned int, levels[XFS_BTNUM_CNTi] + 1,
- mp->m_ag_maxlevels);
+ mp->m_ag_maxlevels) * 2 - 2;
/* space needed reverse mapping used space btree */
if (xfs_has_rmapbt(mp))
min_free += min_t(unsigned int, levels[XFS_BTNUM_RMAPi] + 1,
- mp->m_rmap_maxlevels);
+ mp->m_rmap_maxlevels) * 2 - 2;
return min_free;
}
--
2.39.2
[PATCH OLK-6.6 v3 0/3] kvm: arm64: Transition from CPU Type to MIDR Register for Virtualization Feature Detection
by liqiqi 31 Mar '26
Currently, there are two methods for determining whether a chip supports
specific virtualization features:
1. Reading the chip's CPU type from BIOS
2. Reading the value of the MIDR register
The issue with the first method is that each time a new chip is introduced,
the new CPU type must be defined, which leads to poor code portability and
maintainability.
Therefore, the second method has been adopted to replace the first. This
approach eliminates the dependency on CPU type by using the MIDR register.
liqiqi (3):
kvm: arm64: Add HIP08, HIP10, HIP10C MIDR definitions
kvm: arm64: use MIDR to determine whether features are supported
kvm: arm64: Remove cpu_type definition and its related interfaces
arch/arm64/include/asm/cache.h | 2 +-
arch/arm64/include/asm/cputype.h | 8 +-
arch/arm64/kernel/cpu_errata.c | 4 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/proton-pack.c | 4 +-
arch/arm64/kvm/arm.c | 1 -
arch/arm64/kvm/hisilicon/hisi_virt.c | 111 +++--------------------
arch/arm64/kvm/hisilicon/hisi_virt.h | 12 ---
drivers/perf/hisilicon/hisi_uncore_pmu.c | 2 +-
tools/arch/arm64/include/asm/cputype.h | 4 +-
10 files changed, 26 insertions(+), 124 deletions(-)
--
2.43.0
From: Steven Rostedt <rostedt(a)goodmis.org>
mainline inclusion
from mainline-v7.0-rc4
commit 755a648e78f12574482d4698d877375793867fa1
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8837
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The trace_clock_jiffies() function that handles the "uptime" clock for
tracing calls jiffies_64_to_clock_t(). This causes the function tracer to
constantly recurse when the tracing clock is set to "uptime". Mark it
notrace to prevent unnecessary recursion when using the "uptime" clock.
Fixes: 58d4e21e50ff3 ("tracing: Fix wraparound problems in "uptime" trace clock")
Signed-off-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
Signed-off-by: Thomas Gleixner <tglx(a)kernel.org>
Link: https://patch.msgid.link/20260306212403.72270bb2@robin
Signed-off-by: Liu Kai <liukai284(a)huawei.com>
---
kernel/time/time.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/time/time.c b/kernel/time/time.c
index 1ad88e97b4eb..da7e8a02a096 100644
--- a/kernel/time/time.c
+++ b/kernel/time/time.c
@@ -702,7 +702,7 @@ EXPORT_SYMBOL(clock_t_to_jiffies);
*
* Return: jiffies_64 value converted to 64-bit "clock_t" (CLOCKS_PER_SEC)
*/
-u64 jiffies_64_to_clock_t(u64 x)
+notrace u64 jiffies_64_to_clock_t(u64 x)
{
#if (TICK_NSEC % (NSEC_PER_SEC / USER_HZ)) == 0
# if HZ < USER_HZ
--
2.34.1
[PATCH OLK-6.6 0/2] arm-smmu-v3: add HIP09A, HIP09B, HIP10C, HIP10CA for 162100602 errata
by Zeng Heng 31 Mar '26
Qinxin Xia (2):
arm-smmu-v3: add HIP09A, HIP09B, HIP10C, HIP10CA for 162100602 errata
ACPI/IORT: Add PMCG platform information for 162001900
drivers/acpi/arm64/iort.c | 4 ++++
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 6 ++++++
2 files changed, 10 insertions(+)
--
2.25.1
31 Mar '26
From: Steven Rostedt <rostedt(a)goodmis.org>
mainline inclusion
from mainline-v7.0-rc4
commit 755a648e78f12574482d4698d877375793867fa1
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8837
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The trace_clock_jiffies() function that handles the "uptime" clock for
tracing calls jiffies_64_to_clock_t(). This causes the function tracer to
constantly recurse when the tracing clock is set to "uptime". Mark it
notrace to prevent unnecessary recursion when using the "uptime" clock.
Fixes: 58d4e21e50ff3 ("tracing: Fix wraparound problems in "uptime" trace clock")
Signed-off-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
Signed-off-by: Thomas Gleixner <tglx(a)kernel.org>
Link: https://patch.msgid.link/20260306212403.72270bb2@robin
Conflicts:
kernel/time/time.c
[Context conflict]
Signed-off-by: Liu Kai <liukai284(a)huawei.com>
---
kernel/time/time.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/time/time.c b/kernel/time/time.c
index 3985b2b32d08..22551ee392e4 100644
--- a/kernel/time/time.c
+++ b/kernel/time/time.c
@@ -649,7 +649,7 @@ unsigned long clock_t_to_jiffies(unsigned long x)
}
EXPORT_SYMBOL(clock_t_to_jiffies);
-u64 jiffies_64_to_clock_t(u64 x)
+notrace u64 jiffies_64_to_clock_t(u64 x)
{
#if (TICK_NSEC % (NSEC_PER_SEC / USER_HZ)) == 0
# if HZ < USER_HZ
--
2.34.1
[PATCH OLK-6.6 v2 0/3] kvm: arm64: Transition from CPU Type to MIDR Register for Virtualization Feature Detection
by liqiqi 31 Mar '26
Currently, there are two methods for determining whether a chip supports
specific virtualization features:
1. Reading the chip's CPU type from BIOS
2. Reading the value of the MIDR register
The issue with the first method is that each time a new chip is introduced,
the new CPU type must be defined, which leads to poor code portability and
maintainability.
Therefore, the second method has been adopted to replace the first. This
approach eliminates the dependency on CPU type by using the MIDR register
and removes the need for defining CPU types and their related interfaces.
liqiqi (3):
kvm: arm64: Add HIP08, HIP10, HIP10C MIDR definitions
kvm: arm64: use MIDR to determine whether features are supported
kvm: arm64: Remove cpu_type definition and its related interfaces
arch/arm64/include/asm/cache.h | 2 +-
arch/arm64/include/asm/cputype.h | 8 +-
arch/arm64/kernel/cpu_errata.c | 4 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/proton-pack.c | 4 +-
arch/arm64/kvm/arm.c | 1 -
arch/arm64/kvm/hisilicon/hisi_virt.c | 111 +++----------------------
arch/arm64/kvm/hisilicon/hisi_virt.h | 12 ---
tools/arch/arm64/include/asm/cputype.h | 4 +-
9 files changed, 25 insertions(+), 123 deletions(-)
--
2.43.0
[PATCH OLK-6.6 0/2] arm-smmu-v3: add HIP09A, HIP09B, HIP10C, HIP10CA for 162100602 errata
by Zeng Heng 31 Mar '26
Qinxin Xia (2):
arm-smmu-v3: add HIP09A, HIP09B, HIP10C, HIP10CA for 162100602 errata
ACPI/IORT: Add PMCG platform information for 162001900
drivers/acpi/arm64/iort.c | 4 ++++
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 6 ++++++
2 files changed, 10 insertions(+)
--
2.25.1
[PATCH openEuler-1.0-LTS] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
by Jinjiang Tu 31 Mar '26
mainline inclusion
from mainline-v7.0-rc6
commit 4c5e7f0fcd592801c9cc18f29f80fbee84eb8669
category: bugfix
bugzilla: https://hulk.rnd.huawei.com/issue/info/45383
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
On an arm64 server, we found that the folio obtained from a migration
entry isn't locked in softleaf_to_folio(). This issue triggers when mTHP
splitting races with zap_nonpresent_ptes(), and the root cause is a
missing memory barrier in softleaf_to_folio(). The race is as follows:

CPU0                                       CPU1
deferred_split_scan()                      zap_nonpresent_ptes()
 lock folio
 split_folio()
  unmap_folio()
   change ptes to migration entries
  __split_folio_to_order()                 softleaf_to_folio()
   set flags (including PG_locked)          folio = pfn_folio(softleaf_to_pfn(entry))
   for tail pages
   smp_wmb()                                VM_WARN_ON_ONCE(!folio_test_locked(folio))
   prep_compound_page() for tail pages

In __split_folio_to_order(), smp_wmb() guarantees that the page flags of
the tail pages are visible before the tail page becomes non-compound.
That smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(),
which is missing. As a result, if zap_nonpresent_ptes() accesses a
migration entry that stores a tail pfn, softleaf_to_folio() may see the
updated compound_head of the tail page before page->flags.
This issue will trigger VM_WARN_ON_ONCE() in pfn_swap_entry_folio()
because of the race between folio split and zap_nonpresent_ptes()
leading to a folio incorrectly undergoing modification without a folio
lock being held.
This was a BUG_ON() before commit 93976a20345b ("mm: eliminate further
swapops predicates"), which was merged in v6.19-rc1.
To fix it, add the missing smp_rmb() when the softleaf entry is a
migration entry in softleaf_to_folio() and softleaf_to_page().
[tujinjiang(a)huawei.com: update function name and comments]
Link: https://lkml.kernel.org/r/20260321075214.3305564-1-tujinjiang@huawei.com
Link: https://lkml.kernel.org/r/20260319012541.4158561-1-tujinjiang@huawei.com
Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
Acked-by: David Hildenbrand (Arm) <david(a)kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs(a)kernel.org>
Cc: Barry Song <baohua(a)kernel.org>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Nanyong Sun <sunnanyong(a)huawei.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Vlastimil Babka <vbabka(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Conflicts:
include/linux/leafops.h
include/linux/swapops.h
[migration entry hasn't been renamed to softleaf entry.]
Signed-off-by: Jinjiang Tu <tujinjiang(a)huawei.com>
---
include/linux/swapops.h | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 22af9d8a84ae..c742e778e024 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -205,14 +205,28 @@ static inline unsigned long migration_entry_to_pfn(swp_entry_t entry)
return swp_offset(entry);
}
-static inline struct page *migration_entry_to_page(swp_entry_t entry)
+static inline void migration_entry_sync_page(struct page *head)
{
- struct page *p = pfn_to_page(swp_offset(entry));
+ /*
+ * Ensure we do not race with split, which might alter tail pages into new
+ * head pages and thus result in observing an unlocked page.
+ * This matches the write barrier in __split_huge_page_tail().
+ */
+ smp_rmb();
+
/*
* Any use of migration entries may only occur while the
* corresponding page is locked
*/
- BUG_ON(!PageLocked(compound_head(p)));
+ BUG_ON(!PageLocked(head));
+}
+
+static inline struct page *migration_entry_to_page(swp_entry_t entry)
+{
+ struct page *p = pfn_to_page(swp_offset(entry));
+
+ migration_entry_sync_page(compound_head(p));
+
return p;
}
--
2.43.0
31 Mar '26
From: Thomas Gleixner <tglx(a)kernel.org>
mainline inclusion
from mainline-v7.0-rc3
commit 4b3d54a85bd37ebf2d9836f0d0de775c0ff21af9
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13968
CVE: CVE-2026-23313
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
----------------------------------------------------------------------
Using get_cpu() in the tracepoint assignment causes an obvious preempt
count leak because nothing invokes put_cpu() to undo it:
softirq: huh, entered softirq 3 NET_RX with preempt_count 00000100, exited with 00000101?
This clearly has seen a lot of testing in the last 3+ years...
Use smp_processor_id() instead.
Fixes: 6d4d584a7ea8 ("i40e: Add i40e_napi_poll tracepoint")
Signed-off-by: Thomas Gleixner <tglx(a)kernel.org>
Cc: Tony Nguyen <anthony.l.nguyen(a)intel.com>
Cc: Przemek Kitszel <przemyslaw.kitszel(a)intel.com>
Cc: intel-wired-lan(a)lists.osuosl.org
Cc: netdev(a)vger.kernel.org
Reviewed-by: Joe Damato <joe(a)dama.to>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov(a)intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen(a)intel.com>
Conflicts:
drivers/net/ethernet/intel/i40e/i40e_trace.h
[Fix conflicts because of context diff.]
Signed-off-by: Luo Gengkun <luogengkun2(a)huawei.com>
---
drivers/net/ethernet/intel/i40e/i40e_trace.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_trace.h b/drivers/net/ethernet/intel/i40e/i40e_trace.h
index 33b4e30f5e00..9b735a9e2114 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_trace.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_trace.h
@@ -88,7 +88,7 @@ TRACE_EVENT(i40e_napi_poll,
__entry->rx_clean_complete = rx_clean_complete;
__entry->tx_clean_complete = tx_clean_complete;
__entry->irq_num = q->irq_num;
- __entry->curr_cpu = get_cpu();
+ __entry->curr_cpu = smp_processor_id();
__assign_str(qname, q->name);
__assign_str(dev_name, napi->dev ? napi->dev->name : NO_DEV);
__assign_bitmask(irq_affinity, cpumask_bits(&q->affinity_mask),
--
2.34.1
[PATCH OLK-6.6 00/14] ext4: fix generic/363 failure in iomap buffered I/O mode
by Yongjian Sun 30 Mar '26
Brian Foster (2):
iomap: fix handling of dirty folios over unwritten extents
iomap: make zero range flush conditional on unwritten mappings
Zhang Yi (12):
ext4: rename and extend ext4_block_truncate_page()
ext4: factor out journalled block zeroing range
ext4: rename ext4_block_zero_page_range() to ext4_block_zero_range()
ext4: move ordered data handling out of ext4_block_do_zero_range()
ext4: remove handle parameters from zero partial block functions
ext4: pass allocate range as loff_t to ext4_alloc_file_blocks()
ext4: move zero partial block range functions out of active handle
ext4: ensure zeroed partial blocks are persisted in SYNC mode
ext4: unify SYNC mode checks in fallocate paths
ext4: remove ctime/mtime update from ext4_alloc_file_blocks()
ext4: move pagecache_isize_extended() out of active handle
ext4: zero post EOF partial block before iomap appending write
fs/ext4/ext4.h | 5 +-
fs/ext4/extents.c | 137 ++++++++++++-----------
fs/ext4/file.c | 17 +++
fs/ext4/inode.c | 244 +++++++++++++++++++++++------------------
fs/iomap/buffered-io.c | 63 ++++++++++-
fs/xfs/xfs_iops.c | 10 --
6 files changed, 291 insertions(+), 185 deletions(-)
--
2.39.2
Kui-Feng Lee (1):
bpf: export bpf_link_inc_not_zero.
Lang Xu (1):
bpf: Fix a UAF issue in bpf_trampoline_link_cgroup_shim
include/linux/bpf.h | 6 ++++++
kernel/bpf/syscall.c | 3 ++-
kernel/bpf/trampoline.c | 4 +---
3 files changed, 9 insertions(+), 4 deletions(-)
--
2.34.1
From: Kohei Enju <kohei(a)enjuk.jp>
stable inclusion
from stable-v6.6.130
commit 8a95fb9df1105b1618872c2846a6c01e3ba20b45
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13922
CVE: CVE-2026-23359
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit b7bf516c3ecd9a2aae2dc2635178ab87b734fef1 ]
get_upper_ifindexes() iterates over all upper devices and writes their
indices into an array without checking bounds.
Also the callers assume that the max number of upper devices is
MAX_NEST_DEV and allocate excluded_devices[1+MAX_NEST_DEV] on the stack,
but that assumption is not correct and the number of upper devices could
be larger than MAX_NEST_DEV (e.g., many macvlans), causing a
stack-out-of-bounds write.
Add a max parameter to get_upper_ifindexes() to avoid the issue.
When there are too many upper devices, return -EOVERFLOW and abort the
redirect.
To reproduce, create more than MAX_NEST_DEV(8) macvlans on a device with
an XDP program attached using BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS.
Then send a packet to the device to trigger the XDP redirect path.
Reported-by: syzbot+10cc7f13760b31bd2e61(a)syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/698c4ce3.050a0220.340abe.000b.GAE@google.com/T/
Fixes: aeea1b86f936 ("bpf, devmap: Exclude XDP broadcast to master device")
Reviewed-by: Toke Høiland-Jørgensen <toke(a)redhat.com>
Signed-off-by: Kohei Enju <kohei(a)enjuk.jp>
Link: https://lore.kernel.org/r/20260225053506.4738-1-kohei@enjuk.jp
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
kernel/bpf/devmap.c | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 5f2356b47b2d..3bdec239be61 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -577,18 +577,22 @@ static inline bool is_ifindex_excluded(int *excluded, int num_excluded, int ifin
}
/* Get ifindex of each upper device. 'indexes' must be able to hold at
- * least MAX_NEST_DEV elements.
- * Returns the number of ifindexes added.
+ * least 'max' elements.
+ * Returns the number of ifindexes added, or -EOVERFLOW if there are too
+ * many upper devices.
*/
-static int get_upper_ifindexes(struct net_device *dev, int *indexes)
+static int get_upper_ifindexes(struct net_device *dev, int *indexes, int max)
{
struct net_device *upper;
struct list_head *iter;
int n = 0;
netdev_for_each_upper_dev_rcu(dev, upper, iter) {
+ if (n >= max)
+ return -EOVERFLOW;
indexes[n++] = upper->ifindex;
}
+
return n;
}
@@ -604,7 +608,11 @@ int dev_map_enqueue_multi(struct xdp_frame *xdpf, struct net_device *dev_rx,
int err;
if (exclude_ingress) {
- num_excluded = get_upper_ifindexes(dev_rx, excluded_devices);
+ num_excluded = get_upper_ifindexes(dev_rx, excluded_devices,
+ ARRAY_SIZE(excluded_devices) - 1);
+ if (num_excluded < 0)
+ return num_excluded;
+
excluded_devices[num_excluded++] = dev_rx->ifindex;
}
@@ -722,7 +730,11 @@ int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb,
int err;
if (exclude_ingress) {
- num_excluded = get_upper_ifindexes(dev, excluded_devices);
+ num_excluded = get_upper_ifindexes(dev, excluded_devices,
+ ARRAY_SIZE(excluded_devices) - 1);
+ if (num_excluded < 0)
+ return num_excluded;
+
excluded_devices[num_excluded++] = dev->ifindex;
}
--
2.34.1
30 Mar '26
From: Lang Xu <xulang(a)uniontech.com>
stable inclusion
from stable-v6.6.130
commit 9b02c5c4147f8af8ed783c8deb5df927a55c3951
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13962
CVE: CVE-2026-23319
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 56145d237385ca0e7ca9ff7b226aaf2eb8ef368b ]
The root cause of this bug is that when 'bpf_link_put' reduces the
refcount of 'shim_link->link.link' to zero, the resource is considered
released but may still be referenced via 'tr->progs_hlist' in
'cgroup_shim_find'. The actual cleanup of 'tr->progs_hlist' in
'bpf_shim_tramp_link_release' is deferred. During this window, another
process can cause a use-after-free via 'bpf_trampoline_link_cgroup_shim'.
Based on Martin KaFai Lau's suggestions, I have created a simple patch.
To fix this:
Add an atomic non-zero check in 'bpf_trampoline_link_cgroup_shim'.
Only increment the refcount if it is not already zero.
Testing:
I verified the fix by adding a delay in
'bpf_shim_tramp_link_release' to make the bug easier to trigger:
static void bpf_shim_tramp_link_release(struct bpf_link *link)
{
/* ... */
if (!shim_link->trampoline)
return;
+ msleep(100);
WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link,
shim_link->trampoline, NULL));
bpf_trampoline_put(shim_link->trampoline);
}
Before the patch, running a PoC easily reproduced the crash(almost 100%)
with a call trace similar to KaiyanM's report.
After the patch, the bug no longer occurs even after millions of
iterations.
Fixes: 69fd337a975c ("bpf: per-cgroup lsm flavor")
Reported-by: Kaiyan Mei <M202472210(a)hust.edu.cn>
Closes: https://lore.kernel.org/bpf/3c4ebb0b.46ff8.19abab8abe2.Coremail.kaiyanm@hus…
Signed-off-by: Lang Xu <xulang(a)uniontech.com>
Signed-off-by: Martin KaFai Lau <martin.lau(a)kernel.org>
Link: https://patch.msgid.link/279EEE1BA1DDB49D+20260303095217.34436-1-xulang@uni…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
kernel/bpf/trampoline.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 08325a65835b..bb842decab0d 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -733,10 +733,8 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
mutex_lock(&tr->mutex);
shim_link = cgroup_shim_find(tr, bpf_func);
- if (shim_link) {
+ if (shim_link && !IS_ERR(bpf_link_inc_not_zero(&shim_link->link.link))) {
/* Reusing existing shim attached by the other program. */
- bpf_link_inc(&shim_link->link.link);
-
mutex_unlock(&tr->mutex);
bpf_trampoline_put(tr); /* bpf_trampoline_get above */
return 0;
--
2.34.1
From: Kumar Kartikeya Dwivedi <memxor(a)gmail.com>
mainline inclusion
from mainline-v7.0-rc5
commit 146bd2a87a65aa407bb17fac70d8d583d19aba06
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8835
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
Gregory reported in [0] that the global_map_resize test when run in
repeatedly ends up failing during program load. This stems from the fact
that BTF reference has not dropped to zero after the previous run's
module is unloaded, and the older module's BTF is still discoverable and
visible. Later, in libbpf, load_module_btfs() will find the ID for this
stale BTF, open its fd, and then it will be used during program load
where later steps taking module reference using btf_try_get_module()
fail since the underlying module for the BTF is gone.
Logically, once a module is unloaded, its associated BTF artifacts
should become hidden. The BTF object inside the kernel may still remain
alive as long as its reference counts are held, but it should no longer
be discoverable.
To fix this, let us call btf_free_id() from the MODULE_STATE_GOING case
for the module unload to free the BTF associated IDR entry, and disable
its discovery once module unload returns to user space. If a race
happens during unload, the outcome is non-deterministic anyway. However,
user space should be able to rely on the guarantee that once it has
synchronously established a successful module unload, no more stale
artifacts associated with this module can be obtained subsequently.
Note that we must be careful to not invoke btf_free_id() in btf_put()
when btf_is_module() is true now. There could be a window where the
module unload drops a non-terminal reference, frees the IDR, but the
same ID gets reused and the second unconditional btf_free_id() ends up
releasing an unrelated entry.
To avoid special-casing btf_is_module(), set btf->id to zero to make
btf_free_id() idempotent, such that we can unconditionally invoke it
from btf_put(), and also from the MODULE_STATE_GOING case. Since zero is
an invalid ID, the idr_remove() is a no-op.
Note that we can be sure that by the time we reach final btf_put() for
btf_is_module() case, the btf_free_id() is already done, since the
module itself holds the BTF reference, and it will call this function
for the BTF before dropping its own reference.
[0]: https://lore.kernel.org/bpf/cover.1773170190.git.grbell@redhat.com
Fixes: 36e68442d1af ("bpf: Load and verify kernel module BTFs")
Acked-by: Martin KaFai Lau <martin.lau(a)kernel.org>
Suggested-by: Martin KaFai Lau <martin.lau(a)kernel.org>
Reported-by: Gregory Bell <grbell(a)redhat.com>
Reviewed-by: Emil Tsalapatis <emil(a)etsalapatis.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor(a)gmail.com>
Link: https://lore.kernel.org/r/20260312205307.1346991-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
kernel/bpf/btf.c | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 7c888ee0ccc7..5d1413e95185 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -1640,7 +1640,16 @@ static void btf_free_id(struct btf *btf)
* of the _bh() version.
*/
spin_lock_irqsave(&btf_idr_lock, flags);
- idr_remove(&btf_idr, btf->id);
+ if (btf->id) {
+ idr_remove(&btf_idr, btf->id);
+ /*
+ * Clear the id here to make this function idempotent, since it will get
+ * called a couple of times for module BTFs: on module unload, and then
+ * the final btf_put(). btf_alloc_id() starts IDs with 1, so we can use
+ * 0 as sentinel value.
+ */
+ WRITE_ONCE(btf->id, 0);
+ }
spin_unlock_irqrestore(&btf_idr_lock, flags);
}
@@ -7408,7 +7417,7 @@ static void bpf_btf_show_fdinfo(struct seq_file *m, struct file *filp)
{
const struct btf *btf = filp->private_data;
- seq_printf(m, "btf_id:\t%u\n", btf->id);
+ seq_printf(m, "btf_id:\t%u\n", READ_ONCE(btf->id));
}
#endif
@@ -7500,7 +7509,7 @@ int btf_get_info_by_fd(const struct btf *btf,
if (copy_from_user(&info, uinfo, info_copy))
return -EFAULT;
- info.id = btf->id;
+ info.id = READ_ONCE(btf->id);
ubtf = u64_to_user_ptr(info.btf);
btf_copy = min_t(u32, btf->data_size, info.btf_size);
if (copy_to_user(ubtf, btf->data, btf_copy))
@@ -7563,7 +7572,7 @@ int btf_get_fd_by_id(u32 id)
u32 btf_obj_id(const struct btf *btf)
{
- return btf->id;
+ return READ_ONCE(btf->id);
}
bool btf_is_kernel(const struct btf *btf)
@@ -7695,6 +7704,13 @@ static int btf_module_notify(struct notifier_block *nb, unsigned long op,
if (btf_mod->module != module)
continue;
+ /*
+ * For modules, we do the freeing of BTF IDR as soon as
+ * module goes away to disable BTF discovery, since the
+ * btf_try_get_module() on such BTFs will fail. This may
+ * be called again on btf_put(), but it's ok to do so.
+ */
+ btf_free_id(btf_mod->btf);
list_del(&btf_mod->list);
if (btf_mod->sysfs_attr)
sysfs_remove_bin_file(btf_kobj, btf_mod->sysfs_attr);
--
2.34.1
From: Mathy Vanhoef <Mathy.Vanhoef(a)kuleuven.be>
stable inclusion
from stable-v6.6.99
commit ec6392061de6681148b63ee6c8744da833498cdd
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/7758
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 737bb912ebbe4571195c56eba557c4d7315b26fb upstream.
This patch is a mitigation to prevent the A-MSDU spoofing vulnerability
for mesh networks. The initial update to the IEEE 802.11 standard, in
response to the FragAttacks, missed this case (CVE-2025-27558). It can
be considered a variant of CVE-2020-24588 but for mesh networks.
This patch tries to detect if a standard MSDU was turned into an A-MSDU
by an adversary. This is done by parsing a received A-MSDU as a standard
MSDU, calculating the length of the Mesh Control header, and seeing if
the 6 bytes after this header equal the start of an rfc1042 header. If
equal, this is a strong indication of an ongoing attack attempt.
This defense was tested with mac80211_hwsim against a mesh network that
uses an empty Mesh Address Extension field, i.e., when four addresses
are used, and when using a 12-byte Mesh Address Extension field, i.e.,
when six addresses are used. Functionality of normal MSDUs and A-MSDUs
was also tested, and confirmed working, when using both an empty and
12-byte Mesh Address Extension field.
It was also tested with mac80211_hwsim that A-MSDU attacks in non-mesh
networks keep being detected and prevented.
Note that the vulnerability being patched, and the defense being
implemented, was also discussed in the following paper and in the
following IEEE 802.11 presentation:
https://papers.mathyvanhoef.com/wisec2025.pdf
https://mentor.ieee.org/802.11/dcn/25/11-25-0949-00-000m-a-msdu-mesh-spoof-…
Cc: stable(a)vger.kernel.org
Signed-off-by: Mathy Vanhoef <Mathy.Vanhoef(a)kuleuven.be>
Link: https://patch.msgid.link/20250616004635.224344-1-Mathy.Vanhoef@kuleuven.be
Signed-off-by: Johannes Berg <johannes.berg(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Wang Hai <wanghai38(a)huawei.com>
---
net/wireless/util.c | 51 +++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 49 insertions(+), 2 deletions(-)
diff --git a/net/wireless/util.c b/net/wireless/util.c
index 4e81210e5b24..4af280c1a6d4 100644
--- a/net/wireless/util.c
+++ b/net/wireless/util.c
@@ -623,6 +623,51 @@ __ieee80211_amsdu_copy(struct sk_buff *skb, unsigned int hlen,
return frame;
}
+/*
+ * Detects if an MSDU frame was maliciously converted into an A-MSDU
+ * frame by an adversary. This is done by parsing the received frame
+ * as if it were a regular MSDU, even though the A-MSDU flag is set.
+ *
+ * For non-mesh interfaces, detection involves checking whether the
+ * payload, when interpreted as an MSDU, begins with a valid RFC1042
+ * header. This is done by comparing the A-MSDU subheader's destination
+ * address to the start of the RFC1042 header.
+ *
+ * For mesh interfaces, the MSDU includes a 6-byte Mesh Control field
+ * and an optional variable-length Mesh Address Extension field before
+ * the RFC1042 header. The position of the RFC1042 header must therefore
+ * be calculated based on the mesh header length.
+ *
+ * Since this function intentionally parses an A-MSDU frame as an MSDU,
+ * it only assumes that the A-MSDU subframe header is present, and
+ * beyond this it performs its own bounds checks under the assumption
+ * that the frame is instead parsed as a non-aggregated MSDU.
+ */
+static bool
+is_amsdu_aggregation_attack(struct ethhdr *eth, struct sk_buff *skb,
+ enum nl80211_iftype iftype)
+{
+ int offset;
+
+ /* Non-mesh case can be directly compared */
+ if (iftype != NL80211_IFTYPE_MESH_POINT)
+ return ether_addr_equal(eth->h_dest, rfc1042_header);
+
+ offset = __ieee80211_get_mesh_hdrlen(eth->h_dest[0]);
+ if (offset == 6) {
+ /* Mesh case with empty address extension field */
+ return ether_addr_equal(eth->h_source, rfc1042_header);
+ } else if (offset + ETH_ALEN <= skb->len) {
+ /* Mesh case with non-empty address extension field */
+ u8 temp[ETH_ALEN];
+
+ skb_copy_bits(skb, offset, temp, ETH_ALEN);
+ return ether_addr_equal(temp, rfc1042_header);
+ }
+
+ return false;
+}
+
void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list,
const u8 *addr, enum nl80211_iftype iftype,
const unsigned int extra_headroom,
@@ -655,8 +700,10 @@ void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list,
/* the last MSDU has no padding */
if (subframe_len > remaining)
goto purge;
- /* mitigate A-MSDU aggregation injection attacks */
- if (ether_addr_equal(eth.h_dest, rfc1042_header))
+ /* mitigate A-MSDU aggregation injection attacks, to be
+ * checked when processing first subframe (offset == 0).
+ */
+ if (offset == 0 && is_amsdu_aggregation_attack(&eth, skb, iftype))
goto purge;
offset += sizeof(struct ethhdr);
--
2.22.0
From: Mathy Vanhoef <Mathy.Vanhoef(a)kuleuven.be>
stable inclusion
from stable-v6.6.99
commit ec6392061de6681148b63ee6c8744da833498cdd
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/7758
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 737bb912ebbe4571195c56eba557c4d7315b26fb upstream.
This patch is a mitigation to prevent the A-MSDU spoofing vulnerability
for mesh networks. The initial update to the IEEE 802.11 standard, in
response to the FragAttacks, missed this case (CVE-2025-27558). It can
be considered a variant of CVE-2020-24588 but for mesh networks.
This patch tries to detect if a standard MSDU was turned into an A-MSDU
by an adversary. This is done by parsing a received A-MSDU as a standard
MSDU, calculating the length of the Mesh Control header, and seeing if
the 6 bytes after this header equal the start of an rfc1042 header. If
equal, this is a strong indication of an ongoing attack attempt.
This defense was tested with mac80211_hwsim against a mesh network that
uses an empty Mesh Address Extension field, i.e., when four addresses
are used, and when using a 12-byte Mesh Address Extension field, i.e.,
when six addresses are used. Functionality of normal MSDUs and A-MSDUs
was also tested, and confirmed working, when using both an empty and
12-byte Mesh Address Extension field.
It was also tested with mac80211_hwsim that A-MSDU attacks in non-mesh
networks keep being detected and prevented.
Note that the vulnerability being patched, and the defense being
implemented, was also discussed in the following paper and in the
following IEEE 802.11 presentation:
https://papers.mathyvanhoef.com/wisec2025.pdf
https://mentor.ieee.org/802.11/dcn/25/11-25-0949-00-000m-a-msdu-mesh-spoof-…
Cc: stable(a)vger.kernel.org
Signed-off-by: Mathy Vanhoef <Mathy.Vanhoef(a)kuleuven.be>
Link: https://patch.msgid.link/20250616004635.224344-1-Mathy.Vanhoef@kuleuven.be
Signed-off-by: Johannes Berg <johannes.berg(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Wang Hai <wanghai38(a)huawei.com>
---
net/wireless/util.c | 51 +++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 49 insertions(+), 2 deletions(-)
diff --git a/net/wireless/util.c b/net/wireless/util.c
index 4e81210e5b24..dcca1b882455 100644
--- a/net/wireless/util.c
+++ b/net/wireless/util.c
@@ -623,6 +623,51 @@ __ieee80211_amsdu_copy(struct sk_buff *skb, unsigned int hlen,
return frame;
}
+/*
+ * Detects if an MSDU frame was maliciously converted into an A-MSDU
+ * frame by an adversary. This is done by parsing the received frame
+ * as if it were a regular MSDU, even though the A-MSDU flag is set.
+ *
+ * For non-mesh interfaces, detection involves checking whether the
+ * payload, when interpreted as an MSDU, begins with a valid RFC1042
+ * header. This is done by comparing the A-MSDU subheader's destination
+ * address to the start of the RFC1042 header.
+ *
+ * For mesh interfaces, the MSDU includes a 6-byte Mesh Control field
+ * and an optional variable-length Mesh Address Extension field before
+ * the RFC1042 header. The position of the RFC1042 header must therefore
+ * be calculated based on the mesh header length.
+ *
+ * Since this function intentionally parses an A-MSDU frame as an MSDU,
+ * it only assumes that the A-MSDU subframe header is present, and
+ * beyond this it performs its own bounds checks under the assumption
+ * that the frame is instead parsed as a non-aggregated MSDU.
+ */
+static bool
+is_amsdu_aggregation_attack(struct ethhdr *eth, struct sk_buff *skb,
+ enum nl80211_iftype iftype)
+{
+ int offset;
+
+ /* Non-mesh case can be directly compared */
+ if (iftype != NL80211_IFTYPE_MESH_POINT)
+ return ether_addr_equal(eth->h_dest, rfc1042_header);
+
+ offset = __ieee80211_get_mesh_hdrlen(eth->h_dest[0]);
+ if (offset == 6) {
+ /* Mesh case with empty address extension field */
+ return ether_addr_equal(eth->h_source, rfc1042_header);
+ } else if (offset + ETH_ALEN <= skb->len) {
+ /* Mesh case with non-empty address extension field */
+ u8 temp[ETH_ALEN];
+
+ skb_copy_bits(skb, offset, temp, ETH_ALEN);
+ return ether_addr_equal(temp, rfc1042_header);
+ }
+
+ return false;
+}
+
void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list,
const u8 *addr, enum nl80211_iftype iftype,
const unsigned int extra_headroom,
@@ -655,8 +700,10 @@ void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list,
/* the last MSDU has no padding */
if (subframe_len > remaining)
goto purge;
- /* mitigate A-MSDU aggregation injection attacks */
- if (ether_addr_equal(eth.h_dest, rfc1042_header))
+ /* mitigate A-MSDU aggregation injection attacks, to be
+ * checked when processing first subframe (offset == 0).
+ */
+ if (offset == 0 && is_amsdu_aggregation_attack(&hdr.eth, skb, iftype))
goto purge;
offset += sizeof(struct ethhdr);
--
2.22.0
From: YunJe Shin <yjshin0438(a)gmail.com>
stable inclusion
from stable-v5.10.252
commit 1371ef6b1ecf3676b8942f5dfb3634fb0648128e
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13865
CVE: CVE-2026-23243
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 5551b02fdbfd85a325bb857f3a8f9c9f33397ed2 upstream.
ib_umad_write computes data_len from user-controlled count and the
MAD header sizes. With a mismatched user MAD header size and RMPP
header length, data_len can become negative and reach ib_create_send_mad().
This can make the padding calculation exceed the segment size and trigger
an out-of-bounds memset in alloc_send_rmpp_list().
Add an explicit check to reject negative data_len before creating the
send buffer.
KASAN splat:
[ 211.363464] BUG: KASAN: slab-out-of-bounds in ib_create_send_mad+0xa01/0x11b0
[ 211.364077] Write of size 220 at addr ffff88800c3fa1f8 by task spray_thread/102
[ 211.365867] ib_create_send_mad+0xa01/0x11b0
[ 211.365887] ib_umad_write+0x853/0x1c80
Fixes: 2be8e3ee8efd ("IB/umad: Add P_Key index support")
Signed-off-by: YunJe Shin <ioerts(a)kookmin.ac.kr>
Link: https://patch.msgid.link/20260203100628.1215408-1-ioerts@kookmin.ac.kr
Signed-off-by: Leon Romanovsky <leon(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Liu Kai <liukai284(a)huawei.com>
---
drivers/infiniband/core/user_mad.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
index 063707dd4fe3..c8ad6ef39fa5 100644
--- a/drivers/infiniband/core/user_mad.c
+++ b/drivers/infiniband/core/user_mad.c
@@ -514,7 +514,8 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
struct rdma_ah_attr ah_attr;
struct ib_ah *ah;
__be64 *tid;
- int ret, data_len, hdr_len, copy_offset, rmpp_active;
+ int ret, hdr_len, copy_offset, rmpp_active;
+ size_t data_len;
u8 base_version;
if (count < hdr_size(file) + IB_MGMT_RMPP_HDR)
@@ -588,7 +589,10 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
}
base_version = ((struct ib_mad_hdr *)&packet->mad.data)->base_version;
- data_len = count - hdr_size(file) - hdr_len;
+ if (check_sub_overflow(count, hdr_size(file) + hdr_len, &data_len)) {
+ ret = -EINVAL;
+ goto err_ah;
+ }
packet->msg = ib_create_send_mad(agent,
be32_to_cpu(packet->mad.hdr.qpn),
packet->mad.hdr.pkey_index, rmpp_active,
--
2.34.1
From: YunJe Shin <yjshin0438(a)gmail.com>
stable inclusion
from stable-v6.6.128
commit a6a3e4af10993cb9e4b8f0548680aba0ab5f3b0d
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13865
CVE: CVE-2026-23243
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 5551b02fdbfd85a325bb857f3a8f9c9f33397ed2 upstream.
ib_umad_write computes data_len from user-controlled count and the
MAD header sizes. With a mismatched user MAD header size and RMPP
header length, data_len can become negative and reach ib_create_send_mad().
This can make the padding calculation exceed the segment size and trigger
an out-of-bounds memset in alloc_send_rmpp_list().
Add an explicit check to reject negative data_len before creating the
send buffer.
KASAN splat:
[ 211.363464] BUG: KASAN: slab-out-of-bounds in ib_create_send_mad+0xa01/0x11b0
[ 211.364077] Write of size 220 at addr ffff88800c3fa1f8 by task spray_thread/102
[ 211.365867] ib_create_send_mad+0xa01/0x11b0
[ 211.365887] ib_umad_write+0x853/0x1c80
Fixes: 2be8e3ee8efd ("IB/umad: Add P_Key index support")
Signed-off-by: YunJe Shin <ioerts(a)kookmin.ac.kr>
Link: https://patch.msgid.link/20260203100628.1215408-1-ioerts@kookmin.ac.kr
Signed-off-by: Leon Romanovsky <leon(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Liu Kai <liukai284(a)huawei.com>
---
drivers/infiniband/core/user_mad.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
index 2ed749f50a29..285f251fc014 100644
--- a/drivers/infiniband/core/user_mad.c
+++ b/drivers/infiniband/core/user_mad.c
@@ -514,7 +514,8 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
struct rdma_ah_attr ah_attr;
struct ib_ah *ah;
__be64 *tid;
- int ret, data_len, hdr_len, copy_offset, rmpp_active;
+ int ret, hdr_len, copy_offset, rmpp_active;
+ size_t data_len;
u8 base_version;
if (count < hdr_size(file) + IB_MGMT_RMPP_HDR)
@@ -588,7 +589,10 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
}
base_version = ((struct ib_mad_hdr *)&packet->mad.data)->base_version;
- data_len = count - hdr_size(file) - hdr_len;
+ if (check_sub_overflow(count, hdr_size(file) + hdr_len, &data_len)) {
+ ret = -EINVAL;
+ goto err_ah;
+ }
packet->msg = ib_create_send_mad(agent,
be32_to_cpu(packet->mad.hdr.qpn),
packet->mad.hdr.pkey_index, rmpp_active,
--
2.34.1
From: Ming Lei <ming.lei(a)redhat.com>
stable inclusion
from stable-v6.12.77
commit 64f87b96de0e645a4c066c7cffd753f334446db6
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13921
CVE: CVE-2026-23360
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit b84bb7bd913d8ca2f976ee6faf4a174f91c02b8d ]
When nvme_alloc_admin_tag_set() is called during a controller reset,
a previous admin queue may still exist. Release it properly before
allocating a new one to avoid orphaning the old queue.
This fixes a regression introduced by commit 03b3bcd319b3 ("nvme: fix
admin request_queue lifetime").
Cc: Keith Busch <kbusch(a)kernel.org>
Fixes: 03b3bcd319b3 ("nvme: fix admin request_queue lifetime")
Reported-and-tested-by: Yi Zhang <yi.zhang(a)redhat.com>
Closes: https://lore.kernel.org/linux-block/CAHj4cs9wv3SdPo+N01Fw2SHBYDs9tj2M_e1-Gd…
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Signed-off-by: Keith Busch <kbusch(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Conflicts:
drivers/nvme/host/core.c
[Commit 9ac4dd8c47d5 ("block: pass a queue_limits argument to
blk_mq_init_queue") replace blk_mq_init_queue by blk_mq_alloc_queue to get
admin_q.]
Signed-off-by: Zizhi Wo <wozizhi(a)huawei.com>
---
drivers/nvme/host/core.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2997d0561c39..50de9e76d07d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4301,10 +4301,17 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
set->timeout = NVME_ADMIN_TIMEOUT;
ret = blk_mq_alloc_tag_set(set);
if (ret)
return ret;
+ /*
+ * If a previous admin queue exists (e.g., from before a reset),
+ * put it now before allocating a new one to avoid orphaning it.
+ */
+ if (ctrl->admin_q)
+ blk_put_queue(ctrl->admin_q);
+
ctrl->admin_q = blk_mq_init_queue(set);
if (IS_ERR(ctrl->admin_q)) {
ret = PTR_ERR(ctrl->admin_q);
goto out_free_tagset;
}
--
2.39.2
[PATCH OLK-6.6] scsi: target: Fix recursive locking in __configfs_open_file()
by Zizhi Wo 28 Mar '26
From: Prithvi Tambewagh <activprithvi(a)gmail.com>
stable inclusion
from stable-v6.6.130
commit e8ef82cb6443d5f3260b1b830e17f03dda4229ea
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13995
CVE: CVE-2026-23292
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 14d4ac19d1895397532eec407433c5d74d9da53b upstream.
In flush_write_buffer, &p->frag_sem is acquired and then the loaded store
function is called, which, here, is target_core_item_dbroot_store(). This
function calls filp_open(), after which the following functions were
called (listed in reverse order), according to the call trace:
down_read
__configfs_open_file
do_dentry_open
vfs_open
do_open
path_openat
do_filp_open
file_open_name
filp_open
target_core_item_dbroot_store
flush_write_buffer
configfs_write_iter
target_core_item_dbroot_store() tries to validate the new file path by
trying to open the file path provided to it; however, in this case, the bug
report shows:
db_root: not a directory: /sys/kernel/config/target/dbroot
indicating that it tried to open the very configfs file it is currently
operating on. It therefore tries to acquire the frag_sem semaphore of the
same file whose semaphore it already holds from flush_write_buffer(),
acquiring the semaphore in a nested manner and risking recursive locking.
Fix this by modifying target_core_item_dbroot_store() to use kern_path()
instead of filp_open() to avoid opening the file using filesystem-specific
function __configfs_open_file(), and further modifying it to make this fix
compatible.
Reported-by: syzbot+f6e8174215573a84b797(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f6e8174215573a84b797
Tested-by: syzbot+f6e8174215573a84b797(a)syzkaller.appspotmail.com
Cc: stable(a)vger.kernel.org
Signed-off-by: Prithvi Tambewagh <activprithvi(a)gmail.com>
Reviewed-by: Dmitry Bogdanov <d.bogdanov(a)yadro.com>
Link: https://patch.msgid.link/20260216062002.61937-1-activprithvi@gmail.com
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Zizhi Wo <wozizhi(a)huawei.com>
---
drivers/target/target_core_configfs.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
index eddcfd09c05b..f1a0022e5295 100644
--- a/drivers/target/target_core_configfs.c
+++ b/drivers/target/target_core_configfs.c
@@ -106,12 +106,12 @@ static ssize_t target_core_item_dbroot_show(struct config_item *item,
static ssize_t target_core_item_dbroot_store(struct config_item *item,
const char *page, size_t count)
{
ssize_t read_bytes;
- struct file *fp;
ssize_t r = -EINVAL;
+ struct path path = {};
mutex_lock(&target_devices_lock);
if (target_devices) {
pr_err("db_root: cannot be changed because it's in use\n");
goto unlock;
@@ -129,21 +129,18 @@ static ssize_t target_core_item_dbroot_store(struct config_item *item,
if (db_root_stage[read_bytes - 1] == '\n')
db_root_stage[read_bytes - 1] = '\0';
/* validate new db root before accepting it */
- fp = filp_open(db_root_stage, O_RDONLY, 0);
- if (IS_ERR(fp)) {
+ r = kern_path(db_root_stage, LOOKUP_FOLLOW | LOOKUP_DIRECTORY, &path);
+ if (r) {
pr_err("db_root: cannot open: %s\n", db_root_stage);
+ if (r == -ENOTDIR)
+ pr_err("db_root: not a directory: %s\n", db_root_stage);
goto unlock;
}
- if (!S_ISDIR(file_inode(fp)->i_mode)) {
- filp_close(fp, NULL);
- pr_err("db_root: not a directory: %s\n", db_root_stage);
- goto unlock;
- }
- filp_close(fp, NULL);
+ path_put(&path);
strncpy(db_root, db_root_stage, read_bytes);
pr_debug("Target_Core_ConfigFS: db_root set to %s\n", db_root);
r = read_bytes;
--
2.39.2
CVE-2026-23352
Mike Rapoport (Microsoft) (2):
x86/efi: defer freeing of boot services memory
x86/efi: efi_unmap_boot_services: fix calculation of ranges_to_free
size
arch/x86/include/asm/efi.h | 2 +-
arch/x86/platform/efi/efi.c | 2 +-
arch/x86/platform/efi/quirks.c | 55 +++++++++++++++++++++++++++--
drivers/firmware/efi/mokvar-table.c | 2 +-
4 files changed, 55 insertions(+), 6 deletions(-)
--
2.34.1
From: "Mike Rapoport (Microsoft)" <rppt(a)kernel.org>
stable inclusion
from stable-v6.6.130
commit 6a25e25279282c5c8ade554c04c6ab9dc7902c64
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13929
CVE: CVE-2026-23352
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit a4b0bf6a40f3c107c67a24fbc614510ef5719980 upstream.
efi_free_boot_services() frees memory occupied by EFI_BOOT_SERVICES_CODE
and EFI_BOOT_SERVICES_DATA using memblock_free_late().
There are two issues with that: memblock_free_late() should be used for
memory allocated with memblock_alloc() while the memory reserved with
memblock_reserve() should be freed with free_reserved_area().
More acutely, with CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
efi_free_boot_services() is called before deferred initialization of the
memory map is complete.
Benjamin Herrenschmidt reports that this causes a leak of ~140MB of
RAM on EC2 t3a.nano instances, which only have 512MB of RAM.
If the freed memory resides in areas whose memory map is still
uninitialized, it won't actually be freed, because
memblock_free_late() calls memblock_free_pages() and the latter skips
uninitialized pages.
Using free_reserved_area() at this point is also problematic because
__free_page() accesses the buddy of the freed page and that again might
end up in uninitialized part of the memory map.
Delaying the entire efi_free_boot_services() could be problematic
because in addition to freeing boot services memory it updates
efi.memmap without any synchronization and that's undesirable late in
boot when there is concurrency.
More robust approach is to only defer freeing of the EFI boot services
memory.
Split efi_free_boot_services() in two. First efi_unmap_boot_services()
collects ranges that should be freed into an array then
efi_free_boot_services() later frees them after deferred init is complete.
Link: https://lore.kernel.org/all/ec2aaef14783869b3be6e3c253b2dcbf67dbc12a.camel@…
Fixes: 916f676f8dc0 ("x86, efi: Retain boot service code until after switching to virtual mode")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Mike Rapoport (Microsoft) <rppt(a)kernel.org>
Reviewed-by: Benjamin Herrenschmidt <benh(a)kernel.crashing.org>
Signed-off-by: Ard Biesheuvel <ardb(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
arch/x86/include/asm/efi.h
arch/x86/platform/efi/quirks.c
[ context conflict. ]
Signed-off-by: Jinjie Ruan <ruanjinjie(a)huawei.com>
---
arch/x86/include/asm/efi.h | 2 +-
arch/x86/platform/efi/efi.c | 2 +-
arch/x86/platform/efi/quirks.c | 55 +++++++++++++++++++++++++++--
drivers/firmware/efi/mokvar-table.c | 2 +-
4 files changed, 55 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
index 2de20f6a765b..fb1c8fded78b 100644
--- a/arch/x86/include/asm/efi.h
+++ b/arch/x86/include/asm/efi.h
@@ -149,7 +149,7 @@ extern int __init efi_reuse_config(u64 tables, int nr_tables);
extern void efi_delete_dummy_variable(void);
extern void efi_switch_mm(struct mm_struct *mm);
extern void efi_recover_from_page_fault(unsigned long phys_addr);
-extern void efi_free_boot_services(void);
+extern void efi_unmap_boot_services(void);
/* kexec external ABI */
struct efi_setup_data {
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index 91f8edd50dd7..9f18c5d2a812 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -793,7 +793,7 @@ static void __init __efi_enter_virtual_mode(void)
}
efi_check_for_embedded_firmwares();
- efi_free_boot_services();
+ efi_unmap_boot_services();
if (!efi_is_mixed())
efi_native_runtime_setup();
diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
index c1eec019dcee..a152d5762400 100644
--- a/arch/x86/platform/efi/quirks.c
+++ b/arch/x86/platform/efi/quirks.c
@@ -333,7 +333,7 @@ void __init efi_reserve_boot_services(void)
/*
* Because the following memblock_reserve() is paired
- * with memblock_free_late() for this region in
+ * with free_reserved_area() for this region in
* efi_free_boot_services(), we must be extremely
* careful not to reserve, and subsequently free,
* critical regions of memory (like the kernel image) or
@@ -396,17 +396,33 @@ static void __init efi_unmap_pages(efi_memory_desc_t *md)
pr_err("Failed to unmap VA mapping for 0x%llx\n", va);
}
-void __init efi_free_boot_services(void)
+struct efi_freeable_range {
+ u64 start;
+ u64 end;
+};
+
+static struct efi_freeable_range *ranges_to_free;
+
+void __init efi_unmap_boot_services(void)
{
struct efi_memory_map_data data = { 0 };
efi_memory_desc_t *md;
int num_entries = 0;
+ int idx = 0;
+ size_t sz;
void *new, *new_md;
/* Keep all regions for /sys/kernel/debug/efi */
if (efi_enabled(EFI_DBG))
return;
+ sz = sizeof(*ranges_to_free) * efi.memmap.nr_map + 1;
+ ranges_to_free = kzalloc(sz, GFP_KERNEL);
+ if (!ranges_to_free) {
+ pr_err("Failed to allocate storage for freeable EFI regions\n");
+ return;
+ }
+
for_each_efi_memory_desc(md) {
unsigned long long start = md->phys_addr;
unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
@@ -451,7 +467,15 @@ void __init efi_free_boot_services(void)
size -= rm_size;
}
- memblock_free_late(start, size);
+ /*
+ * With CONFIG_DEFERRED_STRUCT_PAGE_INIT parts of the memory
+ * map are still not initialized and we can't reliably free
+ * memory here.
+ * Queue the ranges to free at a later point.
+ */
+ ranges_to_free[idx].start = start;
+ ranges_to_free[idx].end = start + size;
+ idx++;
}
if (!num_entries)
@@ -492,6 +516,31 @@ void __init efi_free_boot_services(void)
}
}
+static int __init efi_free_boot_services(void)
+{
+ struct efi_freeable_range *range = ranges_to_free;
+ unsigned long freed = 0;
+
+ if (!ranges_to_free)
+ return 0;
+
+ while (range->start) {
+ void *start = phys_to_virt(range->start);
+ void *end = phys_to_virt(range->end);
+
+ free_reserved_area(start, end, -1, NULL);
+ freed += (end - start);
+ range++;
+ }
+ kfree(ranges_to_free);
+
+ if (freed)
+ pr_info("Freeing EFI boot services memory: %ldK\n", freed / SZ_1K);
+
+ return 0;
+}
+arch_initcall(efi_free_boot_services);
+
/*
* A number of config table entries get remapped to virtual addresses
* after entering EFI virtual mode. However, the kexec kernel requires
diff --git a/drivers/firmware/efi/mokvar-table.c b/drivers/firmware/efi/mokvar-table.c
index 38722d2009e2..4a5c2f823788 100644
--- a/drivers/firmware/efi/mokvar-table.c
+++ b/drivers/firmware/efi/mokvar-table.c
@@ -85,7 +85,7 @@ static struct kobject *mokvar_kobj;
* as an alternative to ordinary EFI variables, due to platform-dependent
* limitations. The memory occupied by this table is marked as reserved.
*
- * This routine must be called before efi_free_boot_services() in order
+ * This routine must be called before efi_unmap_boot_services() in order
* to guarantee that it can mark the table as reserved.
*
* Implicit inputs:
--
2.34.1
From: "Mike Rapoport (Microsoft)" <rppt(a)kernel.org>
stable inclusion
from stable-v6.6.130
commit 6a25e25279282c5c8ade554c04c6ab9dc7902c64
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13929
CVE: CVE-2026-23352
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit a4b0bf6a40f3c107c67a24fbc614510ef5719980 upstream.
efi_free_boot_services() frees memory occupied by EFI_BOOT_SERVICES_CODE
and EFI_BOOT_SERVICES_DATA using memblock_free_late().
There are two issues with that: memblock_free_late() should be used for
memory allocated with memblock_alloc(), while memory reserved with
memblock_reserve() should be freed with free_reserved_area().
More acutely, with CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
efi_free_boot_services() is called before deferred initialization of the
memory map is complete.
Benjamin Herrenschmidt reports that this causes a leak of ~140MB of
RAM on EC2 t3a.nano instances which only have 512MB of RAM.
If the freed memory resides in areas whose memory map is still
uninitialized, it won't actually be freed, because
memblock_free_late() calls memblock_free_pages() and the latter skips
uninitialized pages.
Using free_reserved_area() at this point is also problematic because
__free_page() accesses the buddy of the freed page and that again might
end up in uninitialized part of the memory map.
Delaying the entire efi_free_boot_services() could be problematic:
in addition to freeing boot services memory it updates
efi.memmap without any synchronization, which is undesirable late in
boot when there is concurrency.
A more robust approach is to defer only the freeing of the EFI boot
services memory.
Split efi_free_boot_services() in two: efi_unmap_boot_services() first
collects the ranges to free into an array, and
efi_free_boot_services() later frees them after deferred init is complete.
Link: https://lore.kernel.org/all/ec2aaef14783869b3be6e3c253b2dcbf67dbc12a.camel@…
Fixes: 916f676f8dc0 ("x86, efi: Retain boot service code until after switching to virtual mode")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Mike Rapoport (Microsoft) <rppt(a)kernel.org>
Reviewed-by: Benjamin Herrenschmidt <benh(a)kernel.crashing.org>
Signed-off-by: Ard Biesheuvel <ardb(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Jinjie Ruan <ruanjinjie(a)huawei.com>
---
arch/x86/include/asm/efi.h | 2 +-
arch/x86/platform/efi/efi.c | 2 +-
arch/x86/platform/efi/quirks.c | 55 +++++++++++++++++++++++++++--
drivers/firmware/efi/mokvar-table.c | 2 +-
4 files changed, 55 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
index a050d329e34b..aaf1e71b067a 100644
--- a/arch/x86/include/asm/efi.h
+++ b/arch/x86/include/asm/efi.h
@@ -138,7 +138,7 @@ extern void __init efi_apply_memmap_quirks(void);
extern int __init efi_reuse_config(u64 tables, int nr_tables);
extern void efi_delete_dummy_variable(void);
extern void efi_crash_gracefully_on_page_fault(unsigned long phys_addr);
-extern void efi_free_boot_services(void);
+extern void efi_unmap_boot_services(void);
void arch_efi_call_virt_setup(void);
void arch_efi_call_virt_teardown(void);
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index e9f99c56f3ce..55a61aaa3303 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -860,7 +860,7 @@ static void __init __efi_enter_virtual_mode(void)
}
efi_check_for_embedded_firmwares();
- efi_free_boot_services();
+ efi_unmap_boot_services();
if (!efi_is_mixed())
efi_native_runtime_setup();
diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
index f0cc00032751..df3b45ef1420 100644
--- a/arch/x86/platform/efi/quirks.c
+++ b/arch/x86/platform/efi/quirks.c
@@ -341,7 +341,7 @@ void __init efi_reserve_boot_services(void)
/*
* Because the following memblock_reserve() is paired
- * with memblock_free_late() for this region in
+ * with free_reserved_area() for this region in
* efi_free_boot_services(), we must be extremely
* careful not to reserve, and subsequently free,
* critical regions of memory (like the kernel image) or
@@ -404,17 +404,33 @@ static void __init efi_unmap_pages(efi_memory_desc_t *md)
pr_err("Failed to unmap VA mapping for 0x%llx\n", va);
}
-void __init efi_free_boot_services(void)
+struct efi_freeable_range {
+ u64 start;
+ u64 end;
+};
+
+static struct efi_freeable_range *ranges_to_free;
+
+void __init efi_unmap_boot_services(void)
{
struct efi_memory_map_data data = { 0 };
efi_memory_desc_t *md;
int num_entries = 0;
+ int idx = 0;
+ size_t sz;
void *new, *new_md;
/* Keep all regions for /sys/kernel/debug/efi */
if (efi_enabled(EFI_DBG))
return;
+ sz = sizeof(*ranges_to_free) * efi.memmap.nr_map + 1;
+ ranges_to_free = kzalloc(sz, GFP_KERNEL);
+ if (!ranges_to_free) {
+ pr_err("Failed to allocate storage for freeable EFI regions\n");
+ return;
+ }
+
for_each_efi_memory_desc(md) {
unsigned long long start = md->phys_addr;
unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
@@ -471,7 +487,15 @@ void __init efi_free_boot_services(void)
start = SZ_1M;
}
- memblock_free_late(start, size);
+ /*
+ * With CONFIG_DEFERRED_STRUCT_PAGE_INIT parts of the memory
+ * map are still not initialized and we can't reliably free
+ * memory here.
+ * Queue the ranges to free at a later point.
+ */
+ ranges_to_free[idx].start = start;
+ ranges_to_free[idx].end = start + size;
+ idx++;
}
if (!num_entries)
@@ -512,6 +536,31 @@ void __init efi_free_boot_services(void)
}
}
+static int __init efi_free_boot_services(void)
+{
+ struct efi_freeable_range *range = ranges_to_free;
+ unsigned long freed = 0;
+
+ if (!ranges_to_free)
+ return 0;
+
+ while (range->start) {
+ void *start = phys_to_virt(range->start);
+ void *end = phys_to_virt(range->end);
+
+ free_reserved_area(start, end, -1, NULL);
+ freed += (end - start);
+ range++;
+ }
+ kfree(ranges_to_free);
+
+ if (freed)
+ pr_info("Freeing EFI boot services memory: %ldK\n", freed / SZ_1K);
+
+ return 0;
+}
+arch_initcall(efi_free_boot_services);
+
/*
* A number of config table entries get remapped to virtual addresses
* after entering EFI virtual mode. However, the kexec kernel requires
diff --git a/drivers/firmware/efi/mokvar-table.c b/drivers/firmware/efi/mokvar-table.c
index 4eb0dff4dfaf..bd84a22805b5 100644
--- a/drivers/firmware/efi/mokvar-table.c
+++ b/drivers/firmware/efi/mokvar-table.c
@@ -85,7 +85,7 @@ static struct kobject *mokvar_kobj;
* as an alternative to ordinary EFI variables, due to platform-dependent
* limitations. The memory occupied by this table is marked as reserved.
*
- * This routine must be called before efi_free_boot_services() in order
+ * This routine must be called before efi_unmap_boot_services() in order
* to guarantee that it can mark the table as reserved.
*
* Implicit inputs:
--
2.34.1
28 Mar '26
From: Kaushlendra Kumar <kaushlendra.kumar(a)intel.com>
stable inclusion
from stable-v6.6.124
commit d61171cf097156030142643942c217759a9cc806
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13884
CVE: CVE-2026-23260
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit f3f380ce6b3d5c9805c7e0b3d5bc28d9ec41e2e8 ]
regcache_maple_write() allocates a new block ('entry') to merge
adjacent ranges and then stores it with mas_store_gfp().
When mas_store_gfp() fails, the new 'entry' remains allocated and
is never freed, leaking memory.
Free 'entry' on the failure path; on success continue freeing the
replaced neighbor blocks ('lower', 'upper').
Signed-off-by: Kaushlendra Kumar <kaushlendra.kumar(a)intel.com>
Link: https://patch.msgid.link/20260105031820.260119-1-kaushlendra.kumar@intel.com
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Lin Ruifeng <linruifeng4(a)huawei.com>
---
drivers/base/regmap/regcache-maple.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/base/regmap/regcache-maple.c b/drivers/base/regmap/regcache-maple.c
index fb5761a5ef6e..86de71ce2c19 100644
--- a/drivers/base/regmap/regcache-maple.c
+++ b/drivers/base/regmap/regcache-maple.c
@@ -96,12 +96,13 @@ static int regcache_maple_write(struct regmap *map, unsigned int reg,
mas_unlock(&mas);
- if (ret == 0) {
- kfree(lower);
- kfree(upper);
+ if (ret) {
+ kfree(entry);
+ return ret;
}
-
- return ret;
+ kfree(lower);
+ kfree(upper);
+ return 0;
}
static int regcache_maple_drop(struct regmap *map, unsigned int min,
--
2.43.0
[PATCH openEuler-1.0-LTS] bpf: Fix NULL event->prog pointer access in bpf_overflow_handler
by Pu Lehui 27 Mar '26
From: Yonghong Song <yhs(a)fb.com>
mainline inclusion
from mainline-v5.15-rc1
commit 594286b7574c6e8217b1c233cc0d0650f2268a77
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8819
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
Andrii reported that libbpf CI hit the following oops when
running selftest send_signal:
[ 1243.160719] BUG: kernel NULL pointer dereference, address: 0000000000000030
[ 1243.161066] #PF: supervisor read access in kernel mode
[ 1243.161066] #PF: error_code(0x0000) - not-present page
[ 1243.161066] PGD 0 P4D 0
[ 1243.161066] Oops: 0000 [#1] PREEMPT SMP NOPTI
[ 1243.161066] CPU: 1 PID: 882 Comm: new_name Tainted: G O 5.14.0-rc5 #1
[ 1243.161066] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 1243.161066] RIP: 0010:bpf_overflow_handler+0x9a/0x1e0
[ 1243.161066] Code: 5a 84 c0 0f 84 06 01 00 00 be 66 02 00 00 48 c7 c7 6d 96 07 82 48 8b ab 18 05 00 00 e8 df 55 eb ff 66 90 48 8d 75 48 48 89 e7 <ff> 55 30 41 89 c4 e8 fb c1 f0 ff 84 c0 0f 84 94 00 00 00 e8 6e 0f
[ 1243.161066] RSP: 0018:ffffc900000c0d80 EFLAGS: 00000046
[ 1243.161066] RAX: 0000000000000002 RBX: ffff8881002e0dd0 RCX: 00000000b4b47cf8
[ 1243.161066] RDX: ffffffff811dcb06 RSI: 0000000000000048 RDI: ffffc900000c0d80
[ 1243.161066] RBP: 0000000000000000 R08: 0000000000000000 R09: 1a9d56bb00000000
[ 1243.161066] R10: 0000000000000001 R11: 0000000000080000 R12: 0000000000000000
[ 1243.161066] R13: ffffc900000c0e00 R14: ffffc900001c3c68 R15: 0000000000000082
[ 1243.161066] FS: 00007fc0be2d3380(0000) GS:ffff88813bd00000(0000) knlGS:0000000000000000
[ 1243.161066] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1243.161066] CR2: 0000000000000030 CR3: 0000000104f8e000 CR4: 00000000000006e0
[ 1243.161066] Call Trace:
[ 1243.161066] <IRQ>
[ 1243.161066] __perf_event_overflow+0x4f/0xf0
[ 1243.161066] perf_swevent_hrtimer+0x116/0x130
[ 1243.161066] ? __lock_acquire+0x378/0x2730
[ 1243.161066] ? __lock_acquire+0x372/0x2730
[ 1243.161066] ? lock_is_held_type+0xd5/0x130
[ 1243.161066] ? find_held_lock+0x2b/0x80
[ 1243.161066] ? lock_is_held_type+0xd5/0x130
[ 1243.161066] ? perf_event_groups_first+0x80/0x80
[ 1243.161066] ? perf_event_groups_first+0x80/0x80
[ 1243.161066] __hrtimer_run_queues+0x1a3/0x460
[ 1243.161066] hrtimer_interrupt+0x110/0x220
[ 1243.161066] __sysvec_apic_timer_interrupt+0x8a/0x260
[ 1243.161066] sysvec_apic_timer_interrupt+0x89/0xc0
[ 1243.161066] </IRQ>
[ 1243.161066] asm_sysvec_apic_timer_interrupt+0x12/0x20
[ 1243.161066] RIP: 0010:finish_task_switch+0xaf/0x250
[ 1243.161066] Code: 31 f6 68 90 2a 09 81 49 8d 7c 24 18 e8 aa d6 03 00 4c 89 e7 e8 12 ff ff ff 4c 89 e7 e8 ca 9c 80 00 e8 35 af 0d 00 fb 4d 85 f6 <58> 74 1d 65 48 8b 04 25 c0 6d 01 00 4c 3b b0 a0 04 00 00 74 37 f0
[ 1243.161066] RSP: 0018:ffffc900001c3d18 EFLAGS: 00000282
[ 1243.161066] RAX: 000000000000031f RBX: ffff888104cf4980 RCX: 0000000000000000
[ 1243.161066] RDX: 0000000000000000 RSI: ffffffff82095460 RDI: ffffffff820adc4e
[ 1243.161066] RBP: ffffc900001c3d58 R08: 0000000000000001 R09: 0000000000000001
[ 1243.161066] R10: 0000000000000001 R11: 0000000000080000 R12: ffff88813bd2bc80
[ 1243.161066] R13: ffff8881002e8000 R14: ffff88810022ad80 R15: 0000000000000000
[ 1243.161066] ? finish_task_switch+0xab/0x250
[ 1243.161066] ? finish_task_switch+0x70/0x250
[ 1243.161066] __schedule+0x36b/0xbb0
[ 1243.161066] ? _raw_spin_unlock_irqrestore+0x2d/0x50
[ 1243.161066] ? lockdep_hardirqs_on+0x79/0x100
[ 1243.161066] schedule+0x43/0xe0
[ 1243.161066] pipe_read+0x30b/0x450
[ 1243.161066] ? wait_woken+0x80/0x80
[ 1243.161066] new_sync_read+0x164/0x170
[ 1243.161066] vfs_read+0x122/0x1b0
[ 1243.161066] ksys_read+0x93/0xd0
[ 1243.161066] do_syscall_64+0x35/0x80
[ 1243.161066] entry_SYSCALL_64_after_hwframe+0x44/0xae
The oops can also be reproduced with the following steps:
./vmtest.sh -s
# at qemu shell
cd /root/bpf && while true; do ./test_progs -t send_signal
Further analysis showed that the failure is introduced with
commit b89fbfbb854c ("bpf: Implement minimal BPF perf link").
With the above commit, the following scenario becomes possible:
cpu1 cpu2
hrtimer_interrupt -> bpf_overflow_handler
(due to closing link_fd)
bpf_perf_link_release ->
perf_event_free_bpf_prog ->
perf_event_free_bpf_handler ->
WRITE_ONCE(event->overflow_handler, event->orig_overflow_handler)
event->prog = NULL
bpf_prog_run(event->prog, &ctx)
In the above case, the event->prog is NULL for bpf_prog_run, hence
causing oops.
To fix the issue, check whether event->prog is NULL before calling
bpf_prog_run. This seems to work: with the fix, the above reproduction
step ran for more than one hour without any failures.
Fixes: b89fbfbb854c ("bpf: Implement minimal BPF perf link")
Signed-off-by: Yonghong Song <yhs(a)fb.com>
Signed-off-by: Andrii Nakryiko <andrii(a)kernel.org>
Link: https://lore.kernel.org/bpf/20210819155209.1927994-1-yhs@fb.com
Conflicts:
kernel/events/core.c
[ctx conflicts]
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
kernel/events/core.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index ff875ddf4aeb..372450750559 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8747,6 +8747,7 @@ static void bpf_overflow_handler(struct perf_event *event,
.data = data,
.event = event,
};
+ struct bpf_prog *prog;
int ret = 0;
ctx.regs = perf_arch_bpf_user_pt_regs(regs);
@@ -8754,7 +8755,9 @@ static void bpf_overflow_handler(struct perf_event *event,
if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1))
goto out;
rcu_read_lock();
- ret = BPF_PROG_RUN(event->prog, &ctx);
+ prog = READ_ONCE(event->prog);
+ if (prog)
+ ret = BPF_PROG_RUN(prog, &ctx);
rcu_read_unlock();
out:
__this_cpu_dec(bpf_prog_active);
--
2.34.1
[PATCH OLK-5.10] bpf: Fix NULL event->prog pointer access in bpf_overflow_handler
by Pu Lehui 27 Mar '26
From: Yonghong Song <yhs(a)fb.com>
mainline inclusion
from mainline-v5.15-rc1
commit 594286b7574c6e8217b1c233cc0d0650f2268a77
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8819
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
Andrii reported that libbpf CI hit the following oops when
running selftest send_signal:
[ 1243.160719] BUG: kernel NULL pointer dereference, address: 0000000000000030
[ 1243.161066] #PF: supervisor read access in kernel mode
[ 1243.161066] #PF: error_code(0x0000) - not-present page
[ 1243.161066] PGD 0 P4D 0
[ 1243.161066] Oops: 0000 [#1] PREEMPT SMP NOPTI
[ 1243.161066] CPU: 1 PID: 882 Comm: new_name Tainted: G O 5.14.0-rc5 #1
[ 1243.161066] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 1243.161066] RIP: 0010:bpf_overflow_handler+0x9a/0x1e0
[ 1243.161066] Code: 5a 84 c0 0f 84 06 01 00 00 be 66 02 00 00 48 c7 c7 6d 96 07 82 48 8b ab 18 05 00 00 e8 df 55 eb ff 66 90 48 8d 75 48 48 89 e7 <ff> 55 30 41 89 c4 e8 fb c1 f0 ff 84 c0 0f 84 94 00 00 00 e8 6e 0f
[ 1243.161066] RSP: 0018:ffffc900000c0d80 EFLAGS: 00000046
[ 1243.161066] RAX: 0000000000000002 RBX: ffff8881002e0dd0 RCX: 00000000b4b47cf8
[ 1243.161066] RDX: ffffffff811dcb06 RSI: 0000000000000048 RDI: ffffc900000c0d80
[ 1243.161066] RBP: 0000000000000000 R08: 0000000000000000 R09: 1a9d56bb00000000
[ 1243.161066] R10: 0000000000000001 R11: 0000000000080000 R12: 0000000000000000
[ 1243.161066] R13: ffffc900000c0e00 R14: ffffc900001c3c68 R15: 0000000000000082
[ 1243.161066] FS: 00007fc0be2d3380(0000) GS:ffff88813bd00000(0000) knlGS:0000000000000000
[ 1243.161066] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1243.161066] CR2: 0000000000000030 CR3: 0000000104f8e000 CR4: 00000000000006e0
[ 1243.161066] Call Trace:
[ 1243.161066] <IRQ>
[ 1243.161066] __perf_event_overflow+0x4f/0xf0
[ 1243.161066] perf_swevent_hrtimer+0x116/0x130
[ 1243.161066] ? __lock_acquire+0x378/0x2730
[ 1243.161066] ? __lock_acquire+0x372/0x2730
[ 1243.161066] ? lock_is_held_type+0xd5/0x130
[ 1243.161066] ? find_held_lock+0x2b/0x80
[ 1243.161066] ? lock_is_held_type+0xd5/0x130
[ 1243.161066] ? perf_event_groups_first+0x80/0x80
[ 1243.161066] ? perf_event_groups_first+0x80/0x80
[ 1243.161066] __hrtimer_run_queues+0x1a3/0x460
[ 1243.161066] hrtimer_interrupt+0x110/0x220
[ 1243.161066] __sysvec_apic_timer_interrupt+0x8a/0x260
[ 1243.161066] sysvec_apic_timer_interrupt+0x89/0xc0
[ 1243.161066] </IRQ>
[ 1243.161066] asm_sysvec_apic_timer_interrupt+0x12/0x20
[ 1243.161066] RIP: 0010:finish_task_switch+0xaf/0x250
[ 1243.161066] Code: 31 f6 68 90 2a 09 81 49 8d 7c 24 18 e8 aa d6 03 00 4c 89 e7 e8 12 ff ff ff 4c 89 e7 e8 ca 9c 80 00 e8 35 af 0d 00 fb 4d 85 f6 <58> 74 1d 65 48 8b 04 25 c0 6d 01 00 4c 3b b0 a0 04 00 00 74 37 f0
[ 1243.161066] RSP: 0018:ffffc900001c3d18 EFLAGS: 00000282
[ 1243.161066] RAX: 000000000000031f RBX: ffff888104cf4980 RCX: 0000000000000000
[ 1243.161066] RDX: 0000000000000000 RSI: ffffffff82095460 RDI: ffffffff820adc4e
[ 1243.161066] RBP: ffffc900001c3d58 R08: 0000000000000001 R09: 0000000000000001
[ 1243.161066] R10: 0000000000000001 R11: 0000000000080000 R12: ffff88813bd2bc80
[ 1243.161066] R13: ffff8881002e8000 R14: ffff88810022ad80 R15: 0000000000000000
[ 1243.161066] ? finish_task_switch+0xab/0x250
[ 1243.161066] ? finish_task_switch+0x70/0x250
[ 1243.161066] __schedule+0x36b/0xbb0
[ 1243.161066] ? _raw_spin_unlock_irqrestore+0x2d/0x50
[ 1243.161066] ? lockdep_hardirqs_on+0x79/0x100
[ 1243.161066] schedule+0x43/0xe0
[ 1243.161066] pipe_read+0x30b/0x450
[ 1243.161066] ? wait_woken+0x80/0x80
[ 1243.161066] new_sync_read+0x164/0x170
[ 1243.161066] vfs_read+0x122/0x1b0
[ 1243.161066] ksys_read+0x93/0xd0
[ 1243.161066] do_syscall_64+0x35/0x80
[ 1243.161066] entry_SYSCALL_64_after_hwframe+0x44/0xae
The oops can also be reproduced with the following steps:
./vmtest.sh -s
# at qemu shell
cd /root/bpf && while true; do ./test_progs -t send_signal
Further analysis showed that the failure is introduced with
commit b89fbfbb854c ("bpf: Implement minimal BPF perf link").
With the above commit, the following scenario becomes possible:
cpu1 cpu2
hrtimer_interrupt -> bpf_overflow_handler
(due to closing link_fd)
bpf_perf_link_release ->
perf_event_free_bpf_prog ->
perf_event_free_bpf_handler ->
WRITE_ONCE(event->overflow_handler, event->orig_overflow_handler)
event->prog = NULL
bpf_prog_run(event->prog, &ctx)
In the above case, the event->prog is NULL for bpf_prog_run, hence
causing oops.
To fix the issue, check whether event->prog is NULL before calling
bpf_prog_run. This seems to work: with the fix, the above reproduction
step ran for more than one hour without any failures.
Fixes: b89fbfbb854c ("bpf: Implement minimal BPF perf link")
Signed-off-by: Yonghong Song <yhs(a)fb.com>
Signed-off-by: Andrii Nakryiko <andrii(a)kernel.org>
Link: https://lore.kernel.org/bpf/20210819155209.1927994-1-yhs@fb.com
Conflicts:
kernel/events/core.c
[ctx conflicts]
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
kernel/events/core.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index d1ffd5fb9c6f..a46f2cefc031 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -9728,13 +9728,16 @@ static void bpf_overflow_handler(struct perf_event *event,
.data = data,
.event = event,
};
+ struct bpf_prog *prog;
int ret = 0;
ctx.regs = perf_arch_bpf_user_pt_regs(regs);
if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1))
goto out;
rcu_read_lock();
- ret = BPF_PROG_RUN(event->prog, &ctx);
+ prog = READ_ONCE(event->prog);
+ if (prog)
+ ret = BPF_PROG_RUN(prog, &ctx);
rcu_read_unlock();
out:
__this_cpu_dec(bpf_prog_active);
--
2.34.1
CVE-2026-23320
Christian Brauner (1):
file: add take_fd() cleanup helper
Kuen-Han Tsai (1):
usb: gadget: f_ncm: Fix net_device lifecycle with device_move
Peter Zijlstra (1):
cleanup: Make no_free_ptr() __must_check
Thomas Gleixner (1):
cleanup: Provide retain_and_null_ptr()
drivers/usb/gadget/function/f_ncm.c | 35 ++++++++++-----
drivers/usb/gadget/function/u_ether.c | 22 ++++++++++
drivers/usb/gadget/function/u_ether.h | 26 ++++++++++++
drivers/usb/gadget/function/u_ncm.h | 2 +-
include/linux/cleanup.h | 61 +++++++++++++++++++++++++--
include/linux/file.h | 20 +++++++++
6 files changed, 151 insertions(+), 15 deletions(-)
--
2.34.1