From: Hongye Lin <linhongye(a)h-partners.com>
mainline inclusion
from mainline-v6.8-rc1
commit e3a649ecf8b9253cb1d05ceb085544472b06446f
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8904
CVE: NA
Reference: https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/comm…
----------------------------------------------------------------------
DDI0601 2023-09 defines a new system register FPMR (Floating Point Mode
Register) which configures the new FP8 features. Add a definition of this
register.
Qinxin Xia (18):
arm64/sysreg: Add definition for ID_AA64PFR2_EL1
arm64/sysreg: Add definition for ID_AA64ISAR3_EL1
arm64/sysreg: Add definition for ID_AA64FPFR0_EL1
arm64/sysreg: Add definition for FPMR
arm64/sysreg: Add EnFPM field for SCTLR_EL1 and HCRX_EL2
arm64/sysreg: Add LUT field for ID_AA64ISAR2_EL1
arm64/cpufeature: Hook new identification registers up to cpufeature
arm64/fpsimd: Support FEAT_FPMR
arm64/hwcap: Define hwcaps for 2023 DPISA features
kselftest/arm64: Add basic FPMR test
kselftest/arm64: Handle FPMR context in generic signal frame parser
kselftest/arm64: Add 2023 DPISA hwcap test coverage
arm64: Kconfig: Detect toolchain support for LSUI
arm64: cpufeature: add FEAT_LSUI
arm64/signal: Add FPMR signal handling
arm64/ptrace: Expose FPMR via ptrace
Fix kabi for thread struct with FPMR
Add support "arm64.nocnp" for start option
Yifan Wu (4):
arm64/hwcap: Add support for FEAT_CMPBR
kselftest/arm64: Add FEAT_CMPBR to the hwcap selftest
arm64/sysreg: Update ID_AA64ISAR2_EL1 for FEAT_CMPBR
selftest/arm64: Fix sve2p1_sigill() to hwcap test
Documentation/arch/arm64/elf_hwcaps.rst | 30 +++++
arch/arm64/Kconfig | 5 +
arch/arm64/include/asm/cpu.h | 3 +
arch/arm64/include/asm/cpufeature.h | 6 +
arch/arm64/include/asm/fpsimd.h | 2 +
arch/arm64/include/asm/hwcap.h | 10 ++
arch/arm64/include/asm/processor.h | 5 +
arch/arm64/include/asm/signal_common.h | 16 +++
arch/arm64/include/uapi/asm/hwcap.h | 10 ++
arch/arm64/include/uapi/asm/sigcontext.h | 8 ++
arch/arm64/kernel/cpufeature.c | 70 +++++++++-
arch/arm64/kernel/cpuinfo.c | 3 +
arch/arm64/kernel/fpsimd.c | 13 ++
arch/arm64/kernel/hwcap_str.h | 10 ++
arch/arm64/kernel/idreg-override.c | 11 ++
arch/arm64/kernel/ptrace.c | 42 ++++++
arch/arm64/kernel/signal.c | 46 +++++++
arch/arm64/tools/cpucaps | 2 +
arch/arm64/tools/sysreg | 122 +++++++++++++++++-
include/uapi/linux/elf.h | 1 +
tools/testing/selftests/arm64/abi/hwcap.c | 120 ++++++++++++++++-
.../testing/selftests/arm64/signal/.gitignore | 1 +
.../arm64/signal/testcases/fpmr_siginfo.c | 82 ++++++++++++
.../arm64/signal/testcases/testcases.c | 8 ++
.../arm64/signal/testcases/testcases.h | 1 +
25 files changed, 621 insertions(+), 6 deletions(-)
create mode 100644 tools/testing/selftests/arm64/signal/testcases/fpmr_siginfo.c
--
2.33.0
Chen Yu (1):
sched/eevdf: Fix wakeup-preempt by checking cfs_rq->nr_running
Ingo Molnar (2):
sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight
sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and
helper functions
Peter Zijlstra (10):
sched/fair: Fix zero_vruntime tracking
sched/fair: Fix EEVDF entity placement bug causing scheduling lag
sched/fair: Adhere to place_entity() constraints
sched: Unify runtime accounting across classes
sched: Remove vruntime from trace_sched_stat_runtime()
sched: Unify more update_curr*()
sched/eevdf: Allow shorter slices to wakeup-preempt
sched/fair: Only set slice protection at pick time
sched/fair: Fix zero_vruntime tracking fix
sched/debug: Fix avg_vruntime() usage
Vincent Guittot (2):
sched/fair: Use protect_slice() instead of direct comparison
sched/fair: Fix NO_RUN_TO_PARITY case
Wang Tao (1):
sched/eevdf: Update se->vprot in reweight_entity()
Zhang Qiao (1):
sched: Fix struct sched_entity kabi broken
Zicheng Qu (5):
sched: Re-evaluate scheduling when migrating queued tasks out of
throttled cgroups
sched: Fix kabi breakage of struct cfs_rq for sum_weight
sched: Fix kabi breakage of struct cfs_rq for sum_w_vruntime
sched/eevdf: Disable shorter slices to wakeup-preempt
sched/fair: Fix vruntime drift by preventing double lag scaling during
reweight
zihan zhou (1):
sched: Cancel the slice protection of the idle entity
include/linux/sched.h | 15 +-
include/trace/events/sched.h | 15 +-
kernel/sched/core.c | 4 +-
kernel/sched/deadline.c | 15 +-
kernel/sched/debug.c | 4 +-
kernel/sched/fair.c | 477 +++++++++++++++++++----------------
kernel/sched/features.h | 5 +
kernel/sched/rt.c | 15 +-
kernel/sched/sched.h | 18 +-
kernel/sched/stop_task.c | 13 +-
10 files changed, 304 insertions(+), 277 deletions(-)
--
2.34.1
10 Apr '26
Jinjiang Tu (3):
arm64: mm: hardcode domain info and add get_domain_cpumask()
arm64: mm: Track CPUs that a task has run on for TLBID optimization
arm64: tlbflush: Optimize flush_tlb_mm() by using TLBID
Marc Zyngier (2):
arm64: cpufeature: Add ID_AA64MMFR4_EL1 handling
arm64: sysreg: Add layout for ID_AA64MMFR4_EL1
Zeng Heng (1):
arm64: cpufeature: Add TLBID (Domain-based TLB Invalidation) detection
arch/arm64/Kconfig | 12 ++
arch/arm64/include/asm/cpu.h | 1 +
arch/arm64/include/asm/cpufeature.h | 6 +
arch/arm64/include/asm/mmu_context.h | 3 +
arch/arm64/include/asm/tlbflush.h | 74 ++++++++-
arch/arm64/kernel/cpufeature.c | 17 ++
arch/arm64/kernel/cpuinfo.c | 1 +
arch/arm64/mm/context.c | 224 ++++++++++++++++++++++++++-
arch/arm64/tools/cpucaps | 1 +
arch/arm64/tools/sysreg | 41 +++++
10 files changed, 375 insertions(+), 5 deletions(-)
--
2.25.1
Introduce dmem cgroup
Chen Ridong (3):
cgroup/dmem: fix NULL pointer dereference when setting max
cgroup/dmem: avoid rcu warning when unregister region
cgroup/dmem: avoid pool UAF
Friedrich Vock (1):
cgroup/dmem: Don't open-code css_for_each_descendant_pre
Geert Uytterhoeven (1):
cgroup/rdma: Drop bogus PAGE_COUNTER select
Jiapeng Chong (1):
kernel/cgroup: Remove the unused variable climit
Liu Kai (2):
cgroup/dmem: reuse SUBSYS for dmem and devices to preserve KABI
dmem: enable CONFIG_CGROUP_DMEM in arm64/x86 defconfig
Maarten Lankhorst (2):
mm/page_counter: move calculating protection values to page_counter
kernel/cgroup: Add "dmem" memory accounting cgroup
Maxime Ripard (3):
cgroup/dmem: Select PAGE_COUNTER
cgroup/dmem: Fix parameters documentation
doc/cgroup: Fix title underline length
Roman Gushchin (1):
mm: page_counters: put page_counter_calculate_protection() under
CONFIG_MEMCG
Documentation/admin-guide/cgroup-v2.rst | 58 +-
Documentation/core-api/cgroup.rst | 9 +
Documentation/core-api/index.rst | 1 +
Documentation/gpu/drm-compute.rst | 54 ++
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
include/linux/cgroup_dmem.h | 71 ++
include/linux/cgroup_subsys.h | 4 +-
include/linux/device_cgroup.h | 20 +
include/linux/page_counter.h | 10 +
init/Kconfig | 10 +
kernel/cgroup/Makefile | 1 +
kernel/cgroup/dmem.c | 1025 +++++++++++++++++++++++
mm/memcontrol.c | 154 +---
mm/page_counter.c | 175 ++++
security/device_cgroup.c | 63 +-
16 files changed, 1492 insertions(+), 165 deletions(-)
create mode 100644 Documentation/core-api/cgroup.rst
create mode 100644 Documentation/gpu/drm-compute.rst
create mode 100644 include/linux/cgroup_dmem.h
create mode 100644 kernel/cgroup/dmem.c
--
2.34.1
[PATCH OLK-6.6] serial: core: fix infinite loop in handle_tx() for PORT_UNKNOWN
by Gu Bowen 09 Apr '26
From: Jiayuan Chen <jiayuan.chen(a)shopee.com>
mainline inclusion
from mainline-v7.0-rc5
commit 455ce986fa356ff43a43c0d363ba95fa152f21d5
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14111
CVE: CVE-2026-23472
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
uart_write_room() and uart_write() behave inconsistently when
xmit_buf is NULL (which happens for PORT_UNKNOWN ports that were
never properly initialized):
- uart_write_room() returns kfifo_avail() which can be > 0
- uart_write() checks xmit_buf and returns 0 if NULL
This inconsistency causes an infinite loop in drivers that rely on
tty_write_room() to determine if they can write:
while (tty_write_room(tty) > 0) {
written = tty->ops->write(...);
// written is always 0, loop never exits
}
For example, caif_serial's handle_tx() enters an infinite loop when
used with PORT_UNKNOWN serial ports, causing system hangs.
Fix by making uart_write_room() also check xmit_buf and return 0 if
it's NULL, consistent with uart_write().
Reproducer: https://gist.github.com/mrpre/d9a694cc0e19828ee3bc3b37983fde13
Signed-off-by: Jiayuan Chen <jiayuan.chen(a)shopee.com>
Cc: stable <stable(a)kernel.org>
Link: https://patch.msgid.link/20260204074327.226165-1-jiayuan.chen@linux.dev
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
drivers/tty/serial/serial_core.c
[Context conflicts.]
Signed-off-by: Gu Bowen <gubowen5(a)huawei.com>
---
drivers/tty/serial/serial_core.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
index 7ce9c87750da..bc3241d47665 100644
--- a/drivers/tty/serial/serial_core.c
+++ b/drivers/tty/serial/serial_core.c
@@ -643,7 +643,10 @@ static unsigned int uart_write_room(struct tty_struct *tty)
unsigned int ret;
port = uart_port_lock(state, flags);
- ret = uart_circ_chars_free(&state->xmit);
+ if (!state->xmit.buf)
+ ret = 0;
+ else
+ ret = uart_circ_chars_free(&state->xmit);
uart_port_unlock(port, flags);
return ret;
}
--
2.43.0
Introduce dmem cgroup
Chen Ridong (3):
cgroup/dmem: fix NULL pointer dereference when setting max
cgroup/dmem: avoid rcu warning when unregister region
cgroup/dmem: avoid pool UAF
Friedrich Vock (1):
cgroup/dmem: Don't open-code css_for_each_descendant_pre
Geert Uytterhoeven (1):
cgroup/rdma: Drop bogus PAGE_COUNTER select
Jiapeng Chong (1):
kernel/cgroup: Remove the unused variable climit
Liu Kai (1):
cgroup/dmem: reuse SUBSYS for dmem and devices to preserve KABI
Maarten Lankhorst (2):
mm/page_counter: move calculating protection values to page_counter
kernel/cgroup: Add "dmem" memory accounting cgroup
Maxime Ripard (3):
cgroup/dmem: Select PAGE_COUNTER
cgroup/dmem: Fix parameters documentation
doc/cgroup: Fix title underline length
Roman Gushchin (1):
mm: page_counters: put page_counter_calculate_protection() under
CONFIG_MEMCG
Documentation/admin-guide/cgroup-v2.rst | 58 +-
Documentation/core-api/cgroup.rst | 9 +
Documentation/core-api/index.rst | 1 +
Documentation/gpu/drm-compute.rst | 54 ++
include/linux/cgroup_dmem.h | 71 ++
include/linux/cgroup_subsys.h | 4 +-
include/linux/device_cgroup.h | 20 +
include/linux/page_counter.h | 10 +
init/Kconfig | 10 +
kernel/cgroup/Makefile | 1 +
kernel/cgroup/dmem.c | 1025 +++++++++++++++++++++++
mm/memcontrol.c | 154 +---
mm/page_counter.c | 175 ++++
security/device_cgroup.c | 63 +-
14 files changed, 1490 insertions(+), 165 deletions(-)
create mode 100644 Documentation/core-api/cgroup.rst
create mode 100644 Documentation/gpu/drm-compute.rst
create mode 100644 include/linux/cgroup_dmem.h
create mode 100644 kernel/cgroup/dmem.c
--
2.34.1
2
14
From: Johan Hovold <johan(a)kernel.org>
stable inclusion
from stable-v6.6.130
commit f13100b1f5f111989f0750540a795fdef47492af
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14114
CVE: CVE-2026-23475
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit dee0774bbb2abb172e9069ce5ffef579b12b3ae9 upstream.
The controller's per-cpu statistics are not allocated until after the
controller has been registered with the driver core, which leaves a window
where accessing the sysfs attributes can trigger a NULL-pointer
dereference.
Fix this by moving the statistics allocation to controller allocation
while tying its lifetime to that of the controller (rather than using
implicit devres).
Fixes: 6598b91b5ac3 ("spi: spi.c: Convert statistics to per-cpu u64_stats_t")
Cc: stable(a)vger.kernel.org # 6.0
Cc: David Jander <david(a)protonic.nl>
Signed-off-by: Johan Hovold <johan(a)kernel.org>
Link: https://patch.msgid.link/20260312151817.32100-3-johan@kernel.org
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Lin Ruifeng <linruifeng4(a)huawei.com>
---
drivers/spi/spi.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 66f694457a8b..2ca4ea45e3b2 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -2773,6 +2773,8 @@ static void spi_controller_release(struct device *dev)
struct spi_controller *ctlr;
ctlr = container_of(dev, struct spi_controller, dev);
+
+ free_percpu(ctlr->pcpu_statistics);
kfree(ctlr);
}
@@ -2924,6 +2926,12 @@ struct spi_controller *__spi_alloc_controller(struct device *dev,
if (!ctlr)
return NULL;
+ ctlr->pcpu_statistics = spi_alloc_pcpu_stats(NULL);
+ if (!ctlr->pcpu_statistics) {
+ kfree(ctlr);
+ return NULL;
+ }
+
device_initialize(&ctlr->dev);
INIT_LIST_HEAD(&ctlr->queue);
spin_lock_init(&ctlr->queue_lock);
@@ -3212,13 +3220,6 @@ int spi_register_controller(struct spi_controller *ctlr)
if (status)
goto del_ctrl;
}
- /* Add statistics */
- ctlr->pcpu_statistics = spi_alloc_pcpu_stats(dev);
- if (!ctlr->pcpu_statistics) {
- dev_err(dev, "Error allocating per-cpu statistics\n");
- status = -ENOMEM;
- goto destroy_queue;
- }
mutex_lock(&board_lock);
list_add_tail(&ctlr->list, &spi_controller_list);
@@ -3231,8 +3232,6 @@ int spi_register_controller(struct spi_controller *ctlr)
acpi_register_spi_devices(ctlr);
return status;
-destroy_queue:
- spi_destroy_queue(ctlr);
del_ctrl:
device_del(&ctlr->dev);
free_bus_id:
--
2.43.0
[PATCH OLK-6.6] netfilter: nf_tables: Fix null ptr dereference of nft_setelem_remove
by Dong Chenchen 09 Apr '26
hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14141
CVE: CVE-2026-23272
--------------------------------
elem is not initialized in nft_add_set_elem(), which leads to a
null-ptr-deref in nft_setelem_remove() as shown below.
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000014
Call trace:
nft_setelem_remove+0x28/0xe0 [nf_tables]
__nf_tables_abort+0x5f8/0xbe8 [nf_tables]
nf_tables_abort+0x64/0x1c8 [nf_tables]
nfnetlink_rcv_batch+0x2d8/0x850 [nfnetlink]
nfnetlink_rcv+0x168/0x1a8 [nfnetlink]
netlink_unicast_kernel+0x7c/0x160
netlink_unicast+0x1ac/0x250
netlink_sendmsg+0x21c/0x458
__sock_sendmsg+0x4c/0xa8
____sys_sendmsg+0x280/0x300
___sys_sendmsg+0x8c/0xf8
__sys_sendmsg+0x74/0xe0
__arm64_sys_sendmsg+0x2c/0x40
invoke_syscall+0x50/0x128
el0_svc_common.constprop.0+0xc8/0xf0
do_el0_svc+0x48/0x78
el0_slow_syscall+0x44/0x1b8
el0t_64_sync_handler+0x100/0x130
el0t_64_sync+0x188/0x190
Initialize elem to fix it.
Fixes: e7a6bffde0fe ("netfilter: nf_tables: unconditionally bump set->nelems before insertion")
Signed-off-by: Dong Chenchen <dongchenchen2(a)huawei.com>
---
net/netfilter/nf_tables_api.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index d8057efc777d..ef4f6f8c7a3f 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -7144,6 +7144,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
goto err_element_clash;
}
+ nft_trans_elem(trans) = elem;
nft_trans_commit_list_add_tail(ctx->net, trans);
return set_full ? -ENFILE : 0;
--
2.25.1
[PATCH OLK-6.6] wifi: mac80211: always free skb on ieee80211_tx_prepare_skb() failure
by Yi Yang 09 Apr '26
From: Felix Fietkau <nbd(a)nbd.name>
mainline inclusion
from mainline-v7.0-rc5
commit d5ad6ab61cbd89afdb60881f6274f74328af3ee9
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14084
CVE: CVE-2026-23444
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
ieee80211_tx_prepare_skb() has three error paths, but only two of them
free the skb. The first error path (ieee80211_tx_prepare() returning
TX_DROP) does not free it, while invoke_tx_handlers() failure and the
fragmentation check both do.
Add kfree_skb() to the first error path so all three are consistent,
and remove the now-redundant frees in callers (ath9k, mt76,
mac80211_hwsim) to avoid double-free.
Document the skb ownership guarantee in the function's kdoc.
Signed-off-by: Felix Fietkau <nbd(a)nbd.name>
Link: https://patch.msgid.link/20260314065455.2462900-1-nbd@nbd.name
Fixes: 06be6b149f7e ("mac80211: add ieee80211_tx_prepare_skb() helper function")
Signed-off-by: Johannes Berg <johannes.berg(a)intel.com>
Conflicts:
drivers/net/wireless/mediatek/mt76/scan.c
[Commit 31083e38548f ("wifi: mt76: add code for emulating hardware
scanning") was not merged, so the affected function is not present.]
include/net/mac80211.h
[Commit 0e9824e0d59b2 ("wifi: mac80211: Add missing return value
documentation") was not merged. Context conflicts.]
Signed-off-by: Yi Yang <yiyang13(a)huawei.com>
---
drivers/net/wireless/ath/ath9k/channel.c | 6 ++----
drivers/net/wireless/virtual/mac80211_hwsim.c | 1 -
include/net/mac80211.h | 4 ++++
net/mac80211/tx.c | 4 +++-
4 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/drivers/net/wireless/ath/ath9k/channel.c b/drivers/net/wireless/ath/ath9k/channel.c
index 571062f2e82a..ba8ec5112afe 100644
--- a/drivers/net/wireless/ath/ath9k/channel.c
+++ b/drivers/net/wireless/ath/ath9k/channel.c
@@ -1011,7 +1011,7 @@ static void ath_scan_send_probe(struct ath_softc *sc,
skb_set_queue_mapping(skb, IEEE80211_AC_VO);
if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, NULL))
- goto error;
+ return;
txctl.txq = sc->tx.txq_map[IEEE80211_AC_VO];
if (ath_tx_start(sc->hw, skb, &txctl))
@@ -1124,10 +1124,8 @@ ath_chanctx_send_vif_ps_frame(struct ath_softc *sc, struct ath_vif *avp,
skb->priority = 7;
skb_set_queue_mapping(skb, IEEE80211_AC_VO);
- if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, &sta)) {
- dev_kfree_skb_any(skb);
+ if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, &sta))
return false;
- }
break;
default:
return false;
diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
index 1214e7dcc812..bf12ff0ab06a 100644
--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
+++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
@@ -2892,7 +2892,6 @@ static void hw_scan_work(struct work_struct *work)
hwsim->tmp_chan->band,
NULL)) {
rcu_read_unlock();
- kfree_skb(probe);
continue;
}
diff --git a/include/net/mac80211.h b/include/net/mac80211.h
index adaa1b2323d2..85d785060e76 100644
--- a/include/net/mac80211.h
+++ b/include/net/mac80211.h
@@ -7032,6 +7032,10 @@ void ieee80211_report_wowlan_wakeup(struct ieee80211_vif *vif,
* @band: the band to transmit on
* @sta: optional pointer to get the station to send the frame to
*
+ * Return: %true if the skb was prepared, %false otherwise.
+ * On failure, the skb is freed by this function; callers must not
+ * free it again.
+ *
* Note: must be called under RCU lock
*/
bool ieee80211_tx_prepare_skb(struct ieee80211_hw *hw,
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 7eddcb6f9645..2a708132320c 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -1911,8 +1911,10 @@ bool ieee80211_tx_prepare_skb(struct ieee80211_hw *hw,
struct ieee80211_tx_data tx;
struct sk_buff *skb2;
- if (ieee80211_tx_prepare(sdata, &tx, NULL, skb) == TX_DROP)
+ if (ieee80211_tx_prepare(sdata, &tx, NULL, skb) == TX_DROP) {
+ kfree_skb(skb);
return false;
+ }
info->band = band;
info->control.vif = vif;
--
2.25.1
---
tools/testing/sharepool/Makefile | 36 +
tools/testing/sharepool/libs/Makefile | 8 +
.../sharepool/libs/default_args_main.c | 87 ++
tools/testing/sharepool/libs/default_main.c | 85 ++
tools/testing/sharepool/libs/sem_use.c | 116 ++
tools/testing/sharepool/libs/sem_use.h | 13 +
tools/testing/sharepool/libs/sharepool_lib.c | 218 ++++
tools/testing/sharepool/libs/sharepool_lib.h | 534 ++++++++
tools/testing/sharepool/libs/test_lib.h | 324 +++++
tools/testing/sharepool/module/Makefile | 14 +
.../sharepool/module/check_sharepool_alloc.c | 132 ++
.../sharepool/module/check_sharepool_fault.c | 63 +
.../testing/sharepool/module/sharepool_dev.c | 1130 +++++++++++++++++
.../testing/sharepool/module/sharepool_dev.h | 149 +++
tools/testing/sharepool/test.sh | 55 +
tools/testing/sharepool/test_end.sh | 8 +
tools/testing/sharepool/test_loop.sh | 35 +
tools/testing/sharepool/test_prepare.sh | 8 +
tools/testing/sharepool/testcase/Makefile | 12 +
.../sharepool/testcase/api_test/Makefile | 14 +
.../sharepool/testcase/api_test/api_test.sh | 15 +
.../api_test/is_sharepool_addr/Makefile | 13 +
.../test_is_sharepool_addr.c | 90 ++
.../testcase/api_test/sp_alloc/Makefile | 13 +
.../api_test/sp_alloc/test_sp_alloc.c | 543 ++++++++
.../api_test/sp_alloc/test_sp_alloc2.c | 131 ++
.../api_test/sp_alloc/test_sp_alloc3.c | 147 +++
.../api_test/sp_alloc/test_spa_error.c | 109 ++
.../api_test/sp_alloc_nodemask/Makefile | 13 +
.../sp_alloc_nodemask/start_vm_test_16.sh | 49 +
.../sp_alloc_nodemask/start_vm_test_4.sh | 25 +
.../sp_alloc_nodemask/test_nodemask.c | 782 ++++++++++++
.../api_test/sp_config_dvpp_range/Makefile | 13 +
.../test_sp_config_dvpp_range.c | 367 ++++++
.../test_sp_multi_numa_node.c | 289 +++++
.../testcase/api_test/sp_free/Makefile | 13 +
.../testcase/api_test/sp_free/test_sp_free.c | 127 ++
.../api_test/sp_group_add_task/Makefile | 13 +
.../test_sp_group_add_task.c | 568 +++++++++
.../test_sp_group_add_task2.c | 254 ++++
.../test_sp_group_add_task3.c | 250 ++++
.../test_sp_group_add_task4.c | 148 +++
.../test_sp_group_add_task5.c | 113 ++
.../api_test/sp_group_del_task/Makefile | 13 +
.../test_sp_group_del_task.c | 1083 ++++++++++++++++
.../api_test/sp_group_id_by_pid/Makefile | 13 +
.../test_sp_group_id_by_pid.c | 179 +++
.../test_sp_group_id_by_pid2.c | 318 +++++
.../api_test/sp_id_of_current/Makefile | 13 +
.../sp_id_of_current/test_sp_id_of_current.c | 112 ++
.../api_test/sp_make_share_k2u/Makefile | 13 +
.../test_sp_make_share_k2u.c | 624 +++++++++
.../test_sp_make_share_k2u2.c | 361 ++++++
.../api_test/sp_make_share_u2k/Makefile | 13 +
.../test_sp_make_share_u2k.c | 307 +++++
.../testcase/api_test/sp_numa_maps/Makefile | 13 +
.../api_test/sp_numa_maps/test_sp_numa_maps.c | 164 +++
.../testcase/api_test/sp_reg_hpage/Makefile | 13 +
.../api_test/sp_reg_hpage/test_sp_hpage_reg.c | 44 +
.../test_sp_hpage_reg_after_alloc.c | 84 ++
.../sp_reg_hpage/test_sp_hpage_reg_exec.c | 82 ++
.../testcase/api_test/sp_unshare/Makefile | 13 +
.../api_test/sp_unshare/test_sp_unshare.c | 394 ++++++
.../sp_walk_page_range_and_free/Makefile | 13 +
.../test_sp_walk_page_range_and_free.c | 339 +++++
.../testcase/dts_bugfix_test/Makefile | 15 +
.../dts_bugfix_test/dts_bugfix_test.sh | 43 +
.../test_01_coredump_k2u_alloc.c | 603 +++++++++
.../dts_bugfix_test/test_02_spg_not_alive.c | 166 +++
.../dts_bugfix_test/test_03_hugepage_rsvd.c | 84 ++
.../dts_bugfix_test/test_04_spg_add_del.c | 100 ++
.../dts_bugfix_test/test_05_cgroup_limit.c | 76 ++
.../testcase/dts_bugfix_test/test_06_clone.c | 176 +++
.../dts_bugfix_test/test_08_addr_offset.c | 156 +++
.../dts_bugfix_test/test_09_spg_del_exit.c | 150 +++
.../test_10_walk_page_range_AA_lock.c | 124 ++
.../dts_bugfix_test/test_dvpp_readonly.c | 71 ++
.../sharepool/testcase/function_test/Makefile | 36 +
.../testcase/function_test/function_test.sh | 32 +
.../test_alloc_free_two_process.c | 303 +++++
.../function_test/test_alloc_readonly.c | 588 +++++++++
.../function_test/test_dvpp_multi_16G_alloc.c | 690 ++++++++++
.../test_dvpp_multi_16G_k2task.c | 604 +++++++++
.../function_test/test_dvpp_pass_through.c | 191 +++
.../function_test/test_dvpp_readonly.c | 147 +++
.../test_hugetlb_alloc_hugepage.c | 113 ++
.../testcase/function_test/test_k2u.c | 804 ++++++++++++
.../test_mm_mapped_to_multi_groups.c | 435 +++++++
.../function_test/test_non_dvpp_group.c | 167 +++
.../testcase/function_test/test_sp_ro.c | 719 +++++++++++
.../function_test/test_two_user_process.c | 626 +++++++++
.../testcase/function_test/test_u2k.c | 490 +++++++
.../sharepool/testcase/generate_list.sh | 46 +
.../testcase/performance_test/Makefile | 17 +
.../performance_test/performance_test.sh | 5 +
.../performance_test/test_perf_process_kill.c | 174 +++
.../performance_test/test_perf_sp_add_group.c | 375 ++++++
.../performance_test/test_perf_sp_alloc.c | 618 +++++++++
.../performance_test/test_perf_sp_k2u.c | 860 +++++++++++++
.../testcase/reliability_test/Makefile | 11 +
.../reliability_test/coredump/Makefile | 13 +
.../reliability_test/coredump/test_coredump.c | 581 +++++++++
.../coredump/test_coredump2.c | 202 +++
.../coredump/test_coredump_k2u_alloc.c | 562 ++++++++
.../reliability_test/fragment/Makefile | 13 +
.../fragment/test_external_fragmentation.c | 37 +
.../test_external_fragmentation_trigger.c | 58 +
.../reliability_test/k2u_u2k/Makefile | 13 +
.../k2u_u2k/test_k2u_and_kill.c | 276 ++++
.../k2u_u2k/test_k2u_unshare.c | 188 +++
.../k2u_u2k/test_malloc_u2k.c | 187 +++
.../k2u_u2k/test_u2k_and_kill.c | 155 +++
.../reliability_test/kthread/Makefile | 13 +
.../kthread/test_add_strange_task.c | 46 +
.../kthread/test_del_kthread.c | 61 +
.../testcase/reliability_test/others/Makefile | 13 +
.../reliability_test/others/test_judge_addr.c | 104 ++
.../others/test_kill_sp_process.c | 430 +++++++
.../reliability_test/others/test_kthread.c | 195 +++
.../others/test_mmap_sp_address.c | 223 ++++
.../others/test_notifier_block.c | 101 ++
.../reliability_test/reliability_test.sh | 51 +
.../reliability_test/sp_add_group/Makefile | 13 +
.../sp_add_group/test_add_exiting_task.c | 61 +
.../sp_add_group/test_add_group1.c | 118 ++
.../sp_add_group/test_add_strange_task.c | 46 +
.../reliability_test/sp_unshare/Makefile | 13 +
.../sp_unshare/test_unshare1.c | 325 +++++
.../sp_unshare/test_unshare2.c | 202 +++
.../sp_unshare/test_unshare3.c | 243 ++++
.../sp_unshare/test_unshare4.c | 516 ++++++++
.../sp_unshare/test_unshare5.c | 185 +++
.../sp_unshare/test_unshare6.c | 93 ++
.../sp_unshare/test_unshare7.c | 159 +++
.../sp_unshare/test_unshare_kill.c | 150 +++
.../testing/sharepool/testcase/remove_list.sh | 22 +
.../sharepool/testcase/scenario_test/Makefile | 15 +
.../testcase/scenario_test/scenario_test.sh | 45 +
.../test_auto_check_statistics.c | 338 +++++
.../scenario_test/test_dfx_heavy_load.c | 143 +++
.../scenario_test/test_dvpp_16g_limit.c | 68 +
.../testcase/scenario_test/test_failure.c | 630 +++++++++
.../testcase/scenario_test/test_hugepage.c | 231 ++++
.../scenario_test/test_hugepage_setting.sh | 51 +
.../scenario_test/test_max_50000_groups.c | 138 ++
.../testcase/scenario_test/test_oom.c | 135 ++
.../scenario_test/test_proc_sp_group_state.c | 170 +++
.../scenario_test/test_vmalloc_cgroup.c | 65 +
.../sharepool/testcase/stress_test/Makefile | 15 +
.../stress_test/sp_ro_fault_injection.sh | 21 +
.../testcase/stress_test/stress_test.sh | 47 +
.../stress_test/test_alloc_add_and_kill.c | 347 +++++
.../stress_test/test_alloc_free_two_process.c | 303 +++++
.../stress_test/test_concurrent_debug.c | 359 ++++++
.../testcase/stress_test/test_mult_u2k.c | 514 ++++++++
.../test_sharepool_enhancement_stress_cases.c | 692 ++++++++++
.../stress_test/test_u2k_add_and_kill.c | 358 ++++++
.../sharepool/testcase/test_all/Makefile | 8 +
.../sharepool/testcase/test_all/test_all.c | 285 +++++
.../testcase/test_mult_process/Makefile | 16 +
.../mult_add_group_test/Makefile | 13 +
.../test_add_multi_cases.c | 255 ++++
.../test_alloc_add_and_kill.c | 347 +++++
.../test_max_group_per_process.c | 94 ++
.../test_mult_alloc_and_add_group.c | 138 ++
.../test_mult_process_thread_exit.c | 498 ++++++++
.../test_mult_thread_add_group.c | 220 ++++
.../test_u2k_add_and_kill.c | 358 ++++++
.../mult_debug_test/Makefile | 13 +
.../test_add_group_and_print.c | 182 +++
.../mult_debug_test/test_concurrent_debug.c | 359 ++++++
.../mult_debug_test/test_debug_loop.c | 43 +
.../test_proc_interface_process.c | 636 ++++++++++
.../mult_debug_test/test_statistics_stress.c | 302 +++++
.../test_mult_process/mult_k2u_test/Makefile | 13 +
.../mult_k2u_test/test_mult_k2u.c | 855 +++++++++++++
.../mult_k2u_test/test_mult_pass_through.c | 405 ++++++
.../mult_k2u_test/test_mult_thread_k2u.c | 197 +++
.../test_mult_process/mult_u2k_test/Makefile | 13 +
.../mult_u2k_test/test_mult_u2k.c | 514 ++++++++
.../mult_u2k_test/test_mult_u2k3.c | 314 +++++
.../mult_u2k_test/test_mult_u2k4.c | 310 +++++
.../test_mult_process/stress_test/Makefile | 13 +
.../stress_test/test_alloc_free_two_process.c | 303 +++++
.../test_mult_process/stress_test/test_fuzz.c | 543 ++++++++
.../stress_test/test_mult_proc_interface.c | 701 ++++++++++
.../test_mult_process/test_mult_process.sh | 53 +
.../test_mult_process/test_proc_interface.sh | 19 +
188 files changed, 39522 insertions(+)
create mode 100644 tools/testing/sharepool/Makefile
create mode 100644 tools/testing/sharepool/libs/Makefile
create mode 100644 tools/testing/sharepool/libs/default_args_main.c
create mode 100644 tools/testing/sharepool/libs/default_main.c
create mode 100644 tools/testing/sharepool/libs/sem_use.c
create mode 100644 tools/testing/sharepool/libs/sem_use.h
create mode 100644 tools/testing/sharepool/libs/sharepool_lib.c
create mode 100644 tools/testing/sharepool/libs/sharepool_lib.h
create mode 100644 tools/testing/sharepool/libs/test_lib.h
create mode 100644 tools/testing/sharepool/module/Makefile
create mode 100644 tools/testing/sharepool/module/check_sharepool_alloc.c
create mode 100644 tools/testing/sharepool/module/check_sharepool_fault.c
create mode 100644 tools/testing/sharepool/module/sharepool_dev.c
create mode 100644 tools/testing/sharepool/module/sharepool_dev.h
create mode 100755 tools/testing/sharepool/test.sh
create mode 100755 tools/testing/sharepool/test_end.sh
create mode 100755 tools/testing/sharepool/test_loop.sh
create mode 100755 tools/testing/sharepool/test_prepare.sh
create mode 100644 tools/testing/sharepool/testcase/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/api_test/api_test.sh
create mode 100644 tools/testing/sharepool/testcase/api_test/is_sharepool_addr/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/is_sharepool_addr/test_is_sharepool_addr.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc2.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc3.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc/test_spa_error.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/Makefile
create mode 100755 tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_16.sh
create mode 100755 tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_4.sh
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/test_nodemask.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_config_dvpp_range.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_multi_numa_node.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_free/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_free/test_sp_free.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task2.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task3.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task4.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task5.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_del_task/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_del_task/test_sp_group_del_task.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid2.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_id_of_current/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_id_of_current/test_sp_id_of_current.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u2.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/test_sp_make_share_u2k.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_numa_maps/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_numa_maps/test_sp_numa_maps.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_reg_hpage/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_after_alloc.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_exec.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_unshare/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_unshare/test_sp_unshare.c
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/Makefile
create mode 100644 tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/test_sp_walk_page_range_and_free.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/dts_bugfix_test/dts_bugfix_test.sh
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_01_coredump_k2u_alloc.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_02_spg_not_alive.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_03_hugepage_rsvd.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_04_spg_add_del.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_05_cgroup_limit.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_06_clone.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_08_addr_offset.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_09_spg_del_exit.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_10_walk_page_range_AA_lock.c
create mode 100644 tools/testing/sharepool/testcase/dts_bugfix_test/test_dvpp_readonly.c
create mode 100644 tools/testing/sharepool/testcase/function_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/function_test/function_test.sh
create mode 100644 tools/testing/sharepool/testcase/function_test/test_alloc_free_two_process.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_alloc_readonly.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_alloc.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_k2task.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_dvpp_pass_through.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_dvpp_readonly.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_hugetlb_alloc_hugepage.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_k2u.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_mm_mapped_to_multi_groups.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_non_dvpp_group.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_sp_ro.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_two_user_process.c
create mode 100644 tools/testing/sharepool/testcase/function_test/test_u2k.c
create mode 100755 tools/testing/sharepool/testcase/generate_list.sh
create mode 100644 tools/testing/sharepool/testcase/performance_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/performance_test/performance_test.sh
create mode 100644 tools/testing/sharepool/testcase/performance_test/test_perf_process_kill.c
create mode 100644 tools/testing/sharepool/testcase/performance_test/test_perf_sp_add_group.c
create mode 100644 tools/testing/sharepool/testcase/performance_test/test_perf_sp_alloc.c
create mode 100644 tools/testing/sharepool/testcase/performance_test/test_perf_sp_k2u.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/coredump/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump2.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump_k2u_alloc.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/fragment/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation_trigger.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_unshare.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_malloc_u2k.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_u2k_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/kthread/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/kthread/test_add_strange_task.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/kthread/test_del_kthread.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_judge_addr.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_kill_sp_process.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_kthread.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_mmap_sp_address.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/others/test_notifier_block.c
create mode 100755 tools/testing/sharepool/testcase/reliability_test/reliability_test.sh
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_add_group/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_exiting_task.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_group1.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_strange_task.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/Makefile
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare1.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare2.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare3.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare4.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare5.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare6.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare7.c
create mode 100644 tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare_kill.c
create mode 100755 tools/testing/sharepool/testcase/remove_list.sh
create mode 100644 tools/testing/sharepool/testcase/scenario_test/Makefile
create mode 100755 tools/testing/sharepool/testcase/scenario_test/scenario_test.sh
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_auto_check_statistics.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_dfx_heavy_load.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_dvpp_16g_limit.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_failure.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_hugepage.c
create mode 100755 tools/testing/sharepool/testcase/scenario_test/test_hugepage_setting.sh
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_max_50000_groups.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_oom.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_proc_sp_group_state.c
create mode 100644 tools/testing/sharepool/testcase/scenario_test/test_vmalloc_cgroup.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/stress_test/sp_ro_fault_injection.sh
create mode 100755 tools/testing/sharepool/testcase/stress_test/stress_test.sh
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_alloc_add_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_alloc_free_two_process.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_concurrent_debug.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_mult_u2k.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_sharepool_enhancement_stress_cases.c
create mode 100644 tools/testing/sharepool/testcase/stress_test/test_u2k_add_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/test_all/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_all/test_all.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_add_multi_cases.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_alloc_add_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_max_group_per_process.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_alloc_and_add_group.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_process_thread_exit.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_thread_add_group.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_u2k_add_and_kill.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_add_group_and_print.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_concurrent_debug.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_debug_loop.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_proc_interface_process.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_statistics_stress.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_k2u.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_pass_through.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_thread_k2u.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k3.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k4.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/stress_test/Makefile
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/stress_test/test_alloc_free_two_process.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/stress_test/test_fuzz.c
create mode 100644 tools/testing/sharepool/testcase/test_mult_process/stress_test/test_mult_proc_interface.c
create mode 100755 tools/testing/sharepool/testcase/test_mult_process/test_mult_process.sh
create mode 100755 tools/testing/sharepool/testcase/test_mult_process/test_proc_interface.sh
diff --git a/tools/testing/sharepool/Makefile b/tools/testing/sharepool/Makefile
new file mode 100644
index 000000000000..aeb3181d1e69
--- /dev/null
+++ b/tools/testing/sharepool/Makefile
@@ -0,0 +1,36 @@
+ARCH?=arm64
+
+KERNEL_DIR?=
+
+TOOL_BIN_DIR?=$(shell realpath ./install)
+export ARCH KERNEL_DIR TOOL_BIN_DIR
+export KBUILD_MODPOST_WARN=1
+
+SHARELIB_DIR:=$(shell realpath libs)
+DEV_INC:=$(shell realpath module)
+MODULEDIR=module libs testcase
+
+sharepool_extra_ccflags:=-I$(SHARELIB_DIR) \
+ -I$(DEV_INC) \
+ -Wno-pointer-to-int-cast \
+ -Wno-int-to-pointer-cast \
+ -Wno-int-conversion
+
+sharepool_lib_ccflags:=-L$(SHARELIB_DIR) \
+ -lsharepool_lib
+
+export sharepool_extra_ccflags
+export sharepool_lib_ccflags
+
+.PHONY: all tooldir install clean
+
+all:tooldir
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n; done
+install:
+ mkdir -p $(TOOL_BIN_DIR)
+ cp test.sh test_loop.sh test_prepare.sh test_end.sh $(TOOL_BIN_DIR)
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
+ rm -rf install
diff --git a/tools/testing/sharepool/libs/Makefile b/tools/testing/sharepool/libs/Makefile
new file mode 100644
index 000000000000..510fd0b4d919
--- /dev/null
+++ b/tools/testing/sharepool/libs/Makefile
@@ -0,0 +1,8 @@
+libsharepool_lib.so: sharepool_lib.c sharepool_lib.h sem_use.c sem_use.h test_lib.h
+ $(CROSS_COMPILE)gcc sharepool_lib.c sem_use.c -shared -fPIC -o libsharepool_lib.so -I../module
+
+install: libsharepool_lib.so
+ cp $^ $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf libsharepool_lib.so
diff --git a/tools/testing/sharepool/libs/default_args_main.c b/tools/testing/sharepool/libs/default_args_main.c
new file mode 100644
index 000000000000..1395ed571295
--- /dev/null
+++ b/tools/testing/sharepool/libs/default_args_main.c
@@ -0,0 +1,87 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri May 14 02:15:29 2021
+ */
+
+/*
+ * Provide a default main() implementation; the user must define two variables:
+ * - dev_fd: fd of the device file
+ * - testcases: the array of test cases
+ */
+
+#define STRLENGTH 500
+
+static int run_testcase(struct testcase_s *tc)
+{
+ int ret = 0;
+
+ printf(">>>> start testcase: %s", tc->name);
+ if (!tc->expect_ret)
+ printf(", expecting error info\n");
+ else
+ printf("\n");
+
+ if (tc->run_as_child) {
+ pid_t pid;
+
+ FORK_CHILD_ARGS(pid, tc->func());
+
+ if (!tc->exit_signal_check)
+ WAIT_CHILD_STATUS(pid, out);
+ else
+ WAIT_CHILD_SIGNAL(pid, tc->signal, out);
+ } else
+ ret = tc->func();
+
+out:
+ printf("<<<< end testcase: %s, result: %s\n", tc->name, ret != 0 ? "failed" : "passed");
+
+ return ret;
+}
+
+int main(int argc, char *argv[])
+{
+ int num = -1;
+ int passed = 0, failed = 0;
+ int ret;
+
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ ret = parse_opt(argc, argv);
+ if (ret) {
+ pr_info("parse opt failed!");
+ return -1;
+ } else
+ pr_info("parse opt finished.");
+
+ if (num >= 0 && num < ARRAY_SIZE(testcases)) {
+ if (run_testcase(testcases + num))
+ failed++;
+ else
+ passed++;
+ get_filename();
+ printf("-------------------------");
+ printf("%s testcase%d finished, %s", test_group.name, num + 1, passed ? "passed" : "failed");
+ printf("-------------------------\n");
+ } else {
+ for (num = 0; num < ARRAY_SIZE(testcases); num++) {
+ if (testcases[num].manual)
+ continue;
+ if (run_testcase(testcases + num))
+ failed++;
+ else
+ passed++;
+ }
+ get_filename();
+ printf("-------------------------");
+ printf("%s All %d testcases finished, passing: %d, failing: %d", test_group.name, passed + failed, passed, failed);
+ printf("-------------------------\n");
+ }
+
+
+ close_device(dev_fd);
+ return failed ? -1 : 0;
+}
diff --git a/tools/testing/sharepool/libs/default_main.c b/tools/testing/sharepool/libs/default_main.c
new file mode 100644
index 000000000000..26f42ac39db9
--- /dev/null
+++ b/tools/testing/sharepool/libs/default_main.c
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri May 14 02:15:29 2021
+ */
+
+/*
+ * Provide a default main() implementation; the user must define two variables:
+ * - dev_fd: fd of the device file
+ * - testcases: the array of test cases
+ */
+#include <stdlib.h>
+#define STRLENGTH 500
+
+static int run_testcase(struct testcase_s *tc)
+{
+ int ret = 0;
+
+ printf("\n======================================================\n");
+ printf(">>>> START TESTCASE: %s\n", tc->name);
+ printf("Test point: %s\n", tc->comment);
+
+ if (tc->run_as_child) {
+ pid_t pid;
+
+ FORK_CHILD_ARGS(pid, tc->func());
+
+ if (!tc->exit_signal_check)
+ WAIT_CHILD_STATUS(pid, out);
+ else
+ WAIT_CHILD_SIGNAL(pid, tc->signal, out);
+ } else
+ ret = tc->func();
+
+out:
+ printf("<<<< END TESTCASE: %s, RESULT: %s\n", tc->name, ret != 0 ? "FAILED" : "PASSED");
+ printf("======================================================\n");
+ return ret;
+}
+
+int main(int argc, char *argv[])
+{
+ int num = -1;
+ int passed = 0, failed = 0;
+
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ if (argc > 1)
+ num = atoi(argv[1]) - 1;
+
+#ifdef pre_hook
+ pre_hook();
+#endif
+ if (num >= 0 && num < ARRAY_SIZE(testcases)) {
+ if (run_testcase(testcases + num))
+ failed++;
+ else
+ passed++;
+ get_filename();
+ printf("-------------------------");
+ printf("%s TESTCASE%d FINISHED, %s", test_group.name, num + 1, passed ? "passed" : "failed");
+ printf("-------------------------\n\n");
+ } else {
+ for (num = 0; num < ARRAY_SIZE(testcases); num++) {
+ if (testcases[num].manual)
+ continue;
+ if (run_testcase(testcases + num))
+ failed++;
+ else
+ passed++;
+ }
+ get_filename();
+ printf("-------------------------");
+ printf("%s ALL %d TESTCASES FINISHED, passing: %d, failing: %d", test_group.name, passed + failed, passed, failed);
+ printf("-------------------------\n\n");
+ }
+#ifdef post_hook
+ post_hook();
+#endif
+
+ close_device(dev_fd);
+ return failed ? -1 : 0;
+}
diff --git a/tools/testing/sharepool/libs/sem_use.c b/tools/testing/sharepool/libs/sem_use.c
new file mode 100644
index 000000000000..da4dc98d8849
--- /dev/null
+++ b/tools/testing/sharepool/libs/sem_use.c
@@ -0,0 +1,116 @@
+#include "sem_use.h"
+#include <sys/sem.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <string.h>
+
+int sem_set_value(int semid, short val)
+{
+ int ret;
+ if (val) {
+ if (val < 0) {
+ printf("sem cannot be set to a negative value.\n");
+ return -1;
+ }
+ union semun {
+ int val; /* Value for SETVAL */
+ struct semid_ds *buf; /* Buffer for IPC_STAT, IPC_SET */
+ unsigned short *array; /* Array for GETALL, SETALL */
+ struct seminfo *__buf; /* Buffer for IPC_INFO
+ (Linux-specific) */
+ };
+ union semun su;
+ su.val = val;
+ ret = semctl(semid, 0, SETVAL, su);
+ } else {
+ ret = semctl(semid, 0, SETVAL, 0);
+ }
+
+ return ret;
+}
+
+int sem_get_value(int semid)
+{
+ int ret;
+ ret = semctl(semid, 0, GETVAL);
+ return ret;
+}
+
+int sem_dec_by_one(int semid)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_inc_by_one(int semid)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = 1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_dec_by_val(int semid, short val)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -val,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_inc_by_val(int semid, short val)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = val,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_check_zero(int semid)
+{
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = 0,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+ return 0;
+}
+
+int sem_create(key_t semkey, char *name)
+{
+ /* a valid semid is never negative, so -1 can signal failure */
+ int semid = semget(semkey, 1, IPC_CREAT | 0666);
+ if (semid < 0) {
+ printf("open semaphore %s failed, errno: %s\n", name, strerror(errno));
+ return -1;
+ }
+ sem_set_value(semid, 0);
+
+ return semid;
+}
+
+int sem_close(int semid)
+{
+ if (semctl(semid, 0, IPC_RMID) != 0) {
+ printf("sem remove failed, errno: %s\n", strerror(errno));
+ return -1;
+ }
+ return 0;
+}
diff --git a/tools/testing/sharepool/libs/sem_use.h b/tools/testing/sharepool/libs/sem_use.h
new file mode 100644
index 000000000000..7109f4597586
--- /dev/null
+++ b/tools/testing/sharepool/libs/sem_use.h
@@ -0,0 +1,13 @@
+
+#include <sys/sem.h>
+
+int sem_set_value(int semid, short val);
+int sem_get_value(int semid);
+int sem_dec_by_one(int semid);
+int sem_inc_by_one(int semid);
+int sem_dec_by_val(int semid, short val);
+int sem_inc_by_val(int semid, short val);
+int sem_check_zero(int semid);
+int sem_create(key_t semkey, char *name);
+int sem_close(int semid);
+
diff --git a/tools/testing/sharepool/libs/sharepool_lib.c b/tools/testing/sharepool/libs/sharepool_lib.c
new file mode 100644
index 000000000000..4a0b1e0c01e5
--- /dev/null
+++ b/tools/testing/sharepool/libs/sharepool_lib.c
@@ -0,0 +1,218 @@
+/*
+ * compile: gcc sharepool_lib.c -shared -fPIC -o sharepool_lib.so
+ */
+
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/types.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <errno.h>
+
+#include "sharepool_lib.h"
+
+int open_device()
+{
+ int fd;
+
+ fd = open(DEVICE_FILE_NAME, O_RDWR);
+ if (fd < 0) {
+ printf("open device %s failed\n", DEVICE_FILE_NAME);
+ }
+
+ return fd;
+}
+
+void close_device(int fd)
+{
+ if (fd > 0) {
+ close(fd);
+ }
+}
+
+int ioctl_add_group(int fd, struct sp_add_group_info *info)
+{
+ int ret;
+
+ /* Reject invalid flags: the original testcases define the info on the
+ * stack and may leave the new flag members uninitialized */
+ if (info->flag & ~SPG_FLAG_NON_DVPP)
+ info->flag = 0;
+
+ ret = ioctl(fd, SP_IOCTL_ADD_GROUP, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_ADD_GROUP failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_alloc(int fd, struct sp_alloc_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_ALLOC, info);
+ if (ret < 0) {
+ printf("ioctl SP_IOCTL_ALLOC failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_free(int fd, struct sp_alloc_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_FREE, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_FREE failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_u2k(int fd, struct sp_make_share_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_U2K, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_U2K failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_k2u(int fd, struct sp_make_share_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_K2U, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_K2U failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_unshare(int fd, struct sp_make_share_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_UNSHARE, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_UNSHARE failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_find_group_by_pid(int fd, struct sp_group_id_by_pid_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_FIND_GROUP_BY_PID, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_FIND_GROUP_BY_PID failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_judge_addr(int fd, unsigned long addr)
+{
+ /* return true or false */
+ return ioctl(fd, SP_IOCTL_JUDGE_ADDR, &addr);
+}
+
+int ioctl_vmalloc(int fd, struct vmalloc_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_VMALLOC, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_VMALLOC failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_vmalloc_hugepage(int fd, struct vmalloc_info *info)
+{
+ int ret;
+
+ ret = ioctl(fd, SP_IOCTL_VMALLOC_HUGEPAGE, info);
+ if (ret < 0) {
+ //printf("ioctl SP_IOCTL_VMALLOC failed, errno is %d\n", errno);
+ }
+
+ return ret;
+}
+
+int ioctl_vfree(int fd, struct vmalloc_info *info)
+{
+ /* the underlying free operation has no return value */
+ return ioctl(fd, SP_IOCTL_VFREE, info);
+}
+
+int ioctl_karea_access(int fd, struct karea_access_info *info)
+{
+ return ioctl(fd, SP_IOCTL_KACCESS, info);
+}
+
+int ioctl_walk_page_range(int fd, struct sp_walk_page_range_info *info)
+{
+ return ioctl(fd, SP_IOCTL_WALK_PAGE_RANGE, info);
+}
+
+int ioctl_walk_page_free(int fd, struct sp_walk_page_range_info *info)
+{
+ return ioctl(fd, SP_IOCTL_WALK_PAGE_FREE, info);
+}
+
+int ioctl_config_dvpp_range(int fd, struct sp_config_dvpp_range_info *info)
+{
+ // return ioctl(fd, SP_IOCTL_CONFIG_DVPP_RANGE, info);
+ return 0;
+}
+
+int ioctl_register_notifier_block(int fd, struct sp_notifier_block_info *info)
+{
+ return ioctl(fd, SP_IOCTL_REGISTER_NOTIFIER_BLOCK, info);
+}
+
+int ioctl_unregister_notifier_block(int fd, struct sp_notifier_block_info *info)
+{
+ return ioctl(fd, SP_IOCTL_UNREGISTER_NOTIFIER_BLOCK, info);
+}
+
+int ioctl_del_from_group(int fd, struct sp_del_from_group_info *info)
+{
+ return ioctl(fd, SP_IOCTL_DEL_FROM_GROUP, info);
+}
+
+int ioctl_id_of_current(int fd, struct sp_id_of_curr_info *info)
+{
+ return ioctl(fd, SP_IOCTL_ID_OF_CURRENT, info);
+}
+
+/* test for sp_walk_data == NULL */
+int ioctl_walk_page_range_null(int fd, struct sp_walk_page_range_info *info)
+{
+ return ioctl(fd, SP_IOCTL_WALK_PAGE_RANGE_NULL, info);
+}
+
+int ioctl_hpage_reg_test_suite(int fd, void *info)
+{
+ return ioctl(fd, SP_IOCTL_HPAGE_REG_TESTSUITE, info);
+}
+
+int ioctl_hpage_reg_after_alloc(int fd, void *info)
+{
+ return ioctl(fd, SP_IOCTL_HPAGE_REG_AFTER_ALLOC, info);
+}
+
+int ioctl_hpage_reg_test_exec(int fd, void *info)
+{
+ return ioctl(fd, SP_IOCTL_HPAGE_REG_EXEC, info);
+}
\ No newline at end of file
diff --git a/tools/testing/sharepool/libs/sharepool_lib.h b/tools/testing/sharepool/libs/sharepool_lib.h
new file mode 100644
index 000000000000..0754353d4962
--- /dev/null
+++ b/tools/testing/sharepool/libs/sharepool_lib.h
@@ -0,0 +1,534 @@
+#include <sys/mman.h> // for PROT_READ and PROT_WRITE
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <sys/resource.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "sharepool_dev.h"
+#include "test_lib.h"
+
+#define SP_HUGEPAGE (1 << 0)
+#define SP_HUGEPAGE_ONLY (1 << 1)
+#define SP_DVPP (1 << 2)
+#define SP_PROT_RO (1 << 16)
+#define SP_PROT_FOCUS (1 << 17)
+#define SP_SPEC_NODE_ID (1 << 3)
+
+#define NODES_SHIFT 10UL
+#define DEVICE_ID_BITS 4UL
+#define DEVICE_ID_MASK ((1UL << DEVICE_ID_BITS) - 1UL)
+#define DEVICE_ID_SHIFT 32UL
+#define NODE_ID_BITS NODES_SHIFT
+#define NODE_ID_MASK ((1UL << NODE_ID_BITS) - 1UL)
+#define NODE_ID_SHIFT (DEVICE_ID_SHIFT + DEVICE_ID_BITS)
+
+#define SPG_ID_DEFAULT 0
+#define SPG_ID_MIN 1
+#define SPG_ID_MAX 99999
+#define SPG_ID_AUTO_MIN 100000
+#define SPG_ID_AUTO_MAX 199999
+#define SPG_ID_AUTO 200000
+
+#define SPG_ID_NONE (-1)
+#define SPG_FLAG_NON_DVPP 1UL
+
+#define DVPP_16G 0x400000000UL
+#define DVPP_BASE 0x100000000000ULL
+#define DVPP_END (DVPP_BASE + DVPP_16G * 64)
+
+#define MMAP_SHARE_POOL_NORMAL_START 0xe80000000000UL
+#define MMAP_SHARE_POOL_RO_SIZE 0x1000000000UL
+#define MMAP_SHARE_POOL_DVPP_START 0xf00000000000UL
+#define MMAP_SHARE_POOL_RO_START (MMAP_SHARE_POOL_DVPP_START - MMAP_SHARE_POOL_RO_SIZE)
+
+int open_device();
+void close_device(int fd);
+
+int ioctl_add_group(int fd, struct sp_add_group_info *info);
+int ioctl_alloc(int fd, struct sp_alloc_info *info);
+int ioctl_free(int fd, struct sp_alloc_info *info);
+int ioctl_u2k(int fd, struct sp_make_share_info *info);
+int ioctl_k2u(int fd, struct sp_make_share_info *info);
+int ioctl_unshare(int fd, struct sp_make_share_info *info);
+int ioctl_find_group_by_pid(int fd, struct sp_group_id_by_pid_info *info);
+int ioctl_judge_addr(int fd, unsigned long addr);
+int ioctl_vmalloc(int fd, struct vmalloc_info *info);
+int ioctl_vmalloc_hugepage(int fd, struct vmalloc_info *info);
+int ioctl_vfree(int fd, struct vmalloc_info *info);
+int ioctl_karea_access(int fd, struct karea_access_info *info);
+int ioctl_walk_page_range(int fd, struct sp_walk_page_range_info *info);
+int ioctl_walk_page_free(int fd, struct sp_walk_page_range_info *info);
+int ioctl_config_dvpp_range(int fd, struct sp_config_dvpp_range_info *info);
+int ioctl_register_notifier_block(int fd, struct sp_notifier_block_info *info);
+int ioctl_unregister_notifier_block(int fd, struct sp_notifier_block_info *info);
+int ioctl_del_from_group(int fd, struct sp_del_from_group_info *info);
+/* for error-handling path test */
+int ioctl_walk_page_range_null(int fd, struct sp_walk_page_range_info *info);
+
+static inline int ioctl_find_first_group(int fd, int pid)
+{
+ int spg_id, num = 1, ret;
+ struct sp_group_id_by_pid_info info = {
+ .pid = pid,
+ .spg_ids = &spg_id,
+ .num = &num,
+ };
+ ret = ioctl_find_group_by_pid(fd, &info);
+
+ return ret < 0 ? ret : spg_id;
+}
+
+#define KAREA_ACCESS_CHECK(val, address, sz, out)\
+ do { \
+ struct karea_access_info __karea_info = { \
+ .mod = KAREA_CHECK, \
+ .value = val, \
+ .addr = address, \
+ .size = sz, \
+ }; \
+ ret = ioctl_karea_access(dev_fd, &__karea_info);\
+ if (ret < 0) { \
+			pr_info("karea check failed, errno: %d", errno);\
+ goto out; \
+ } \
+ } while (0)
+
+#define KAREA_ACCESS_SET(val, address, sz, out) \
+ do { \
+ struct karea_access_info __karea_info = { \
+ .mod = KAREA_SET, \
+ .value = val, \
+ .addr = address, \
+ .size = sz, \
+ }; \
+ ret = ioctl_karea_access(dev_fd, &__karea_info);\
+ if (ret < 0) { \
+			pr_info("karea set failed, errno: %d", errno);\
+ goto out; \
+ } \
+ } while (0)
+
+static int dev_fd;
+
+static inline unsigned long wrap_vmalloc(unsigned long size, bool ishuge)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = size,
+ };
+ if (ishuge) {
+ ret = ioctl_vmalloc_hugepage(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc_hugepage failed, errno: %d", errno);
+ return 0;
+ }
+ } else {
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return 0;
+ }
+ }
+ return ka_info.addr;
+}
+
+static inline void wrap_vfree(unsigned long addr)
+{
+ struct vmalloc_info ka_info = {
+ .addr = addr,
+ };
+ ioctl_vfree(dev_fd, &ka_info);
+}
+
+static inline int wrap_add_group_flag(int pid, int prot, int spg_id, unsigned long flag)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = prot,
+ .spg_id = spg_id,
+ .flag = flag,
+ };
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ ret = ag_info.spg_id;
+out:
+ return ret;
+}
+
+static inline int wrap_add_group(int pid, int prot, int spg_id)
+{
+ return wrap_add_group_flag(pid, prot, spg_id, 0);
+}
+
+static inline int wrap_add_group_non_dvpp(int pid, int prot, int spg_id)
+{
+ return wrap_add_group_flag(pid, prot, spg_id, SPG_FLAG_NON_DVPP);
+}
+
+static inline unsigned long wrap_k2u(unsigned long kva, unsigned long size, int spg_id, unsigned long sp_flags)
+{
+ int ret;
+ unsigned long uva = 0;
+ struct sp_make_share_info k2u_infos = {
+ .kva = kva,
+ .size = size,
+ .spg_id = spg_id,
+ .sp_flags = sp_flags,
+ };
+ TEST_CHECK(ioctl_k2u(dev_fd, &k2u_infos), out);
+ uva = k2u_infos.addr;
+
+out:
+ return uva;
+}
+
+static inline unsigned long wrap_u2k(unsigned long uva, unsigned long size)
+{
+ int ret;
+
+ struct sp_make_share_info u2k_info = {
+ .uva = uva,
+ .size = size,
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return 0;
+ }
+
+ return u2k_info.addr;
+}
+
+static inline int wrap_walk_page_range(unsigned long uva, unsigned long size)
+{
+ int ret = 0;
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = uva,
+ .size = size,
+ };
+ TEST_CHECK(ioctl_walk_page_range(dev_fd, &wpr_info), out);
+out:
+ return ret;
+}
+
+static inline int wrap_unshare(unsigned long addr, unsigned long size)
+{
+ int ret;
+ struct sp_make_share_info info = {
+ .addr = addr,
+ .size = size,
+ };
+
+ TEST_CHECK(ioctl_unshare(dev_fd, &info), out);
+out:
+ return ret;
+}
+
+/*
+ * return:
+ * (void *)-1: alloc failed, see errno for the reason.
+ * other values: success, the allocated address.
+ */
+static inline void *wrap_sp_alloc(int spg_id, unsigned long size, unsigned long flag)
+{
+ int ret;
+ unsigned long addr = 0;
+ struct sp_alloc_info info = {
+ .spg_id = spg_id,
+ .flag = flag,
+ .size = size,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &info), out);
+ addr = info.addr;
+
+out:
+	/* On failure return (void *)-1: we cannot tell an errno from a
+	 * normal address, and a valid va will never be -1.
+	 */
+	return ret < 0 ? (void *)-1 : (void *)addr;
+}
+
+static inline int wrap_sp_free(void *addr)
+{
+ int ret;
+ struct sp_alloc_info info = {
+ .addr = (unsigned long)addr,
+ .spg_id = SPG_ID_DEFAULT,
+ };
+ TEST_CHECK(ioctl_free(dev_fd, &info), out);
+
+out:
+ return ret;
+}
+
+static inline int wrap_sp_free_by_id(void *addr, int spg_id)
+{
+ int ret = 0;
+ struct sp_alloc_info info = {
+ .addr = (unsigned long)addr,
+ .spg_id = spg_id,
+ };
+ TEST_CHECK(ioctl_free(dev_fd, &info), out);
+
+out:
+ return ret;
+}
+
+static inline int wrap_sp_group_id_by_pid(pid_t pid, int spg_id[], int *num)
+{
+ int ret;
+ struct sp_group_id_by_pid_info info = {
+ .pid = pid,
+ .spg_ids = spg_id,
+ .num = num,
+ };
+ TEST_CHECK(ioctl_find_group_by_pid(dev_fd, &info), out);
+
+out:
+ return ret;
+}
+
+static inline int wrap_del_from_group(pid_t pid, int spg_id)
+{
+ int ret;
+ struct sp_del_from_group_info info = {
+ .pid = pid,
+ .spg_id = spg_id,
+ };
+ TEST_CHECK(ioctl_del_from_group(dev_fd, &info), out);
+
+out:
+ return ret;
+}
+
+static inline int wrap_sp_id_of_current(void)
+{
+ int ret;
+ struct sp_id_of_curr_info info;
+ ret = ioctl(dev_fd, SP_IOCTL_ID_OF_CURRENT, &info);
+ if (ret < 0)
+ return -errno;
+
+ return info.spg_id;
+}
+
+static inline int sharepool_log(char *log_title)
+{
+ printf("%s", log_title);
+
+ char *logname[2] = {"sp_proc_log", "sp_spa_log"};
+ char *procname[2] = {"/proc/sharepool/proc_stat", "/proc/sharepool/spa_stat"};
+
+ read_proc(procname[0], logname[0], SIZE, 0);
+ read_proc(procname[1], logname[1], SIZE, 0);
+
+ return 0;
+}
+
+static inline int sharepool_print(void)
+{
+ printf("\n%20s", " ****** ");
+ printf("sharepool_print");
+ printf("%-20s\n", " ****** ");
+
+	char *logname[3] = {
+ "1.log",
+ "2.log",
+ "3.log",
+ };
+
+	char *procname[3] = {
+ "/proc/sharepool/proc_stat",
+ "/proc/sharepool/proc_overview",
+ "/proc/sharepool/spa_stat"
+ };
+
+ read_proc(procname[0], logname[0], SIZE, 1);
+ read_proc(procname[1], logname[1], SIZE, 1);
+ read_proc(procname[2], logname[2], SIZE, 1);
+
+ return 0;
+}
+
+static inline int spa_stat(void)
+{
+ printf("\n%20s", " ****** ");
+ printf("cat /proc/sharepool/spa_stat");
+ printf("%-20s\n", " ****** ");
+
+ char *logname = "spa_stat_log";
+ char *procname = "/proc/sharepool/spa_stat";
+
+ read_proc(procname, logname, SIZE, 1);
+
+ return 0;
+}
+
+static inline int proc_stat(void)
+{
+ printf("\n%20s", " ****** ");
+ printf("cat /proc/sharepool/proc_stat");
+ printf("%-20s\n", " ****** ");
+
+ char *logname = "proc_stat_log";
+ char *procname = "/proc/sharepool/proc_stat";
+
+ read_proc(procname, logname, SIZE, 1);
+
+ return 0;
+}
+
+static inline int proc_overview(void)
+{
+ printf("\n%20s", " ****** ");
+ printf("cat /proc/sharepool/proc_overview");
+ printf("%-20s\n", " ****** ");
+
+ char *logname = "proc_overview_log";
+ char *procname = "/proc/sharepool/proc_overview";
+
+ read_proc(procname, logname, SIZE, 1);
+
+ return 0;
+}
+
+static inline int cat_attr(char *attr)
+{
+ printf("\n%20s", " ****** ");
+ printf("cat %s", attr);
+ printf("%-20s\n", " ****** ");
+
+ char *logname = "attr_log";
+
+ read_proc(attr, logname, SIZE, 1);
+
+ return 0;
+}
+
+static int create_multi_groups(int pid, int group_num, int *group_ids)
+{
+	int ret = 0;
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < group_num; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", pid, group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int add_multi_groups(int pid, int group_num, int *group_ids)
+{
+	int ret = 0;
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < group_num; i++) {
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", pid, group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static inline void *ioctl_alloc_huge_memory(int nid, int flags, unsigned long addr, unsigned long size)
+{
+ int ret;
+ struct alloc_huge_memory alloc_info = {
+ .nid = nid,
+ .flags = flags,
+ .addr = addr,
+ .size = size,
+ };
+
+ ret = ioctl(dev_fd, SP_IOCTL_ALLOC_HUGE_MEMORY, &alloc_info);
+ if (ret < 0)
+ return NULL;
+
+ return (void *)alloc_info.addr;
+}
+
+static inline int ioctl_check_memory_node(unsigned long uva, unsigned long len, int node)
+{
+ int ret;
+ struct check_memory_node info = {
+ .uva = uva,
+ .len = len,
+ .node = node,
+ };
+
+ ret = ioctl(dev_fd, SP_IOCTL_CHECK_MEMORY_NODE, &info);
+ if (ret < 0)
+ return -errno;
+
+ return 0;
+}
+
+static inline int ioctl_kthread_start(int fd, struct sp_kthread_info *info)
+{
+ int ret;
+
+ pr_info("we are kthread");
+ ret = ioctl(fd, SP_IOCTL_KTHREAD_START, info);
+ if (ret < 0) {
+		pr_info("ioctl kthread start failed, ret: %d", ret);
+ }
+ return ret;
+}
+
+static inline int ioctl_kthread_end(int fd, struct sp_kthread_info *info)
+{
+ int ret;
+
+ pr_info("we are kthread end");
+	ret = ioctl(fd, SP_IOCTL_KTHREAD_END, info);
+ if (ret < 0) {
+		pr_info("ioctl kthread end failed, ret: %d", ret);
+ }
+ return ret;
+}
+
+static inline int ioctl_kmalloc(int fd, struct vmalloc_info *info)
+{
+ int ret;
+
+ pr_info("we are kmalloc");
+ ret = ioctl(fd, SP_IOCTL_KMALLOC, info);
+ if (ret < 0) {
+		pr_info("ioctl kmalloc failed, ret: %d", ret);
+ }
+ return ret;
+}
+
+static inline int ioctl_kfree(int fd, struct vmalloc_info *info)
+{
+ int ret;
+
+ pr_info("we are kfree");
+ ret = ioctl(fd, SP_IOCTL_KFREE, info);
+ if (ret < 0) {
+		pr_info("ioctl kfree failed, ret: %d", ret);
+ }
+ return ret;
+}
diff --git a/tools/testing/sharepool/libs/test_lib.h b/tools/testing/sharepool/libs/test_lib.h
new file mode 100644
index 000000000000..161390c73380
--- /dev/null
+++ b/tools/testing/sharepool/libs/test_lib.h
@@ -0,0 +1,324 @@
+#include <string.h>
+#include <errno.h>
+#include <sys/mman.h> // for PROT_READ and PROT_WRITE
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+#include <stdbool.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <sys/resource.h>
+#include <stdlib.h>
+
+#define PMD_SIZE 0x200000
+#define PAGE_SIZE 4096UL
+
+#define MAX_ERRNO 4095
+#define IS_ERR_VALUE(x) ((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)
+#define COMMENT_LEN 1000 // testcase cmt
+#define SIZE 5000 // size for cat output
+
+struct testcase_s {
+ int (*func)(void);
+ const char *name;
+ bool run_as_child;
+ bool exit_signal_check;
+ bool manual;
+ const char *expect_ret;
+ int signal;
+ const char *comment;
+};
+
+#define TEST_ADD(_num, cmt, expc) \
+ struct testcase_s test##_num = { \
+ .name = "tc"#_num, \
+ .func = testcase##_num, \
+ .comment = cmt, \
+ .expect_ret = expc, \
+ }
+
+#define STRLENGTH 500
+struct test_group {
+ char name[STRLENGTH];
+ struct testcase_s *testcases;
+};
+
+char* extract_filename(char *filename, char filepath[])
+{
+	int i = 0, last_slash = -1, real_len = 0;
+ while (i < STRLENGTH && filepath[i] != '\0') {
+ if (filepath[i] == '/')
+ last_slash = i;
+ i++;
+ real_len++;
+	}
+
+ if (real_len >= STRLENGTH) {
+		printf("file path too long\n");
+ return NULL;
+ }
+
+ for (int j = last_slash + 1; j <= real_len; j++)
+ filename[j - last_slash - 1] = filepath[j];
+
+ return filename;
+}
+
+static inline int testcase_stub_pass(void) { return 0; }
+
+#define TESTCASE(tc, cmt) { .func = tc, .name = #tc, .manual = false, .run_as_child = false, .exit_signal_check = false, .comment = cmt},
+#define TESTCASE_CHILD(tc, cmt) { .func = tc, .name = #tc, .manual = false, .run_as_child = true, .exit_signal_check = false, .comment = cmt},
+#define TESTCASE_CHILD_MANUAL(tc, cmt) { .func = tc, .name = #tc, .manual = true, .run_as_child = true, .exit_signal_check = false, .comment = cmt},
+#define TESTCASE_CHILD_SIGNAL(tc, sig, cmt) { .func = tc, .name = #tc, .manual = false, .run_as_child = true, .exit_signal_check = true, .signal = sig, .comment = cmt},
+#define TESTCASE_STUB(tc, cmt) { .func = testcase_stub_pass, .name = #tc, .manual = false, .run_as_child = false, .exit_signal_check = false, .comment = cmt},
+
+
+#define ARRAY_SIZE(array) (sizeof(array) / sizeof(array[0]))
+
+#define pr_info(fmt, args...) \
+ printf("[file:%s, func:%s, line:%d] " fmt "\n", __FILE__, __func__, __LINE__, ##args)
+
+#define SEM_INIT(sync, idx) \
+ do { \
+ char sem_name[256]; \
+ snprintf(sem_name, 256, "/%s%d_%d", __FILE__, __LINE__, idx); \
+		sync = sem_open(sem_name, O_CREAT, 0600, 0); \
+ if (sync == SEM_FAILED) { \
+ pr_info("sem_open failed"); \
+ return -1; \
+ } \
+ sem_unlink(sem_name); \
+ } while (0)
+
+#define SEM_WAIT(sync) \
+ do { \
+ do { \
+ ret = sem_wait(sync); \
+ } while (ret && errno == EINTR); \
+ } while (0)
+
+#define FORK_CHILD_ARGS(pid, child) \
+ do { \
+ pid = fork(); \
+ if (pid < 0) { \
+ pr_info("fork failed"); \
+ return -1; \
+ } else if (pid == 0) \
+ exit(child); \
+ } while (0)
+
+static inline int deadloop_child(void)
+{
+ while (1);
+ return -1;
+}
+
+static inline int sleep_child(void)
+{
+ pause();
+ return -1;
+}
+
+#define FORK_CHILD_DEADLOOP(pid) FORK_CHILD_ARGS(pid, deadloop_child())
+#define FORK_CHILD_SLEEP(pid) FORK_CHILD_ARGS(pid, sleep_child())
+
+#define TEST_CHECK(fn, out) \
+ do { \
+ ret = fn; \
+ if (ret < 0) { \
+ pr_info(#fn " failed, errno: %d", errno); \
+ goto out; \
+ } \
+ } while (0)
+
+#define TEST_CHECK_FAIL(fn, err, out) \
+ do { \
+ ret = fn; \
+ if (!(ret < 0 && errno == err)) { \
+ pr_info(#fn " failed, ret: %d, errno: %d", ret, errno); \
+ ret = -1; \
+ goto out; \
+ } else \
+ ret = 0; \
+ } while (0)
+
+#define KILL_CHILD(pid) \
+ do { \
+ kill(pid, SIGKILL); \
+ waitpid(pid, NULL, 0); \
+ } while (0)
+
+#define CHECK_CHILD_STATUS(pid) \
+ do { \
+ int __status; \
+ waitpid(pid, &__status, 0);\
+ if (!(WIFEXITED(__status) && !WEXITSTATUS(__status))) { \
+ if (WIFSIGNALED(__status)) { \
+ pr_info("child pid %d, killed by signal %d", pid, WTERMSIG(__status)); \
+ } else { \
+ pr_info("child, pid: %d, exit unexpected, status: %d", pid, __status);\
+ } \
+ ret = -1; \
+ } else { \
+ pr_info("child pid %d exit normal.", pid); \
+ } \
+ } while (0)
+
+#define WAIT_CHILD_STATUS(pid, out) \
+ do { \
+ int __status; \
+ waitpid(pid, &__status, 0);\
+ if (!(WIFEXITED(__status) && !WEXITSTATUS(__status))) { \
+ if (WIFSIGNALED(__status)) { \
+ pr_info("child pid %d, killed by signal %d", pid, WTERMSIG(__status)); \
+ } else { \
+ pr_info("child pid: %d, exit unexpected, status: %d", pid, WEXITSTATUS(__status));\
+ } \
+ ret = -1; \
+ goto out; \
+ } \
+ } while (0)
+
+#define WAIT_CHILD_SIGNAL(pid, sig, out)\
+ do { \
+ int __status; \
+ waitpid(pid, &__status, 0); \
+ if (!WIFSIGNALED(__status)) { \
+ pr_info("child, pid: %d, exit unexpected, status: %d", pid, __status);\
+ ret = -1; \
+ goto out; \
+ } else if (WTERMSIG(__status) != sig) { \
+ pr_info("child, pid: %d, killed by unexpected sig:%d, expected:%d ", pid, WTERMSIG(__status), sig);\
+ ret = -1; \
+ goto out; \
+ } \
+ } while (0)
+
+
+static inline int setCore(void)
+{
+ struct rlimit core_lim;
+ if (getrlimit(RLIMIT_CORE, &core_lim)) {
+ printf("getrlimit failed, err: %s\n", strerror(errno));
+ return -1;
+ } else
+ printf("current rlimit for RLIMIT_CORE is: %lx, %lx\n", core_lim.rlim_cur, core_lim.rlim_max);
+ core_lim.rlim_cur = RLIM_INFINITY;
+ if (setrlimit(RLIMIT_CORE, &core_lim)) {
+ printf("setrlimit failed, err: %s\n", strerror(errno));
+ return -1;
+ } else
+ printf("setrlimit for RLIMIT_CORE to unlimited\n");
+ return 0;
+}
+
+static inline int generateCoredump(void)
+{
+ char *a = NULL;
+ *a = 5; /* SIGSEGV */
+ return 0;
+}
+
+static inline void read_proc(char procname[], char logname[], int size, int print_or_log)
+{
+ FILE *proc, *log;
+ char str[SIZE];
+
+ log = fopen(logname, "a");
+ if (!log) {
+ printf("open %s failed.\n", logname);
+ return;
+ }
+
+ proc = fopen(procname, "r");
+	if (!proc) {
+		printf("open %s failed.\n", procname);
+		fclose(log);
+		return;
+	}
+
+ // read information into a string
+ if (print_or_log)
+ printf("\n ----- %s -----\n", procname);
+
+ while (fgets(str, size, proc) != NULL) {
+ if (print_or_log)
+ printf("%s", str);
+ fputs(str, log);
+ }
+
+ fclose(proc);
+ fclose(log);
+
+ return;
+}
+
+static inline void read_attr(char procname[], char result[], int size)
+{
+ FILE *proc;
+ char str[SIZE];
+
+ proc = fopen(procname, "r");
+ if (!proc) {
+ printf("open %s failed.\n", procname);
+ return;
+ }
+
+	/* keep only the last line of the file */
+	while (fgets(str, size, proc) != NULL)
+		strcpy(result, str);
+
+ fclose(proc);
+ return;
+}
+
+static int get_attr(char *attr, char** result, int row_size, int col_size,
+ int *row_real)
+{
+ // get attr, put result into result array
+ FILE *proc;
+ char str[SIZE];
+ int row = 0;
+
+ *row_real = 0;
+
+ proc = fopen(attr, "r");
+ if (!proc) {
+ printf("open %s failed.\n", attr);
+ return -1;
+ }
+
+ while (fgets(str, SIZE, proc) != NULL) {
+		if (strlen(str) > col_size) {
+			printf("get_attr %s failed, column size %d < strlen %zu\n",
+				attr, col_size, strlen(str));
+ fclose(proc);
+ return -1;
+ }
+
+ if (row >= row_size) {
+ printf("get_attr %s failed, row limit %d too small\n",
+ attr, row_size);
+ fclose(proc);
+ return -1;
+ }
+
+ strcat(result[row++], str);
+ (*row_real)++;
+ }
+
+ fclose(proc);
+ return 0;
+}
+
+static inline int mem_show(void)
+{
+ char *meminfo = "/proc/meminfo";
+ char *logname = "meminfo_log";
+
+ read_proc(meminfo, logname, SIZE, 1);
+
+ return 0;
+}
+
diff --git a/tools/testing/sharepool/module/Makefile b/tools/testing/sharepool/module/Makefile
new file mode 100644
index 000000000000..aaf49e10112d
--- /dev/null
+++ b/tools/testing/sharepool/module/Makefile
@@ -0,0 +1,14 @@
+ifneq ($(KERNELRELEASE),)
+obj-m += sharepool_dev.o
+obj-m += check_sharepool_fault.o
+obj-m += check_sharepool_alloc.o
+else
+all:
+ make -C $(KERNEL_DIR) M=$$PWD modules
+
+clean:
+ make -C $(KERNEL_DIR) M=$$PWD clean
+
+install:
+ cp *.ko $(TOOL_BIN_DIR)
+endif
diff --git a/tools/testing/sharepool/module/check_sharepool_alloc.c b/tools/testing/sharepool/module/check_sharepool_alloc.c
new file mode 100644
index 000000000000..e6e9f7e806a2
--- /dev/null
+++ b/tools/testing/sharepool/module/check_sharepool_alloc.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2019. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 25 07:38:03 2021
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/kprobes.h>
+#include <linux/version.h>
+#include <linux/namei.h>
+#include <linux/stacktrace.h>
+#include <linux/delay.h>
+
+static void check(unsigned long addr)
+{
+ unsigned long long pfn;
+ struct mm_struct *mm;
+ struct vm_area_struct *vma;
+	unsigned long long val = 0;
+ unsigned long long vf;
+ pgd_t *pgdp;
+ pgd_t pgd;
+
+ p4d_t *p4dp, p4d;
+ pud_t *pudp, pud;
+ pmd_t *pmdp, pmd;
+ pte_t *ptep, pte;
+
+ mm = current->active_mm;
+	vma = find_vma(mm, addr);
+	if (!vma || vma->vm_start > addr)
+		return;
+
+ vf = VM_NORESERVE | VM_SHARE_POOL | VM_DONTCOPY | VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+
+ WARN((vma->vm_flags & vf) != vf, "vma->flags of sharepool is not expected");
+ pgdp = pgd_offset(mm, addr);
+ pgd = READ_ONCE(*pgdp);
+
+ do {
+ if (pgd_none(pgd) || pgd_bad(pgd))
+ break;
+
+ p4dp = p4d_offset(pgdp, addr);
+ p4d = READ_ONCE(*p4dp);
+ if (p4d_none(p4d) || p4d_bad(p4d))
+ break;
+
+ pudp = pud_offset(p4dp, addr);
+ pud = READ_ONCE(*pudp);
+ if (pud_none(pud) || pud_bad(pud))
+ break;
+
+ pmdp = pmd_offset(pudp, addr);
+ pmd = READ_ONCE(*pmdp);
+ val = pmd_val(pmd);
+ pfn = pmd_pfn(pmd);
+ if (pmd_none(pmd) || pmd_bad(pmd))
+ break;
+
+ ptep = pte_offset_map(pmdp, addr);
+ pte = READ_ONCE(*ptep);
+ val = pte_val(pte);
+ pfn = pte_pfn(pte);
+ pte_unmap(ptep);
+	} while (0);
+
+ if (vma->vm_flags & VM_MAYWRITE) {
+ if (val & PTE_RDONLY)
+ WARN(1, "Pte(0x%llx) has PTE_RDONLY(0x%llx)\n",
+ val, PTE_RDONLY);
+ if (!(val & PTE_DIRTY))
+ WARN(1, "Pte(0x%llx) has no PTE_DIRTY(0x%llx)\n",
+ val, PTE_DIRTY);
+ }
+
+ if (!(val & PTE_AF))
+		WARN(1, "Pte(0x%llx) has no PTE_AF(0x%llx)\n",
+ val, PTE_AF);
+/* The lru of a static hugepage points to hstat->activelist, so skip these checks:
+ struct folio *folio;
+ folio = pfn_folio(pfn);
+ if (list_empty(&folio->lru)) {
+ WARN(1, "folio->lru is not empty\n");
+ }
+
+ if (folio_test_ksm(folio) || folio_test_anon(folio) || folio->mapping) {
+ WARN(1, "folio has rmap\n");
+ }
+*/
+}
+
+static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
+{
+ unsigned long addr = regs->regs[0];
+ if (!IS_ERR_OR_NULL((void *)addr))
+ check(addr);
+
+ return 0;
+}
+
+static struct kretprobe krp = {
+	.handler = ret_handler,
+};
+
+static int kretprobe_init(void)
+{
+ int ret;
+ krp.kp.symbol_name = "__mg_sp_alloc_nodemask";
+	ret = register_kretprobe(&krp);
+	if (ret < 0)
+		pr_err("register_kretprobe failed, returned %d\n", ret);
+
+ return ret;
+}
+
+int mg_sp_alloc_nodemask_init(void)
+{
+	return kretprobe_init();
+}
+
+void mg_sp_alloc_nodemask_exit(void)
+{
+ unregister_kretprobe(&krp);
+}
+
+module_init(mg_sp_alloc_nodemask_init);
+module_exit(mg_sp_alloc_nodemask_exit);
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/sharepool/module/check_sharepool_fault.c b/tools/testing/sharepool/module/check_sharepool_fault.c
new file mode 100644
index 000000000000..b4729c326ccd
--- /dev/null
+++ b/tools/testing/sharepool/module/check_sharepool_fault.c
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2019. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 25 07:38:03 2021
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/kprobes.h>
+#include <linux/version.h>
+#include <linux/namei.h>
+#include <linux/stacktrace.h>
+
+static int handler_pre(struct kprobe *p, struct pt_regs *regs)
+{
+ struct vm_area_struct *vma = (void *)regs->regs[0];
+ unsigned long address = regs->regs[1];
+
+ if (vma->vm_flags & VM_SHARE_POOL) {
+ WARN(1, "fault of sharepool at 0x%lx\n", address);
+ }
+
+ return 0;
+}
+
+static struct kprobe kp = {
+ .pre_handler = handler_pre,
+};
+
+static int kprobe_init(void)
+{
+ int ret;
+ kp.symbol_name = "handle_mm_fault";
+ ret = register_kprobe(&kp);
+
+	if (ret < 0)
+		pr_err("register_kprobe failed, returned %d\n", ret);
+
+ return ret;
+}
+
+static void kprobe_uninit(void)
+{
+ unregister_kprobe(&kp);
+}
+
+int sharepool_fault_debug_init(void)
+{
+	return kprobe_init();
+}
+
+void sharepool_fault_debug_exit(void)
+{
+ kprobe_uninit();
+}
+
+module_init(sharepool_fault_debug_init);
+module_exit(sharepool_fault_debug_exit);
+MODULE_LICENSE("GPL");
diff --git a/tools/testing/sharepool/module/sharepool_dev.c b/tools/testing/sharepool/module/sharepool_dev.c
new file mode 100644
index 000000000000..d7eac14cd52c
--- /dev/null
+++ b/tools/testing/sharepool/module/sharepool_dev.c
@@ -0,0 +1,1130 @@
+/*
+ * sharepool_dev.c - Create an input/output character device for share pool
+ */
+#define pr_fmt(fmt) "sp_test: " fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/vmalloc.h>
+#include <linux/printk.h>
+#include <linux/ratelimit.h>
+#include <linux/notifier.h>
+#include <linux/mm.h>
+#include <linux/hugetlb.h>
+#include <linux/kthread.h>
+#include <linux/share_pool.h>
+
+#include "sharepool_dev.h"
+
+static int dev_open(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+struct task_struct *ktask;
+struct sp_kthread_info kt_info;
+int sp_kthread(void *index)
+{
+ int ret;
+	unsigned int size = 4096;
+ unsigned long flag = 0;
+ int spg_id = 1;
+ void *addr;
+
+ addr = mg_sp_alloc(size, flag, spg_id);
+ if (IS_ERR(addr)) {
+ pr_info("alloc failed as expected\n");
+ }
+
+ ret = mg_sp_free(0, 1);
+ if (ret < 0) {
+ pr_info("free failed as expected\n");
+ }
+
+ ret = mg_sp_unshare(0, 0, 1);
+ if (ret < 0) {
+ pr_info("unshare failed as expected\n");
+ }
+
+ addr = mg_sp_make_share_k2u(0, 0, 0, 1, 1);
+ if (IS_ERR(addr)) {
+ pr_info("k2u failed as expected\n");
+ }
+
+ addr = mg_sp_make_share_u2k(0, 0, 1);
+ if (IS_ERR(addr)) {
+		pr_info("u2k failed as expected\n");
+ }
+
+ return 0;
+}
+
+int sp_free_kthread(void *arg)
+{
+ int ret;
+ struct sp_kthread_info *info = (struct sp_kthread_info *)arg;
+
+ pr_info("in sp_free_kthread\n");
+ current->flags &= (~PF_KTHREAD);
+ ret = mg_sp_free(info->addr, info->spg_id);
+ if (ret < 0) {
+ pr_info("kthread free failed\n");
+ return ret;
+ }
+ return 0;
+}
+
+int sp_unshare_kthread(void *arg)
+{
+ int ret;
+ struct sp_kthread_info *info = (struct sp_kthread_info *)arg;
+
+ pr_info("in sp_unshare_kthread\n");
+ current->flags &= (~PF_KTHREAD);
+ ret = mg_sp_unshare(info->addr, info->size, info->spg_id);
+ if (ret < 0) {
+ pr_info("kthread unshare failed\n");
+ return ret;
+ }
+ return 0;
+}
+
+static int dev_sp_kthread_start(unsigned long __user *arg)
+{
+ int ret = 0;
+ pr_info("dev_sp_kthread\n");
+
+ ret = copy_from_user(&kt_info, (void __user *)arg, sizeof(struct sp_kthread_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ if (kt_info.type == 0) {
+ ktask = kthread_run(sp_kthread, NULL, "kthread test");
+		if (IS_ERR(ktask)) {
+ pr_info("kthread run fail\n");
+ return -ECHILD;
+ }
+ }
+
+ if (kt_info.type == 1) {
+ ktask = kthread_run(sp_free_kthread, &kt_info, "kthread free test");
+		if (IS_ERR(ktask)) {
+ pr_info("kthread run fail\n");
+ return -ECHILD;
+ }
+
+ }
+
+ if (kt_info.type == 2) {
+ ktask = kthread_run(sp_unshare_kthread, &kt_info, "kthread unshare test");
+		if (IS_ERR(ktask)) {
+ pr_info("kthread run fail\n");
+ return -ECHILD;
+ }
+ }
+
+ return 0;
+}
+
+static int dev_sp_kthread_end(unsigned long __user *arg)
+{
+ if (ktask) {
+ pr_info("we are going to end the kthread\n");
+ kthread_stop(ktask);
+ ktask = NULL;
+ }
+
+ return 0;
+}
+
+static long dev_sp_add_group(unsigned long __user *arg)
+{
+ struct sp_add_group_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_add_group_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_group_add_task(info.pid, info.prot, info.spg_id);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_group_add_task failed: %d, pid is %d, spg_id is %d\n",
+ __func__, ret, info.pid, info.spg_id);
+ return ret;
+ }
+
+ info.spg_id = ret;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_add_group_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+}
+
+static long dev_sp_alloc(unsigned long __user *arg)
+{
+ struct sp_alloc_info info;
+ void *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_alloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ addr = mg_sp_alloc(info.size, info.flag, info.spg_id);
+ if (IS_ERR(addr)) {
+ pr_err_ratelimited("%s sp_alloc failed: %ld\n", __func__, PTR_ERR(addr));
+ return PTR_ERR(addr);
+ }
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_alloc_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ ret = mg_sp_free(info.addr, SPG_ID_DEFAULT);
+ if (ret < 0)
+ pr_err("%s sp_free failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+}
+
+static long dev_sp_free(unsigned long __user *arg)
+{
+ struct sp_alloc_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_alloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_free(info.addr, info.spg_id);
+ if (ret < 0)
+ pr_err_ratelimited("%s sp_free failed: %d\n", __func__, ret);
+
+ return ret;
+}
+
+static long dev_sp_u2k(unsigned long __user *arg)
+{
+ struct sp_make_share_info info;
+ void *addr;
+ char *kva;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ addr = mg_sp_make_share_u2k(info.uva, info.size, info.pid);
+ if (IS_ERR(addr)) {
+ pr_err_ratelimited("%s u2k failed: %ld\n", __func__, PTR_ERR(addr));
+ return PTR_ERR(addr);
+ }
+
+ /* a small, easy checker */
+ if (info.u2k_checker) {
+ kva = (char *)addr;
+ if (kva[0] != 'd' || kva[PAGE_SIZE - 1] != 'c' ||
+ kva[PAGE_SIZE] != 'b' || kva[PAGE_SIZE * 2 - 1] != 'a') {
+ pr_err("%s u2k check normal page failed\n", __func__);
+ return -EFAULT;
+ }
+ }
+ if (info.u2k_hugepage_checker) {
+ kva = (char *)addr;
+ if (kva[0] != 'd' || kva[PMD_SIZE - 1] != 'c' ||
+ kva[PMD_SIZE] != 'b' || kva[PMD_SIZE * 2 - 1] != 'a') {
+ pr_err("%s u2k check hugepage failed\n", __func__);
+ return -EFAULT;
+ }
+ }
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ ret = mg_sp_unshare(info.addr, info.size, SPG_ID_DEFAULT);
+ if (ret < 0)
+ pr_err("%s sp_unshare failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+}
+
+static long dev_sp_k2u(unsigned long __user *arg)
+{
+ struct sp_make_share_info info;
+ void *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ addr = mg_sp_make_share_k2u(info.kva, info.size, info.sp_flags, info.pid, info.spg_id);
+ if (IS_ERR(addr)) {
+ pr_err_ratelimited("%s k2u failed: %ld\n", __func__, PTR_ERR(addr));
+ return PTR_ERR(addr);
+ }
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ ret = mg_sp_unshare(info.addr, info.size, SPG_ID_DEFAULT);
+ if (ret < 0)
+ pr_err("%s sp_unshare failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+}
+
+static long dev_sp_unshare(unsigned long __user *arg)
+{
+ struct sp_make_share_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_make_share_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_unshare(info.addr, info.size, info.spg_id);
+
+ if (ret < 0)
+ pr_err_ratelimited("%s sp_unshare failed: %d\n", __func__, ret);
+
+ return ret;
+}
+
+static long dev_sp_find_group_by_pid(unsigned long __user *arg)
+{
+ struct sp_group_id_by_pid_info info;
+ int ret = 0, num;
+	int spg_ids[1000]; /* assume a process joins no more than 1000 groups */
+ int *pspg_ids = spg_ids;
+ int *pnum = #
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ if (info.num) {
+ if (get_user(num, info.num)) {
+ pr_err("get num from user failed\n");
+ return -EFAULT;
+ }
+ } else
+ pnum = NULL;
+
+ if (!info.spg_ids)
+ pspg_ids = NULL;
+
+ ret = mg_sp_group_id_by_pid(info.pid, pspg_ids, pnum);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_group_id_by_pid failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+	if (info.spg_ids) {
+		ret = copy_to_user(info.spg_ids, spg_ids, sizeof(int) * num);
+		if (ret) {
+			pr_err("copy spg_ids to user failed\n");
+			return -EFAULT;
+		}
+	}
+
+	if (info.num && put_user(num, info.num)) {
+		pr_err("put num to user failed\n");
+		return -EFAULT;
+	}
+
+ return ret;
+}
+
+static long dev_sp_walk_page_range(unsigned long __user *arg)
+{
+ int ret = 0;
+ struct sp_walk_data wdata;
+ struct sp_walk_page_range_info wpinfo;
+
+ ret = copy_from_user(&wpinfo, arg, sizeof(wpinfo));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_walk_page_range(wpinfo.uva, wpinfo.size, current, &wdata);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_walk_page_range failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ wpinfo.pages = wdata.pages;
+ wpinfo.page_count = wdata.page_count;
+ wpinfo.uva_aligned = wdata.uva_aligned;
+ wpinfo.page_size = wdata.page_size;
+ wpinfo.is_hugepage = wdata.is_hugepage;
+ ret = copy_to_user(arg, &wpinfo, sizeof(wpinfo));
+	if (ret) {
+		pr_err("%s copy to user failed: %d\n", __func__, ret);
+ mg_sp_walk_page_free(&wdata);
+ }
+
+ return ret;
+}
+
+static long dev_sp_walk_page_free(unsigned long __user *arg)
+{
+	struct sp_walk_page_range_info wpinfo;
+	struct sp_walk_data wdata;
+	int ret;
+
+	ret = copy_from_user(&wpinfo, arg, sizeof(wpinfo));
+	if (ret) {
+		pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+	}
+
+	wdata.pages = wpinfo.pages;
+	wdata.page_count = wpinfo.page_count;
+	wdata.uva_aligned = wpinfo.uva_aligned;
+	wdata.page_size = wpinfo.page_size;
+	wdata.is_hugepage = wpinfo.is_hugepage;
+ mg_sp_walk_page_free(&wdata);
+
+ return 0;
+}
+
+static long dev_check_memory_node(unsigned long arg)
+{
+ int ret, i;
+ struct sp_walk_data wdata;
+ struct check_memory_node info;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+ ret = mg_sp_walk_page_range(info.uva, info.len, current, &wdata);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_walk_page_range failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ for (i = 0; i < wdata.page_count; i++) {
+ struct page *page = wdata.pages[i];
+
+ if (page_to_nid(page) != info.node) {
+ pr_err("check nid failed, i:%d, expect:%d, actual:%d\n",
+ i, info.node, page_to_nid(page));
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+
+out:
+ mg_sp_walk_page_free(&wdata);
+
+ return ret;
+}
+
+static long dev_sp_judge_addr(unsigned long __user *arg)
+{
+ unsigned long addr;
+ int ret = 0;
+
+ ret = copy_from_user(&addr, (void __user *)arg, sizeof(unsigned long));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+ ret = mg_is_sharepool_addr(addr);
+
+ return ret;
+}
+
+static long dev_vmalloc(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ char *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+ addr = vmalloc_user(info.size);
+	addr = vmalloc_user(info.size);
+	if (!addr) {
+		pr_err("%s vmalloc_user failed\n", __func__);
+		return -ENOMEM;
+	}
+	/* for convenience in k2u tests, set marker values in the first two pages */
+ if (info.size >= PAGE_SIZE) {
+ addr[0] = 'a';
+ addr[PAGE_SIZE - 1] = 'b';
+ }
+ if (info.size >= 2 * PAGE_SIZE) {
+ addr[PAGE_SIZE] = 'c';
+ addr[2 * PAGE_SIZE - 1] = 'd';
+ }
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct vmalloc_info));
+	if (ret) {
+		pr_err("%s copy to user failed: %d\n", __func__, ret);
+		vfree(addr);
+		return -EFAULT;
+	}
+
+ return ret;
+}
+
+static long dev_kmalloc(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ char *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+	addr = kmalloc(info.size, GFP_KERNEL);
+	if (!addr) {
+		pr_err("%s kmalloc failed\n", __func__);
+		return -ENOMEM;
+	}
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct vmalloc_info));
+	if (ret) {
+		pr_err("%s copy to user failed: %d\n", __func__, ret);
+		kfree(addr);
+		return -EFAULT;
+	}
+
+ return ret;
+}
+
+static long dev_vmalloc_hugepage(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ char *addr;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+	addr = vmalloc_hugepage_user(info.size);
+	if (!addr) {
+		pr_err("%s vmalloc_hugepage_user failed\n", __func__);
+		return -ENOMEM;
+	}
+	/* for convenience in k2u tests, set marker values in the first two hugepages */
+ if (info.size >= PMD_SIZE) {
+ addr[0] = 'a';
+ addr[PMD_SIZE - 1] = 'b';
+ }
+ if (info.size >= 2 * PMD_SIZE) {
+ addr[PMD_SIZE] = 'c';
+ addr[2 * PMD_SIZE - 1] = 'd';
+ }
+
+ info.addr = (unsigned long)addr;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct vmalloc_info));
+	if (ret) {
+		pr_err("%s copy to user failed: %d\n", __func__, ret);
+		vfree(addr);
+		return -EFAULT;
+	}
+
+ return ret;
+}
+
+static long dev_vfree(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+ vfree((void *)info.addr);
+
+ return 0;
+}
+
+static long dev_kfree(unsigned long __user *arg)
+{
+ struct vmalloc_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct vmalloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+ kfree((void *)info.addr);
+
+ return 0;
+}
+
+static int kern_addr_check(unsigned long addr)
+{
+	/* TODO: properly validate that addr is a kernel address */
+ if (!addr)
+ return 0;
+
+ return 1;
+}
+
+static long dev_karea_access(const void __user *arg)
+{
+ int ret, i;
+ struct karea_access_info info;
+
+ ret = copy_from_user(&info, arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+ if (!kern_addr_check(info.addr)) {
+ pr_err("%s kaddr check failed\n", __func__);
+ return -EFAULT;
+ }
+
+	if (info.mod == KAREA_CHECK) {
+		for (i = 0; i < info.size; i++)
+			if (((char *)info.addr)[i] != info.value)
+				return -EINVAL;
+	} else {
+		memset((void *)info.addr, info.value, info.size);
+	}
+
+ return 0;
+}
+
+static long dev_sp_config_dvpp_range(const void __user *arg)
+{
+ struct sp_config_dvpp_range_info info;
+
+ int ret = copy_from_user(&info, arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+ return mg_sp_config_dvpp_range(info.start, info.size, info.device_id, info.pid) ? 0 : 1;
+}
+
+int func1(struct notifier_block *nb, unsigned long action, void *data)
+{
+	pr_info("%s is triggered\n", __func__);
+ return 0;
+}
+
+int func2(struct notifier_block *nb, unsigned long action, void *data)
+{
+	pr_info("%s is triggered\n", __func__);
+
+ /*
+ if (spg->id == 2)
+ pr_info("sp group 2 exits.");
+ else
+ pr_info("sp group %d exits, skipped by %s", spg->id, __FUNCTION__);
+*/
+ return 0;
+}
+
+static struct notifier_block nb1 = {
+ .notifier_call = func1,
+};
+static struct notifier_block nb2 = {
+ .notifier_call = func2,
+};
+
+static long dev_register_notifier_block(unsigned long __user *arg)
+{
+ return 0;
+#if 0
+	// register one of the notifier blocks on the chain
+ struct sp_notifier_block_info info;
+
+ int ret = copy_from_user(&info, arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ if (info.i == 1) {
+ ret = sp_register_notifier(&nb1);
+ if (ret != 0)
+ pr_info("register func1 failed, errno is %d", ret);
+ else
+ pr_info("register notifier for func 1 success.");
+ }
+ else if (info.i == 2) {
+ ret = sp_register_notifier(&nb2);
+ if (ret != 0)
+ pr_info("register func2 failed, errno is %d", ret);
+ else
+ pr_info("register notifier for func 2 success.");
+ } else {
+ pr_info("not valid user arg");
+ ret = -1;
+ }
+
+ return ret;
+#endif
+}
+
+static long dev_unregister_notifier_block(unsigned long __user *arg)
+{
+ return 0;
+#if 0
+	// unregister one of the notifier blocks from the chain
+ struct sp_notifier_block_info info;
+
+ int ret = copy_from_user(&info, arg, sizeof(info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+ if (info.i == 1) {
+ ret = sp_unregister_notifier(&nb1);
+ if (ret != 0)
+ pr_info("unregister func1 failed, errno is %d", ret);
+ else
+ pr_info("unregister func1 success.");
+ }
+ else if (info.i == 2) {
+ ret = sp_unregister_notifier(&nb2);
+ if (ret != 0)
+ pr_info("unregister func2 failed, errno is %d", ret);
+ else
+ pr_info("unregister func2 success.");
+ }
+ else {
+ pr_info("not valid user arg");
+ ret = -1;
+ }
+
+ return ret;
+#endif
+}
+
+static long dev_sp_del_from_group(unsigned long __user *arg)
+{
+ return 0;
+#if 0
+ struct sp_del_from_group_info info;
+ int ret = 0;
+
+ ret = copy_from_user(&info, (void __user *)arg, sizeof(struct sp_del_from_group_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ ret = mg_sp_group_del_task(info.pid, info.spg_id);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_group_del_task failed: %d, pid is %d, spg_id is %d\n",
+ __func__, ret, info.pid, info.spg_id);
+ return ret;
+ }
+
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_del_from_group_info));
+ if (ret) {
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+ }
+
+ return ret;
+#endif
+}
+
+static long dev_sp_id_of_curr(unsigned long __user *arg)
+{
+ int ret, spg_id;
+ struct sp_id_of_curr_info info;
+
+ spg_id = mg_sp_id_of_current();
+ if (spg_id <= 0) {
+ pr_err("get id of current failed %d\n", spg_id);
+ return spg_id;
+ }
+
+ info.spg_id = spg_id;
+ ret = copy_to_user((void __user *)arg, &info, sizeof(struct sp_id_of_curr_info));
+ if (ret)
+ pr_err("%s copy to user failed: %d\n", __func__, ret);
+
+ return ret;
+}
+
+/*
+ * Driver-software call flow:
+ * 1. Process A calls mmap() on the driver's device file to reserve address space.
+ * 2. Process B (the host process) dispatches a task to process A.
+ * 3. Triggered by B, a worker thread on the device side allocates memory for A
+ *    and sets up the page tables (the mm was saved in the mmap kernel callback).
+ *    Memory is not allocated in mmap itself because it is only really used at
+ *    this point.
+ * 4. Freeing the memory goes through a separate ioctl interface.
+ *
+ * Allocate hugepages and map them into the user process:
+ * 1. If no addr is given (NULL), call vm_mmap to reserve memory for the process;
+ *    otherwise use the given addr. A user-supplied address must be 2M-aligned.
+ * 2. remote: whether allocation and page-table setup run in the current process.
+ */
+static long dev_alloc_huge_memory(struct file *file, void __user *arg)
+{
+ return -EFAULT;
+#if 0
+ int ret;
+ unsigned long addr, off, size;
+ struct vm_area_struct *vma;
+ struct alloc_huge_memory alloc_info;
+ struct page **pages;
+
+ ret = copy_from_user(&alloc_info, arg, sizeof(alloc_info));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+ return -EFAULT;
+ }
+
+ size = ALIGN(alloc_info.size, PMD_SIZE);
+	// TODO: validate input arguments
+ if (!alloc_info.addr) {
+		/*
+		 * Hugepage mappings into userspace must be 2M-aligned, so reserve an
+		 * extra 2M of virtual address space for the alignment step below.
+		 */
+ addr = vm_mmap(file, 0, size + PMD_SIZE, PROT_READ | PROT_WRITE,
+ MAP_SHARED, 0);
+ if ((long)addr < 0) {
+ pr_err("vm_mmap failed, %ld\n", (long)addr);
+ return addr;
+ }
+ addr = ALIGN(addr, PMD_SIZE);
+ } else
+ addr = alloc_info.addr;
+
+ vma = find_vma(current->mm, addr);
+ if (!range_in_vma(vma, addr, addr + size)) {
+ pr_err("invalid address\n");
+ return -EINVAL;
+ }
+
+ pages = kzalloc((size / PMD_SIZE + 1) * sizeof(*pages), GFP_KERNEL);
+ if (!pages) {
+ pr_err("alloc vma private pages array failed\n");
+ return -ENOMEM;
+ }
+ vma->vm_private_data = pages;
+
+ for (off = 0; off < size; off += PMD_SIZE) {
+ struct page *page;
+
+ page = hugetlb_alloc_hugepage(alloc_info.nid, alloc_info.flags);
+ if (!page) {
+ pr_err("alloc hugepage failed, nid:%d, flags:%d\n", alloc_info.nid, alloc_info.flags);
+			return -ENOMEM; // TODO: clean up resources
+ }
+ *(pages++) = page;
+
+ ret = hugetlb_insert_hugepage_pte_by_pa(current->mm, addr + off,
+ __pgprot(pgprot_val(vma->vm_page_prot) & ~PTE_RDONLY), page_to_phys(page));
+ if (ret < 0) {
+ pr_err("insert hugepage failed, %d\n", ret);
+			return ret; // TODO: clean up resources
+ }
+		// Without the line below the user process hits a BadRss warning on
+		// exit: the default page-table teardown path subtracts the hugepage
+		// mappings set up here. With it, the module fails to load on 5.10
+		// because the symbol is not exported.
+		//add_mm_counter(current->mm, mm_counter_file(page), HPAGE_PMD_NR);
+ }
+
+ if (!alloc_info.addr) {
+ alloc_info.addr = addr;
+ if (copy_to_user(arg, &alloc_info, sizeof(alloc_info))) {
+ pr_err("copy_to_user failed\n");
+			return -EFAULT; // TODO: clean up resources
+ }
+ }
+
+ return 0;
+#endif
+}
+
+static unsigned long test_alloc_hugepage(unsigned long size, int nid,
+ nodemask_t *nodemask)
+{
+	pr_err_ratelimited("test_alloc_hugepage: executed successfully.\n");
+ return -ENOMEM;
+}
+
+static long dev_hpage_reg_test_suite(void *arg)
+{
+ int ret;
+
+ ret = sp_register_hugepage_allocator(NULL);
+ if (ret != -EINVAL) {
+ pr_err_ratelimited("%s expect return -EINVAL, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ /* First call succeeds */
+ ret = sp_register_hugepage_allocator(test_alloc_hugepage);
+ if (ret != 0) {
+ pr_err_ratelimited("%s expect return 0, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ /* Second call fails with -EBUSY */
+ ret = sp_register_hugepage_allocator(test_alloc_hugepage);
+ if (ret != -EBUSY) {
+ pr_err_ratelimited("%s expect return -EBUSY, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static long dev_hpage_reg_after_alloc(void *arg)
+{
+ int ret;
+
+ ret = sp_register_hugepage_allocator(test_alloc_hugepage);
+ if (ret != -EBUSY) {
+ pr_err_ratelimited("%s expect return -EBUSY, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static long dev_hpage_reg_exec(void *arg)
+{
+ int ret;
+
+ ret = sp_register_hugepage_allocator(test_alloc_hugepage);
+ if (ret != 0) {
+ pr_err_ratelimited("%s expect return 0, but return %d\n", __func__, ret);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/* in this function, wdata is passed as NULL */
+static long dev_sp_walk_page_range_null(unsigned long __user *arg)
+{
+ int ret = 0;
+ struct sp_walk_page_range_info wpinfo;
+
+ ret = copy_from_user(&wpinfo, arg, sizeof(wpinfo));
+ if (ret) {
+ pr_err("%s copy from user failed: %d\n", __func__, ret);
+		return -EFAULT;
+ }
+
+ ret = mg_sp_walk_page_range(wpinfo.uva, wpinfo.size, current, NULL);
+ if (ret < 0) {
+ pr_err_ratelimited("%s sp_walk_page_range failed: %d\n", __func__, ret);
+ return ret;
+ }
+
+ return ret;
+}
+
+long dev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ int ret = 0;
+
+ if (!arg)
+ return -EINVAL;
+
+ switch (cmd) {
+ case SP_IOCTL_ADD_GROUP:
+ ret = dev_sp_add_group((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_ALLOC:
+ ret = dev_sp_alloc((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_FREE:
+ ret = dev_sp_free((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_U2K:
+ ret = dev_sp_u2k((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_K2U:
+ ret = dev_sp_k2u((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_UNSHARE:
+ ret = dev_sp_unshare((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_FIND_GROUP_BY_PID:
+ ret = dev_sp_find_group_by_pid((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_WALK_PAGE_RANGE:
+ ret = dev_sp_walk_page_range((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_WALK_PAGE_FREE:
+ ret = dev_sp_walk_page_free((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_CHECK_MEMORY_NODE:
+ ret = dev_check_memory_node(arg);
+ break;
+ case SP_IOCTL_JUDGE_ADDR:
+ ret = dev_sp_judge_addr((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_VMALLOC:
+ ret = dev_vmalloc((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_VMALLOC_HUGEPAGE:
+ ret = dev_vmalloc_hugepage((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_VFREE:
+ ret = dev_vfree((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KACCESS:
+ ret = dev_karea_access((void __user *)arg);
+ break;
+ case SP_IOCTL_CONFIG_DVPP_RANGE:
+ ret = dev_sp_config_dvpp_range((void __user *)arg);
+ break;
+ case SP_IOCTL_REGISTER_NOTIFIER_BLOCK:
+ ret = dev_register_notifier_block((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_UNREGISTER_NOTIFIER_BLOCK:
+ ret = dev_unregister_notifier_block((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_DEL_FROM_GROUP:
+ ret = dev_sp_del_from_group((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_ID_OF_CURRENT:
+ ret = dev_sp_id_of_curr((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_ALLOC_HUGE_MEMORY:
+		ret = dev_alloc_huge_memory(file, (void __user *)arg);
+ break;
+ case SP_IOCTL_WALK_PAGE_RANGE_NULL:
+ ret = dev_sp_walk_page_range_null((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KTHREAD_START:
+ ret = dev_sp_kthread_start((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KTHREAD_END:
+ ret = dev_sp_kthread_end((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KMALLOC:
+ ret = dev_kmalloc((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_KFREE:
+ ret = dev_kfree((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_HPAGE_REG_TESTSUITE:
+ ret = dev_hpage_reg_test_suite((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_HPAGE_REG_AFTER_ALLOC:
+ ret = dev_hpage_reg_after_alloc((unsigned long __user *)arg);
+ break;
+ case SP_IOCTL_HPAGE_REG_EXEC:
+ ret = dev_hpage_reg_exec((unsigned long __user *)arg);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+#if 0
+static void sp_vm_close(struct vm_area_struct *vma)
+{
+ struct page **pages = vma->vm_private_data;
+ if (!pages)
+ return;
+
+ for (; *pages; pages++)
+ put_page(*pages);
+
+ kfree(vma->vm_private_data);
+ vma->vm_private_data = NULL;
+}
+
+static struct vm_operations_struct sp_vm_ops = {
+ .close = sp_vm_close,
+};
+
+#define VM_HUGE_SPECIAL 0x800000000
+static int dev_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	pr_info("vma range: [%#lx, %#lx], vm_mm:%pK\n", vma->vm_start, vma->vm_end, vma->vm_mm);
+	// Mark that this VMA carries hugepage mappings; used on 4.19 in the u2k
+	// path when walk_page_range runs.
+	vma->vm_flags |= VM_HUGE_SPECIAL;
+	vma->vm_flags |= VM_DONTCOPY; // device memory mapped here must not be forked; otherwise it needs special handling
+ vma->vm_ops = &sp_vm_ops;
+ return 0;
+}
+#endif
+
+static const struct file_operations fops = {
+ .owner = THIS_MODULE,
+ .open = dev_open,
+ .unlocked_ioctl = dev_ioctl,
+// .mmap = dev_mmap,
+};
+
+/*
+ * before userspace use: mknod DEVICE_FILE_NAME c MAJOR_NUM 0
+ * e.g. insmod sharepool_dev.ko && mknod sharepool_dev c 100 0
+ */
+static int __init dev_init_module(void)
+{
+ int ret;
+
+ ret = register_chrdev(MAJOR_NUM, DEVICE_FILE_NAME, &fops);
+
+ if (ret < 0) {
+ pr_err("error in sharepool_init_module: %d\n", ret);
+ return ret;
+ }
+
+ pr_info("register share pool device success. the major device number is %d\n", MAJOR_NUM);
+ return 0;
+}
+
+static void __exit dev_cleanup_module(void)
+{
+ unregister_chrdev(MAJOR_NUM, DEVICE_FILE_NAME);
+}
+
+module_init(dev_init_module);
+module_exit(dev_cleanup_module);
+
+MODULE_DESCRIPTION("share pool device driver");
+MODULE_LICENSE("GPL v2");
+
diff --git a/tools/testing/sharepool/module/sharepool_dev.h b/tools/testing/sharepool/module/sharepool_dev.h
new file mode 100644
index 000000000000..769d1cc12f57
--- /dev/null
+++ b/tools/testing/sharepool/module/sharepool_dev.h
@@ -0,0 +1,149 @@
+/*
+ * sharepool_dev.h - the header file with the ioctl definitions.
+ *
+ * The declarations here have to be in a header file, because
+ * they need to be known both to the kernel module
+ * (in sharepool_dev.c) and the process / shared lib calling ioctl.
+ */
+
+#ifndef SHAREPOOL_DEV_H
+#define SHAREPOOL_DEV_H
+
+#include <linux/ioctl.h>
+
+/* major num can be changed when necessary */
+#define MAJOR_NUM 100
+
+#define DEVICE_FILE_NAME "sharepool_dev"
+struct sp_kthread_info {
+ unsigned long addr;
+ unsigned long size;
+ int spg_id;
+ int type;
+};
+
+struct sp_add_group_info {
+ int pid;
+ int prot;
+ int spg_id;
+ unsigned long flag;
+};
+
+struct sp_alloc_info {
+ unsigned long addr; /* return value */
+ unsigned long size;
+ unsigned long flag;
+ int spg_id;
+};
+
+struct sp_make_share_info {
+ unsigned long addr; /* return value */
+ unsigned long uva; /* for u2k */
+ unsigned long kva; /* for k2u */
+ unsigned long size;
+ unsigned long sp_flags; /* for k2u */
+ int pid;
+ int spg_id;
+ bool u2k_checker;
+ bool u2k_hugepage_checker;
+};
+
+struct sp_walk_page_range_info {
+ unsigned long uva;
+ unsigned long size;
+ struct page **pages;
+ unsigned int page_count;
+ unsigned long uva_aligned;
+ unsigned long page_size;
+ bool is_hugepage;
+};
+
+/* for vmalloc_user and vmalloc_hugepage_user */
+struct vmalloc_info {
+ unsigned long addr;
+ unsigned long size;
+};
+
+#define KAREA_CHECK 0
+#define KAREA_SET 1
+struct karea_access_info {
+ int mod;
+ char value;
+ unsigned long addr;
+ unsigned long size;
+};
+
+struct sp_config_dvpp_range_info {
+ unsigned long start;
+ unsigned long size;
+ int device_id; // must be zero
+ int pid;
+};
+
+struct sp_group_id_by_pid_info {
+ int pid;
+ int *spg_ids;
+ int *num;
+};
+
+struct sp_notifier_block_info {
+ int i;
+};
+
+struct sp_del_from_group_info {
+ int pid;
+ int spg_id;
+};
+
+struct sp_id_of_curr_info {
+ int spg_id;
+};
+
+#define HUGETLB_ALLOC_NONE 0x00
+#define HUGETLB_ALLOC_NORMAL 0x01 /* normal hugepage */
+#define HUGETLB_ALLOC_BUDDY 0x02 /* buddy hugepage */
+
+struct alloc_huge_memory {
+	int nid;		// target NUMA node id
+	int flags;		// hugepage allocation type: 0, 1 or 2
+	unsigned long addr;	// returned address
+	unsigned long size;	// requested size
+};
+
+struct check_memory_node {
+ unsigned long uva;
+ unsigned long len;
+ int node;
+};
+
+#define SP_IOCTL_ADD_GROUP _IOWR(MAJOR_NUM, 0, struct sp_add_group_info *)
+#define SP_IOCTL_ALLOC _IOWR(MAJOR_NUM, 1, struct sp_alloc_info *)
+#define SP_IOCTL_FREE _IOW(MAJOR_NUM, 2, struct sp_alloc_info *)
+#define SP_IOCTL_U2K _IOWR(MAJOR_NUM, 3, struct sp_u2k_info *)
+#define SP_IOCTL_K2U _IOWR(MAJOR_NUM, 4, struct sp_k2u_info *)
+#define SP_IOCTL_UNSHARE _IOW(MAJOR_NUM, 5, struct sp_unshare_info *)
+#define SP_IOCTL_FIND_GROUP_BY_PID _IOWR(MAJOR_NUM, 6, struct sp_group_id_by_pid_info *)
+#define SP_IOCTL_WALK_PAGE_RANGE _IOWR(MAJOR_NUM, 7, struct sp_walk_page_range_info *)
+#define SP_IOCTL_WALK_PAGE_FREE _IOW(MAJOR_NUM, 8, struct sp_walk_page_range_info *)
+#define SP_IOCTL_JUDGE_ADDR _IOW(MAJOR_NUM, 9, unsigned long)
+#define SP_IOCTL_VMALLOC _IOWR(MAJOR_NUM, 10, struct vmalloc_info *)
+#define SP_IOCTL_VMALLOC_HUGEPAGE _IOWR(MAJOR_NUM, 11, struct vmalloc_info *)
+#define SP_IOCTL_VFREE _IOW(MAJOR_NUM, 12, struct vmalloc_info *)
+#define SP_IOCTL_KACCESS _IOW(MAJOR_NUM, 13, struct karea_access_info *)
+#define SP_IOCTL_CONFIG_DVPP_RANGE _IOW(MAJOR_NUM, 14, struct sp_config_dvpp_range_info *)
+#define SP_IOCTL_REGISTER_NOTIFIER_BLOCK _IOWR(MAJOR_NUM, 15, struct sp_notifier_block_info *)
+#define SP_IOCTL_DEL_FROM_GROUP _IOWR(MAJOR_NUM, 16, struct sp_del_from_group_info *)
+#define SP_IOCTL_ID_OF_CURRENT _IOW(MAJOR_NUM, 17, struct sp_id_of_curr_info *)
+#define SP_IOCTL_ALLOC_HUGE_MEMORY _IOWR(MAJOR_NUM, 18, struct alloc_huge_memory)
+#define SP_IOCTL_CHECK_MEMORY_NODE _IOW(MAJOR_NUM, 19, struct check_memory_node)
+#define SP_IOCTL_UNREGISTER_NOTIFIER_BLOCK _IOWR(MAJOR_NUM, 20, struct sp_notifier_block_info *)
+#define SP_IOCTL_WALK_PAGE_RANGE_NULL _IOWR(MAJOR_NUM, 21, struct sp_walk_page_range_info *)
+#define SP_IOCTL_KTHREAD_START _IOWR(MAJOR_NUM, 22, struct sp_kthread_info *)
+#define SP_IOCTL_KTHREAD_END _IOWR(MAJOR_NUM, 23, struct sp_kthread_info *)
+#define SP_IOCTL_KMALLOC _IOWR(MAJOR_NUM, 24, struct vmalloc_info *)
+#define SP_IOCTL_KFREE _IOW(MAJOR_NUM, 25, struct vmalloc_info *)
+#define SP_IOCTL_HPAGE_REG_TESTSUITE _IOW(MAJOR_NUM, 26, void *)
+#define SP_IOCTL_HPAGE_REG_AFTER_ALLOC _IOW(MAJOR_NUM, 27, void *)
+#define SP_IOCTL_HPAGE_REG_EXEC _IOW(MAJOR_NUM, 28, void *)
+#endif
+
diff --git a/tools/testing/sharepool/test.sh b/tools/testing/sharepool/test.sh
new file mode 100755
index 000000000000..0f00c8b21248
--- /dev/null
+++ b/tools/testing/sharepool/test.sh
@@ -0,0 +1,55 @@
+#!/bin/sh
+
+set -x
+
+trap "rmmod sharepool_dev && rm -f sharepool_dev && echo 0 > /proc/sys/vm/nr_overcommit_hugepages" EXIT
+
+insmod sharepool_dev.ko
+if [ $? -ne 0 ] ;then
+ echo "insmod failed"
+ exit 1
+fi
+mknod sharepool_dev c 100 0
+
+if uname -r | grep "^6.6" > /dev/null ; then
+ echo 10000000 > /proc/sys/vm/nr_overcommit_hugepages
+fi
+export LD_LIBRARY_PATH=.
+
+# run the debug monitoring process in the background
+./test_mult_process/test_debug_loop > debug.log &
+
+testlist="test_all api_test.sh function_test.sh scenario_test.sh dts_bugfix_test.sh test_mult_process.sh"
+
+while getopts "as" opt; do
+ case $opt in
+ a)
+		# -a: full test run (including stress tests)
+ testlist="$testlist stress_test.sh"
+ ;;
+ s)
+		# -s: stress tests only
+ testlist="stress_test.sh"
+ ;;
+ \?)
+		echo "usage: $0 [-a] [-s]"
+ exit 1
+ ;;
+ esac
+done
+
+for line in $testlist
+do
+ ./$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase $line failed"
+ killall test_debug_loop
+ exit 1
+ fi
+done
+
+killall test_debug_loop
+echo ">>>> SHAREPOOL ALL TESTCASES FINISH <<<<"
+
+# abnormal-call scenario cases
+#./reliability_test.sh
diff --git a/tools/testing/sharepool/test_end.sh b/tools/testing/sharepool/test_end.sh
new file mode 100755
index 000000000000..6231046db778
--- /dev/null
+++ b/tools/testing/sharepool/test_end.sh
@@ -0,0 +1,8 @@
+#!/bin/sh
+set -x
+rmmod sharepool_dev.ko
+rm -rf sharepool_dev
+
+if uname -r | grep "^6.6" > /dev/null ; then
+ echo 0 > /proc/sys/vm/nr_overcommit_hugepages
+fi
diff --git a/tools/testing/sharepool/test_loop.sh b/tools/testing/sharepool/test_loop.sh
new file mode 100755
index 000000000000..94789c720755
--- /dev/null
+++ b/tools/testing/sharepool/test_loop.sh
@@ -0,0 +1,35 @@
+#!/bin/sh
+
+i=1
+
+while true
+do
+ echo ================= TEST NO. $i ===============
+	i=$((i + 1))
+
+ ./test.sh
+ if [ $? -ne 0 ]
+ then
+ echo test failed
+ exit 1
+ fi
+
+ sleep 3 # dropping spa and spg may have latency
+ free -m
+
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ]
+ then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+ exit 1
+ fi
+
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ]
+ then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+ exit 1
+ fi
+done
diff --git a/tools/testing/sharepool/test_prepare.sh b/tools/testing/sharepool/test_prepare.sh
new file mode 100755
index 000000000000..72392a5b0aff
--- /dev/null
+++ b/tools/testing/sharepool/test_prepare.sh
@@ -0,0 +1,8 @@
+#!/bin/sh
+set -x
+mknod sharepool_dev c 100 0
+insmod sharepool_dev.ko
+
+if uname -r | grep "^6.6" > /dev/null ; then
+ echo 10000000 > /proc/sys/vm/nr_overcommit_hugepages
+fi
diff --git a/tools/testing/sharepool/testcase/Makefile b/tools/testing/sharepool/testcase/Makefile
new file mode 100644
index 000000000000..d40d1b325142
--- /dev/null
+++ b/tools/testing/sharepool/testcase/Makefile
@@ -0,0 +1,12 @@
+MODULEDIR:=test_all test_mult_process api_test function_test reliability_test performance_test dts_bugfix_test scenario_test stress_test
+
+export CC:=$(CROSS_COMPILE)gcc $(sharepool_extra_ccflags)
+
+all:tooldir
+
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) CC="$(CC) $(sharepool_extra_ccflags)" -C $$n; done
+install:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
diff --git a/tools/testing/sharepool/testcase/api_test/Makefile b/tools/testing/sharepool/testcase/api_test/Makefile
new file mode 100644
index 000000000000..664007daa9e6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/Makefile
@@ -0,0 +1,14 @@
+MODULEDIR:=is_sharepool_addr sp_alloc sp_config_dvpp_range sp_free sp_group_add_task sp_group_id_by_pid sp_make_share_k2u sp_make_share_u2k sp_unshare sp_walk_page_range_and_free sp_id_of_current sp_numa_maps
+
+manual_test:=test_sp_config_dvpp_range
+
+all:tooldir
+
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n; done
+install:
+ mkdir -p $(TOOL_BIN_DIR)/api_test && cp api_test.sh $(TOOL_BIN_DIR)
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+ mkdir -p $(TOOL_BIN_DIR)/api_test/manual_test && cd $(TOOL_BIN_DIR)/api_test/ && mv $(manual_test) manual_test && cd -
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
diff --git a/tools/testing/sharepool/testcase/api_test/api_test.sh b/tools/testing/sharepool/testcase/api_test/api_test.sh
new file mode 100755
index 000000000000..29db2e449dea
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/api_test.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+set -x
+
+echo 0 > /proc/sys/vm/sharepool_ac_mode
+
+ls ./api_test | grep -v manual_test | while read line
+do
+ ./api_test/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase api_test/$line failed"
+ exit 1
+ fi
+ cat /proc/meminfo
+done
diff --git a/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/Makefile b/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/test_is_sharepool_addr.c b/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/test_is_sharepool_addr.c
new file mode 100644
index 000000000000..3ba63605ed51
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/is_sharepool_addr/test_is_sharepool_addr.c
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Dec 19 11:29:06 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define MMAP_SHARE_POOL_START 0xe80000000000UL
+#define MMAP_SHARE_POOL_END 0xf80000000000UL
+
+enum TEST_RESULT {
+ SUCCESS,
+ FAIL
+};
+
+/*
+ * testcase1: boundary test: MMAP_SHARE_POOL_START and MMAP_SHARE_POOL_END; expected to pass.
+ */
+
+static int testcase1(void)
+{
+	bool judge_ret = false;
+ judge_ret = ioctl_judge_addr(dev_fd, MMAP_SHARE_POOL_START);
+ if (judge_ret != true)
+ return -1;
+
+ judge_ret = ioctl_judge_addr(dev_fd, MMAP_SHARE_POOL_END);
+ if (judge_ret != false)
+ return -1;
+
+ return 0;
+}
+
+/*
+ * testcase2: DVPP address test: dvpp_start and dvpp_start + size - 1; expected to pass.
+ */
+
+static int testcase2(void)
+{
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G - 1,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+ int ret;
+	bool judge_ret = false;
+
+ ret = ioctl_config_dvpp_range(dev_fd, &cdr_info);
+
+ if (ret < 0) {
+ pr_info("dvpp config failed. errno: %d", errno);
+ return ret;
+ } else
+ pr_info("dvpp config success.");
+
+ judge_ret = ioctl_judge_addr(dev_fd, cdr_info.start);
+ if (judge_ret != true)
+ return -1;
+
+ judge_ret = ioctl_judge_addr(dev_fd, cdr_info.start + cdr_info.size - 1);
+ if (judge_ret != true)
+ return -1;
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "boundary test: MMAP_SHARE_POOL_START and MMAP_SHARE_POOL_END, expected to pass")
+	TESTCASE(testcase2, "DVPP address test: dvpp_start and dvpp_start + size - 1, expected to pass")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/Makefile b/tools/testing/sharepool/testcase/api_test/sp_alloc/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc.c b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc.c
new file mode 100644
index 000000000000..a6cc430f18e6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc.c
@@ -0,0 +1,543 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 100
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+/*
+ * Precondition: the process joins a group first.
+ * testcase1: allocate share-group memory with flag 0; expected to succeed.
+ * testcase2: allocate with flag HP and an unaligned size; expected to get a size rounded up to the hugepage size.
+ * testcase3: allocate with flag HPONLY; expected to succeed, or to return ENOMEM when space is insufficient.
+ * testcase4: allocate with flag DVPP; expected to succeed, with the virtual address inside the DVPP address space.
+ * testcase5: allocate with flag DVPP|HP; expected to get a hugepage-size-aligned size inside the DVPP address space.
+ * testcase6: allocate with flag DVPP|ONLY; expected to succeed inside the DVPP address space, or to return ENOMEM when space is insufficient.
+ * testcase7: allocate with an invalid flag; expected to fail with EINVAL.
+ * testcase8: size specified as 0, or far beyond physical memory; expected to fail.
+ * testcase9: spg_id is NONE, an unused in-range value, DVPP, or out of range; expected to fail.
+ * testcase10: allocate memory on a specified memory node; the flag carries the user-supplied device id mapped to that node.
+ * testcase11: a single process allocates 8G; check that the debug statistics print correctly.
+ */
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ //pr_info("testcase1 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase2(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .size = (HGPAGE_NUM * PMD_SIZE - 1),
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase2 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ if (alloc_info.addr != ALIGN_UP(alloc_info.addr, PMD_SIZE)) {
+ pr_info("testcase2 ioctl_alloc addr = 0x%lx is not aligned", alloc_info.addr);
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase3(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .size = HGPAGE_NUM * PMD_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0 && errno == ENOMEM) {
+ //pr_info("testcase3 ioctl_alloc failed as expected, errno: ENOMEM");
+ return 0;
+ } else if (ret != 0) {
+ //pr_info("testcase3 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase4(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ };
+
+ unsigned long dvpp_size1;
+ unsigned long dvpp_size2;
+ char cmd[CMD_LEN] = "cat /proc/sharepool/spa_stat | grep \"dvpp size\" | awk '{print $4}'";
+ FILE *p_file1 = NULL;
+ FILE *p_file2 = NULL;
+ if ((p_file1 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase4 popen1 failed");
+ return -1;
+ }
+ fscanf(p_file1, "%lu", &dvpp_size1);
+ pclose(p_file1);
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase4 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ if ((p_file2 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase4 popen2 failed");
+ return -1;
+ }
+ fscanf(p_file2, "%lu", &dvpp_size2);
+ pclose(p_file2);
+ if ((dvpp_size2 - dvpp_size1) * UNIT != alloc_info.size) {
+ pr_info("testcase4 dvpp_size check failed, dvpp_size1 %lu, dvpp_size2 %lu", dvpp_size1, dvpp_size2);
+ return -1;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase5(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .size = (HGPAGE_NUM * PMD_SIZE - 1),
+ .spg_id = 1,
+ };
+
+ unsigned long dvpp_size1;
+ unsigned long dvpp_size2;
+ char cmd[CMD_LEN] = "cat /proc/sharepool/spa_stat | grep \"dvpp size\" | awk '{print $4}'";
+ FILE *p_file1 = NULL;
+ FILE *p_file2 = NULL;
+ if ((p_file1 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase5 popen1 failed");
+ return -1;
+ }
+ fscanf(p_file1, "%lu", &dvpp_size1);
+ pclose(p_file1);
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase5 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ if (alloc_info.addr != ALIGN_UP(alloc_info.addr, PMD_SIZE)) {
+ pr_info("testcase5 ioctl_alloc addr = 0x%lx is not aligned", alloc_info.addr);
+ }
+
+ if ((p_file2 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase5 popen2 failed");
+ return -1;
+ }
+ fscanf(p_file2, "%lu", &dvpp_size2);
+ pclose(p_file2);
+ if ((dvpp_size2 - dvpp_size1) * UNIT != (alloc_info.size + 1)) {
+ pr_info("testcase5 dvpp_size check failed, dvpp_size1 %lu, dvpp_size2 %lu", dvpp_size1, dvpp_size2);
+ return -1;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase6(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP,
+ .size = HGPAGE_NUM * PMD_SIZE,
+ .spg_id = 1,
+ };
+
+ unsigned long dvpp_size1;
+ unsigned long dvpp_size2;
+ char cmd[CMD_LEN] = "cat /proc/sharepool/spa_stat | grep \"dvpp size\" | awk '{print $4}'";
+ FILE *p_file1 = NULL;
+ FILE *p_file2 = NULL;
+ if ((p_file1 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase6 popen1 failed");
+ return -1;
+ }
+ fscanf(p_file1, "%lu", &dvpp_size1);
+ pclose(p_file1);
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0 && errno == ENOMEM) {
+ pr_info("testcase6 ioctl_alloc failed as expected, errno: ENOMEM");
+ return 0;
+ } else if (ret != 0) {
+ pr_info("testcase6 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ if ((p_file2 = popen(cmd, "r")) == NULL) {
+ pr_info("testcase6 popen2 failed");
+ return -1;
+ }
+ fscanf(p_file2, "%lu", &dvpp_size2);
+ pclose(p_file2);
+ if ((dvpp_size2 - dvpp_size1) * UNIT != alloc_info.size) {
+ pr_info("testcase6 dvpp_size check failed, dvpp_size1 %lu, dvpp_size2 %lu", dvpp_size1, dvpp_size2);
+ return -1;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+static int testcase7(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = (1 << 3),
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .flag = -1,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ };
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0 && errno == EINVAL) {
+ pr_info("testcase7 ioctl_alloc %d failed as expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase7 ioctl_alloc %d failed unexpected, errno: %d", i, errno);
+ } else {
+ if (i == 0)
+ continue;
+ pr_info("testcase7 ioctl_alloc %d success unexpected", i);
+ cleanup(&alloc_infos[i]);
+ ret = -1;
+ }
+ }
+
+ return ret;
+}
+
+static int testcase8(void)
+{
+ // This case triggers OOM; temporarily disabled.
+ return 0;
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .size = 0,
+ .spg_id = 1,
+ },
+ {
+ .flag = 0,
+ .size = LARGE_PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ };
+ int errs[] = {EINVAL, EOVERFLOW};
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0 && errno == errs[i]) {
+ //pr_info("testcase8 ioctl_alloc %d failed as expected", i);
+ } else if (ret != 0) {
+ //pr_info("testcase8 ioctl_alloc %d failed unexpected, errno: %d", i, errno);
+ return ret;
+ } else {
+ //pr_info("testcase8 ioctl_alloc %d success unexpected", i);
+ cleanup(&alloc_infos[i]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase9(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = -1,
+ },
+ {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = SPG_ID_AUTO_MIN,
+ },
+ };
+ int errs[] = {EINVAL, ENODEV};
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ errno = 0;
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0 && errno == errs[i]) {
+ pr_info("testcase9 ioctl_alloc %d failed as expected", i);
+ } else if (ret != 0) {
+ pr_info("testcase9 ioctl_alloc %d failed unexpected, errno: %d", i, errno);
+ return ret;
+ } else {
+ pr_info("testcase9 ioctl_alloc %d success unexpected", i);
+ cleanup(&alloc_infos[i]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase10(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .flag = 2,
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .flag = 0x100000000,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .flag = 0x100000002,
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = 1,
+ },
+ };
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0) {
+ pr_info("testcase10 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ } else {
+ sleep(3);
+ cleanup(&alloc_infos[i]);
+ }
+ }
+ return 0;
+}
+
+static int alloc_large_repeat(bool hugepage)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = hugepage ? 1 : 0,
+ .size = ATOMIC_TEST_SIZE,
+ .spg_id = 1,
+ };
+
+ sharepool_print();
+
+ pr_info("start to alloc...");
+ for (int i = 0; i < 5; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("alloc %s failed. errno %d",
+ hugepage ? "huge page" : "normal page",
+ errno);
+ return ret;
+ } else {
+ pr_info("alloc %s success %d time.",
+ hugepage ? "huge page" : "normal page",
+ i + 1);
+ }
+ sharepool_print();
+ mem_show();
+ }
+ return 0;
+}
+
+static int testcase11(void)
+{
+ return alloc_large_repeat(false);
+}
+
+static int testcase12(void)
+{
+ return alloc_large_repeat(true);
+}
+
+int semid;
+static int testcase13(void)
+{
+ int ret = 0;
+ int friend = 0;
+ int spg_id = 1;
+ void *addr;
+
+ semid = sem_create(1234, "sem");
+ friend = fork();
+ if (friend == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add group failed!");
+ exit(-1);
+ }
+ sleep(1);
+ sem_inc_by_one(semid);
+ while (1) {
+
+ }
+ }
+
+ sem_dec_by_one(semid);
+ addr = wrap_sp_alloc(spg_id, 4096, 0);
+ if (addr == (void *)-1) {
+ pr_info("alloc from other group failed as expected.");
+ ret = 0;
+ } else {
+ pr_info("alloc from other group success unexpected.");
+ ret = -1;
+ }
+
+ KILL_CHILD(friend);
+ sem_close(semid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "allocate share-group memory with flag 0; expect success")
+ TESTCASE_CHILD(testcase2, "allocate with flag HP and an unaligned size; expect a huge-page-aligned size")
+ TESTCASE_CHILD(testcase3, "allocate with flag HPONLY; expect success, or ENOMEM when space is insufficient")
+ TESTCASE_CHILD(testcase4, "allocate with flag DVPP; expect success, address inside the DVPP range")
+ TESTCASE_CHILD(testcase5, "allocate with flag DVPP|HP; expect a huge-page-aligned size, address inside the DVPP range")
+ TESTCASE_CHILD(testcase6, "allocate with flag DVPP|ONLY; expect success inside the DVPP range, or ENOMEM when space is insufficient")
+ TESTCASE_CHILD(testcase7, "allocate with an invalid flag; expect failure with EINVAL")
+ TESTCASE_CHILD(testcase8, "size is 0 or far beyond physical memory; expect failure")
+ TESTCASE_CHILD(testcase9, "spg_id is NONE, an unused in-range value, DVPP, or out of range; expect failure")
+ TESTCASE_CHILD_MANUAL(testcase10, "allocate memory on a specified memory node; flag carries the user-provided device id for that node")
+ TESTCASE_CHILD(testcase11, "single process allocates 1G of normal pages 5 times, then exits; check debug statistics printing")
+ TESTCASE_CHILD(testcase12, "single process allocates 1G of huge pages 5 times, then exits; check debug statistics printing")
+ TESTCASE_CHILD(testcase13, "sp_alloc from a group the process has not joined; expect failure")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc2.c b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc2.c
new file mode 100644
index 000000000000..451cb987934d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc2.c
@@ -0,0 +1,131 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon May 31 07:19:22 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+
+#define ADD_GROUP \
+ struct sp_add_group_info ag_info = { \
+ .pid = getpid(), \
+ .prot = PROT_READ | PROT_WRITE, \
+ .spg_id = SPG_ID_AUTO, \
+ }; \
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+/*
+ * size = 0
+ * expected errno: EINVAL
+ */
+static int testcase1(void)
+{
+ int ret;
+ pid_t pid;
+
+ ADD_GROUP
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 0,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK_FAIL(ioctl_alloc(dev_fd, &alloc_info), EINVAL, out);
+
+out:
+ return ret;
+}
+
+/*
+ * size = -1
+ * expected errno: EINVAL
+ */
+static int testcase2(void)
+{
+ int ret;
+ pid_t pid;
+
+ ADD_GROUP
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = -1,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK_FAIL(ioctl_alloc(dev_fd, &alloc_info), EINVAL, out);
+
+out:
+ return ret;
+}
+
+/*
+ * size = 1 << 48
+ * expected errno: EINVAL
+ */
+static int testcase3(void)
+{
+ int ret;
+ pid_t pid;
+
+ ADD_GROUP
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1UL << 48,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK_FAIL(ioctl_alloc(dev_fd, &alloc_info), EINVAL, out);
+
+out:
+ return ret;
+}
+
+/*
+ * size = 1 << 36
+ * triggers OOM; run manually, expect no memory leaks or other anomalies
+ */
+static int testcase4(void)
+{
+ int ret;
+ pid_t pid;
+
+ ADD_GROUP
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1UL << 36, // 64G
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+
+out:
+ return ret;
+}
+
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "size = 0, expect errno EINVAL")
+ TESTCASE_CHILD(testcase2, "size = -1, expect errno EINVAL")
+ TESTCASE_CHILD(testcase3, "size = 1 << 48, expect errno EINVAL")
+ TESTCASE_CHILD_MANUAL(testcase4, "size = 1 << 36, triggers OOM; run manually, expect no memory leak")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc3.c b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc3.c
new file mode 100644
index 000000000000..4334b1d365bc
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_sp_alloc3.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jun 07 02:10:29 2021
+ */
+#include <stdio.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+#include <errno.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+/*
+ * Allocate pass-through memory, write to it, then free it.
+ * Expect the allocation to succeed and the write/free to complete without error.
+ */
+
+#define SIZE_1K (1024UL)
+#define SIZE_PAGE (4 * 1024UL)
+#define SIZE_HUGEPAGE (2 * 1024 * 1024UL)
+
+static int alloc_from_spg_none(unsigned long flag)
+{
+ void *buf;
+ char *cbuf;
+
+ unsigned long size[3] = {
+ SIZE_1K,
+ SIZE_PAGE,
+ SIZE_HUGEPAGE
+ };
+
+ for ( int i = 0; i < sizeof(size) / sizeof(size[0]); i++) {
+ buf = (void *)wrap_sp_alloc(SPG_ID_DEFAULT, size[i], flag);
+ if (buf == (void *)-1) {
+ pr_info("alloc failed by size %lu, flag %lu, errno: %d",
+ size[i], flag, errno);
+ return -1;
+ } else {
+ pr_info("alloc success by size %lu, flag %lu. va: 0x%lx",
+ size[i], flag, (unsigned long)buf);
+ // write
+ cbuf = (char *)buf;
+ for (int j = 0; j < size[i]; j++)
+ *(cbuf + j) = 'A';
+
+ // free
+ wrap_sp_free(buf);
+ }
+ }
+
+ return 0;
+}
+
+/* testcase4
+ * N processes concurrently join spg_none, concurrently allocate memory, then concurrently exit
+ */
+#define PROC_NUM 10
+#define REPEAT 10
+#define ALLOC_SIZE SIZE_HUGEPAGE
+#define PRT (PROT_READ | PROT_WRITE)
+int sem_tc4;
+static int testcase_child(bool hugepage)
+{
+ int ret;
+ sem_dec_by_one(sem_tc4);
+
+ if (wrap_sp_alloc(SPG_ID_DEFAULT, ALLOC_SIZE, hugepage ? 2 : 0) == (void *)-1) {
+ pr_info("child %d alloc failed.", getpid());
+ return -1;
+ }
+ pr_info("alloc success. child %d", getpid());
+
+ sem_check_zero(sem_tc4);
+ sleep(3);
+ sem_dec_by_one(sem_tc4);
+ pr_info("exit success. child %d", getpid());
+
+ return 0;
+}
+
+static int testcase(bool hugepage)
+{
+ int ret = 0;
+ int child[PROC_NUM];
+
+ sem_tc4 = sem_create(1234, "sem for testcase4");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ FORK_CHILD_ARGS(child[i], testcase_child(hugepage));
+ }
+
+ // allocate memory concurrently
+ pr_info("all start to allocate...");
+ sem_inc_by_val(sem_tc4, PROC_NUM);
+ sem_check_zero(sem_tc4);
+ sleep(5);
+
+ // exit the group concurrently
+ pr_info("all start to exit...");
+ sem_inc_by_val(sem_tc4, PROC_NUM);
+ sleep(5);
+ sem_check_zero(sem_tc4);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+
+ sem_close(sem_tc4);
+ return 0;
+
+out:
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(child[i]);
+ sem_close(sem_tc4);
+ return -1;
+}
+
+
+static int testcase1(void) { return alloc_from_spg_none(0); }
+static int testcase2(void) { return alloc_from_spg_none(SP_HUGEPAGE);}
+static int testcase3(void) { return alloc_from_spg_none(SP_HUGEPAGE_ONLY); }
+static int testcase4(void) { return testcase(1); }
+static int testcase5(void) { return testcase(0); }
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "allocate pass-through memory, normal pages")
+ TESTCASE_CHILD(testcase2, "allocate pass-through memory, huge pages")
+ TESTCASE_CHILD(testcase3, "allocate pass-through memory, huge pages only")
+ TESTCASE_CHILD(testcase4, "concurrently allocate pass-through huge-page memory, write, free; expect success with no write/free anomalies")
+ TESTCASE_CHILD(testcase5, "concurrently allocate pass-through normal-page memory, write, free; expect success with no write/free anomalies")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc/test_spa_error.c b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_spa_error.c
new file mode 100644
index 000000000000..d0fec0bfbb23
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc/test_spa_error.c
@@ -0,0 +1,109 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 1
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+#define SPG_ID_AUTO 200000
+#define DAVINCI_IOCTL_VA_TO_PA 0xfff9
+#define DVPP_START (0x100000000000UL)
+#define DVPP_SIZE 0x400000000UL
+
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ } else {
+ ret = ag_info.spg_id;
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ int fd;
+ long err;
+ unsigned long phys_addr;
+ int spg_id = 0;
+
+ spg_id = addgroup();
+ if (spg_id <= 0) {
+ pr_info("spgid <= 0, value: %d", spg_id);
+ return -1;
+ } else {
+ pr_info("spg id %d", spg_id);
+ }
+
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = ((10UL << 32) | 0x7), // regular huge page
+ .size = PAGE_NUM * PMD_SIZE,
+ .spg_id = spg_id,
+ },
+ };
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed as expected, errno: %d", errno);
+ } else {
+ pr_info("unexpected alloc success, va: %lx", alloc_infos[i].addr);
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++)
+ cleanup(&alloc_infos[i]);
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "pass an invalid flag to sp_alloc; expect sp_area to return an error and sp_alloc to fail")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/Makefile b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/Makefile
new file mode 100644
index 000000000000..ef670b48300a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $< -o $@ -L$(SHARELIB_DIR) -lsharepool_lib -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(INSTALL_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_16.sh b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_16.sh
new file mode 100755
index 000000000000..64b92606094b
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_16.sh
@@ -0,0 +1,49 @@
+qemu-system-aarch64 \
+ -m \
+ 16G \
+ -object memory-backend-ram,size=1G,id=mem0 \
+ -object memory-backend-ram,size=1G,id=mem1 \
+ -object memory-backend-ram,size=1G,id=mem2 \
+ -object memory-backend-ram,size=1G,id=mem3 \
+ -object memory-backend-ram,size=1G,id=mem4 \
+ -object memory-backend-ram,size=1G,id=mem5 \
+ -object memory-backend-ram,size=1G,id=mem6 \
+ -object memory-backend-ram,size=1G,id=mem7 \
+ -object memory-backend-ram,size=1G,id=mem8 \
+ -object memory-backend-ram,size=1G,id=mem9 \
+ -object memory-backend-ram,size=1G,id=mem10 \
+ -object memory-backend-ram,size=1G,id=mem11 \
+ -object memory-backend-ram,size=1G,id=mem12 \
+ -object memory-backend-ram,size=1G,id=mem13 \
+ -object memory-backend-ram,size=1G,id=mem14 \
+ -object memory-backend-ram,size=1G,id=mem15 \
+ -numa node,memdev=mem0,nodeid=0 \
+ -numa node,memdev=mem1,nodeid=1 \
+ -numa node,memdev=mem2,nodeid=2 \
+ -numa node,memdev=mem3,nodeid=3 \
+ -numa node,memdev=mem4,nodeid=4 \
+ -numa node,memdev=mem5,nodeid=5 \
+ -numa node,memdev=mem6,nodeid=6 \
+ -numa node,memdev=mem7,nodeid=7 \
+ -numa node,memdev=mem8,nodeid=8 \
+ -numa node,memdev=mem9,nodeid=9 \
+ -numa node,memdev=mem10,nodeid=10 \
+ -numa node,memdev=mem11,nodeid=11 \
+ -numa node,memdev=mem12,nodeid=12 \
+ -numa node,memdev=mem13,nodeid=13 \
+ -numa node,memdev=mem14,nodeid=14 \
+ -numa node,memdev=mem15,nodeid=15 \
+ -kernel \
+ /home/data/qemu/images/openEuler-22.03-arm64/Image \
+ -drive file=/home/data/qemu/images/openEuler-22.03-arm64/rootfs.qcow2,if=none,format=qcow2,cache=none,id=root -device virtio-blk,drive=root,id=d_root \
+ -device virtio-scsi-pci -drive file=/home/data/qemu/images/disk/disk.img,if=none,format=raw,id=dd_1 -device scsi-hd,drive=dd_1,id=disk_1 \
+ -M virt -cpu cortex-a57 \
+ -smp \
+ 8 \
+ -net nic,model=virtio-net-pci \
+ -net \
+ user,host=10.0.2.2,hostfwd=tcp::10022-:22 \
+ -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare \
+ -append \
+ "console=ttyAMA0 root=/dev/vda2 rw printk.time=y oops=panic panic_on_oops=1 panic_on_warn=1 panic=-1 net.ifnames=0 ftrace_dump_on_oops=orig_cpu debug earlyprintk=serial slub_debug=UZ selinux=0 highres=off earlycon systemd.default_timeout_start_sec=600 crashkernel=256M enable_ascend_share_pool" \
+ -nographic
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_4.sh b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_4.sh
new file mode 100755
index 000000000000..b4d95a5857f4
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/start_vm_test_4.sh
@@ -0,0 +1,25 @@
+qemu-system-aarch64 \
+ -m \
+ 16G \
+ -object memory-backend-ram,size=4G,id=mem0 \
+ -object memory-backend-ram,size=4G,id=mem1 \
+ -object memory-backend-ram,size=4G,id=mem2 \
+ -object memory-backend-ram,size=4G,id=mem3 \
+ -numa node,memdev=mem0,nodeid=0 \
+ -numa node,memdev=mem1,nodeid=1 \
+ -numa node,memdev=mem2,nodeid=2 \
+ -numa node,memdev=mem3,nodeid=3 \
+ -kernel \
+ /home/data/qemu/images/openEuler-22.03-arm64/Image \
+ -drive file=/home/data/qemu/images/openEuler-22.03-arm64/rootfs.qcow2,if=none,format=qcow2,cache=none,id=root -device virtio-blk,drive=root,id=d_root \
+ -device virtio-scsi-pci -drive file=/home/data/qemu/images/disk/disk.img,if=none,format=raw,id=dd_1 -device scsi-hd,drive=dd_1,id=disk_1 \
+ -M virt -cpu cortex-a57 \
+ -smp \
+ 8 \
+ -net nic,model=virtio-net-pci \
+ -net \
+ user,host=10.0.2.2,hostfwd=tcp::10022-:22 \
+ -fsdev local,security_model=passthrough,id=fsdev0,path=/tmp -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare \
+ -append \
+ "console=ttyAMA0 root=/dev/vda2 rw printk.time=y oops=panic panic_on_oops=1 panic_on_warn=1 panic=-1 net.ifnames=0 ftrace_dump_on_oops=orig_cpu debug earlyprintk=serial slub_debug=UZ selinux=0 highres=off earlycon systemd.default_timeout_start_sec=600 crashkernel=256M enable_ascend_share_pool" \
+ -nographic
diff --git a/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/test_nodemask.c b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/test_nodemask.c
new file mode 100644
index 000000000000..1c1fbf023327
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_alloc_nodemask/test_nodemask.c
@@ -0,0 +1,782 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+#include <string.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 100
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+
+static int check_mem_in_node(unsigned long addr, int huge, unsigned long *mask, int max_node)
+{
+ FILE *fp;
+ char buffer[512];
+ char *cmd;
+ char *p;
+ int ret;
+
+ cmd = malloc(64 + 32 * max_node);
+
+ p = cmd;
+ p += sprintf(p, "cat /proc/%d/numa_maps | grep %lx | grep kernelpagesize_kB=4", getpid(), addr);
+ fp = popen(cmd, "r");
+ if (fgets(buffer, sizeof(buffer), fp))
+ ret = !huge;
+ else
+ ret = huge;
+ pclose(fp);
+
+ p = cmd;
+ p += sprintf(p, "cat /proc/%d/numa_maps | grep %lx ", getpid(), addr);
+
+ for (int i = 0; i < max_node; i++) {
+ if (mask[i / 64] & (1UL << (i % 64))) {
+ p += sprintf(p, "| sed -r \"s/N%d=[0-9]+ //g\" ", i);
+ }
+ }
+
+ p += sprintf(p, "| sed -nr \"/N[0-9]+=[0-9]+ /p\"");
+
+ fp = popen(cmd, "r");
+ if (fgets(buffer, sizeof(buffer), fp))
+ ret = 0;
+ else
+ ret = 1;
+
+ printf("cmd: %s\n", cmd);
+ printf("%s\n", buffer);
+
+ pclose(fp);
+ free(cmd);
+ return ret;
+}
+
+static int check_mem_not_in_node(unsigned long addr, int huge, unsigned long *mask, int max_node)
+{
+ FILE *fp;
+ char buffer[256];
+ char *cmd;
+ char *p;
+ int ret;
+ unsigned long tmp = 0;
+
+ cmd = malloc(64 + 32 * max_node);
+
+ p = cmd;
+ p += sprintf(p, "cat /proc/%d/numa_maps | grep %lx | grep kernelpagesize_kB=4", getpid(), addr);
+ fp = popen(cmd, "r");
+ if (fgets(buffer, sizeof(buffer), fp))
+ ret = !huge;
+ else
+ ret = huge;
+ pclose(fp);
+
+ for (int i = 0; i < (max_node + 63) / 64; i++) {
+ tmp |= mask[i];
+ }
+
+ if (!tmp) /* no node provided */
+ goto out;
+
+ p = cmd;
+ p += sprintf(p, "cat /proc/%d/numa_maps | grep %lx | ", getpid(), addr);
+ p += sprintf(p, "sed -nr \"/");
+ for (int i = 0; i < max_node; i++) {
+ if (mask[i / 64] & (1UL << (i % 64))) {
+ printf("mask[%d / 64]: %lx\n", i, mask[i / 64]);
+ p += sprintf(p, "(N%d)|", i);
+ }
+ }
+ p += sprintf(p, "(XXX)=[0-9]+ /p\"\n");
+
+ fp = popen(cmd, "r");
+ if (fgets(buffer, sizeof(buffer), fp))
+ ret = 1;
+ else
+ ret = 0;
+
+ printf("cmd: %s\n", cmd);
+ printf("%s\n", buffer);
+
+ pclose(fp);
+out:
+ free(cmd);
+
+ return ret;
+}
+
+/*
+ * Mixed allocation scenario for mg_sp_alloc & mg_sp_alloc_nodemask,
+ * combined with the sharepool user-space print interface.
+ * Test environment requirement: 4 NUMA nodes.
+ */
+static int testcase1_1(void)
+{
+ int default_id = 1;
+ int page_size = 4096;
+ int ret = 0;
+
+ int proc_num = 10;
+ int childs[proc_num];
+ int prints_num = 3;
+ int prints[prints_num];
+
+ unsigned long node_id = 1;
+ unsigned long flags = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+ // keep printing the debug interface
+ for(int i = 0; i < prints_num; i++){
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ for(int i=0; i<proc_num; i++){
+ int pid = fork();
+ if (pid == 0){
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ while(1){
+ void *addr;
+ int ret;
+ addr = wrap_sp_alloc(default_id, page_size, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+/*
+ * Mixed allocation scenario for mg_sp_alloc & mg_sp_alloc_nodemask.
+ * Test environment requirement: 4 NUMA nodes with 4GB of memory each.
+ * The allocation size is increased to make sure different nodes are used.
+ * Includes additional read/write verification of the allocated memory.
+ */
+static int testcase1_2(void)
+{
+ int default_id = 1;
+ unsigned long mem_size = 1UL * 1024 * 1024 * 1024;
+ int page_size = 4096;
+ int ret = 0;
+
+ int proc_num = 5;
+ int childs[proc_num];
+ int prints_num = 3;
+ int prints[prints_num];
+
+ unsigned long flags = 0;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+ for(int i=0; i<proc_num; i++){
+ int pid = fork();
+ if (pid == 0){
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ for(int j=0; j<2; j++){
+ char *addr;
+ int ret;
+ char ori_c = 'a';
+ addr = wrap_sp_alloc(default_id, page_size, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ exit(1);
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+
+ memset(addr, ori_c, page_size);
+ for (size_t i = 0; i < page_size; i += 2) {
+ char c = addr[i];
+ if (c != ori_c) {
+ wrap_sp_free(addr);
+ perror("memory content check error");
+ return 1;
+ }
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, flags, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ continue;
+ }
+ pr_info("process %d multinode alloc success.", getpid());
+
+ memset(addr, ori_c, mem_size);
+ for (size_t i = 0; i < mem_size; i += 2) {
+ char c = addr[i];
+ if (c != ori_c) {
+ wrap_sp_free(addr);
+ perror("memory content check error");
+ return 1;
+ }
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ waitpid(childs[i], &status, 0);
+ }
+
+ return WEXITSTATUS(status);
+}
+/*
+ * Concurrent mg_sp_alloc_nodemask small-page and hugepage allocation scenario,
+ * mixed with sharepool user-space print interface calls
+ * Test environment: 4 NUMA nodes
+ */
+static int testcase2(void)
+{
+ int default_id = 1;
+ int page_size = 4096;
+ int ret = 0;
+
+ int proc_num = 10;
+ int childs[proc_num];
+ int prints_num = 3;
+ int prints[prints_num];
+
+ unsigned long node_id = 1;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long flags_hp = node_id << 36 | SP_SPEC_NODE_ID | SP_HUGEPAGE | SP_HUGEPAGE_ONLY;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0x2UL;
+ nodemask[1] = 0x0UL;
+
+ // Keep printing via the debug/maintenance interface
+ for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ // Fork children that repeatedly allocate and free 4K memory
+ for (int i = 0; i < proc_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ while (1) {
+ void *addr;
+ int ret;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_p, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_hp, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode hugepage alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode hugepage alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode hugepage free failed.", getpid());
+ } else {
+ pr_info("process %d multinode hugepage free success.", getpid());
+ }
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+/*
+ * Multi-group concurrent mg_sp_alloc_nodemask allocation scenario,
+ * mixed with sharepool user-space print interface calls
+ * Test environment: 4 NUMA nodes
+ */
+static int testcase3(void)
+{
+ int group_num = 10;
+ int page_size = 4096;
+ int ret = 0;
+ int childs[group_num];
+ int prints_num = 3;
+ int prints[prints_num];
+
+ unsigned long node_id = 1;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long flags_hp = node_id << 36 | SP_SPEC_NODE_ID | SP_HUGEPAGE | SP_HUGEPAGE_ONLY;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0x2UL;
+ nodemask[1] = 0x0UL;
+
+ // Keep printing via the debug/maintenance interface
+ for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ // Fork children that repeatedly allocate and free 4K memory
+ for (int i = 0; i < group_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), i);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), i);
+ }
+ while (1) {
+ void *addr;
+ int ret;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_p, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_hp, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode hugepage alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode hugepage alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode hugepage free failed.", getpid());
+ } else {
+ pr_info("process %d multinode hugepage free success.", getpid());
+ }
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < group_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+/*
+ * mg_sp_alloc_nodemask memory read/write verification,
+ * mixed with sharepool user-space print interface calls
+ * Test environment: 4 NUMA nodes
+ */
+static int testcase4(void)
+{
+ int default_id = 1;
+ int page_size = 4096;
+ int ret = 0;
+
+ unsigned long node_id = 1;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ char *addr;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, page_size, flags_p, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ return -1;
+ }
+ pr_info("process %d multinode alloc success.", getpid());
+
+ char ori_c = 'a';
+ memset(addr, ori_c, page_size);
+
+ for (size_t i = 0; i < page_size; i += 2) {
+ char c = addr[i];
+ if (c != ori_c) {
+ wrap_sp_free(addr);
+ perror("memory content check error");
+ return 1;
+ }
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+
+ unsigned long nodemask_c[2];
+ nodemask_c[0] = 0x1UL;
+ nodemask_c[1] = 0x0UL;
+
+ if (!check_mem_not_in_node(addr, 0, nodemask_c, max_node)) {
+ pr_info("page is not expected");
+ ret = -1;
+ }
+
+ return ret;
+}
+
+/*
+ * VM configuration: 16 NUMA nodes, 1GB of memory per node, 16GB in total
+ */
+static int testcase5(void)
+{
+ int default_id = 1;
+ unsigned long mem_size = 1UL * 1024 * 1024 * 1024;
+ int ret = 0;
+ int proc_num = 10;
+ int childs[proc_num];
+
+ unsigned long node_id = 1;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFFFFUL;
+ nodemask[1] = 0x0UL;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ for (int i = 0; i < proc_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ for (int j = 0; j < 3; j++) {
+ void *addr;
+ int ret;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, flags_p, nodemask, max_node);
+ if (addr == (void *)-1) {
+ pr_info("process %d multinode alloc failed.", getpid());
+ } else {
+ pr_info("process %d multinode alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+ }
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ int status;
+ for (int i = 0; i < proc_num; i++) {
+ waitpid(childs[i], &status, 0);
+ }
+
+ return 0;
+}
+
+/*
+ * VM configuration: 4 NUMA nodes, 4GB of memory per node, 16GB in total
+ */
+static int testcase6_1(void)
+{
+ system("echo 0 > /proc/sys/vm/enable_oom_killer");
+ int default_id = 1;
+ unsigned long mem_size = 9UL * 1024 * 1024 * 1024;
+ int ret = 0;
+
+ unsigned long flags_p = SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0x3UL;
+ nodemask[1] = 0x0UL;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ char *addr;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, flags_p, nodemask, max_node);
+ if (addr == MAP_FAILED) {
+ if (errno == ENOMEM || errno == EINTR) { /* EINTR for OOM */
+ ret = 0;
+ } else {
+ printf("errno[%d] is not expected\n", errno);
+ ret = -1;
+ }
+ } else {
+ printf("alloc should fail\n");
+ wrap_sp_free(addr);
+ ret = -1;
+ }
+
+ return ret;
+}
+
+/*
+ * VM configuration: 4 NUMA nodes, 4GB of memory per node, 16GB in total
+ */
+static int testcase6_2(void)
+{
+ int default_id = 1;
+ unsigned long mem_size = 12UL * 1024 * 1024 *1024;
+ int ret = 0;
+
+ unsigned long flags_p = SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ char *addr;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, flags_p, nodemask, max_node);
+ if (addr == MAP_FAILED) {
+ pr_info("alloc failed, errno:%d", errno);
+ return -1;
+ } else {
+ pr_info("alloc success");
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d multinode free failed.", getpid());
+ } else {
+ pr_info("process %d multinode free success.", getpid());
+ }
+
+ return 0;
+}
+
+/*
+ * VM configuration: 4 NUMA nodes, 4GB of memory per node, 16GB in total
+ * cgroup limits total memory usage to 1G
+ * echo 0 > /proc/sys/vm/enable_oom_killer
+ */
+static int testcase6_3(void)
+{
+ int default_id = 1;
+ unsigned long mem_size = 2UL * 1024 * 1024 * 1024;
+ int ret = 0;
+
+ unsigned long node_id = 2;
+ unsigned long flags_p = node_id << 36 | SP_SPEC_NODE_ID;
+ unsigned long nodemask[2];
+ unsigned long max_node = 128;
+ nodemask[0] = 0xFUL;
+ nodemask[1] = 0x0UL;
+
+ char *addr;
+ addr = wrap_sp_alloc_nodemask(SPG_ID_DEFAULT, mem_size, 0, nodemask, max_node);
+ if (addr == MAP_FAILED) {
+ if (errno == ENOMEM || errno == EINTR) { /* EINTR for OOM */
+ ret = 0;
+ } else {
+ printf("errno[%d] is not expected\n", errno);
+ ret = -1;
+ }
+ } else {
+ printf("alloc should fail\n");
+ ret = -1;
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1_1, "mixed mg_sp_alloc & mg_sp_alloc_nodemask allocation scenario")
+ TESTCASE_CHILD(testcase1_2, "mixed mg_sp_alloc & mg_sp_alloc_nodemask allocation scenario with bulk allocations and read/write verification")
+ TESTCASE_CHILD(testcase2, "concurrent mg_sp_alloc_nodemask small-page and hugepage allocation scenario")
+ TESTCASE_CHILD(testcase3, "multi-group concurrent mg_sp_alloc_nodemask allocation scenario")
+ TESTCASE_CHILD(testcase4, "mg_sp_alloc_nodemask memory read/write verification")
+ //TESTCASE_CHILD(testcase5, "test with a very large number of nodes")
+ TESTCASE_CHILD(testcase6_1, "mg_sp_alloc_nodemask with insufficient memory on two NUMA nodes")
+ TESTCASE_CHILD(testcase6_2, "mg_sp_alloc_nodemask allocating sufficient memory across 4 NUMA nodes")
+ //TESTCASE_CHILD(testcase6_3, "mg_sp_alloc_nodemask with cgroup memory limits")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/Makefile b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_config_dvpp_range.c b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_config_dvpp_range.c
new file mode 100644
index 000000000000..251e28618d61
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_config_dvpp_range.c
@@ -0,0 +1,367 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 08:38:18 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include "sem_use.h"
+#include "sharepool_lib.h"
+#define PAGE_NUM 100
+
+static int addgroup(int group_id)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+/* Process not in a group configures the dvpp address space; expected to succeed */
+static int testcase1(void)
+{
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_info)) {
+ pr_info("ioctl_config_dvpp_range failed unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* Process joins a group, but size, pid and device_id are invalid */
+static int testcase2(void)
+{
+ if (addgroup(10))
+ return -1;
+
+ struct sp_config_dvpp_range_info cdr_infos[] = {
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 100, // invalid
+ .pid = getpid(),
+ },
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = -1, // invalid
+ },
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = 32769, // invalid
+ },
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = 0xfffffffff, // exceeds 16G, invalid
+ .device_id = 0,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(cdr_infos) / sizeof(cdr_infos[0]); i++) {
+ if (!ioctl_config_dvpp_range(dev_fd, cdr_infos + i)) {
+ pr_info("ioctl_config_dvpp_range succeeded unexpectedly");
+ // return -1;
+ }
+ }
+
+ return 0;
+}
+
+/* Process joins a group with valid parameters; expected to succeed. Also allocate dvpp memory and verify it falls within the configured range */
+static int testcase3(void)
+{
+ int ret;
+
+ if (addgroup(100))
+ return -1;
+
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 4,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 100,
+ };
+
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_info)) {
+ pr_info("ioctl_config_dvpp_range failed unexpectedly");
+ return -1;
+ }
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("sp_alloc failed errno %d\n", errno);
+ return ret;
+ } else if (alloc_info.addr < cdr_info.start ||
+ alloc_info.addr >= cdr_info.start + cdr_info.size) {
+ pr_info("the range of addr is invalid 0x%llx\n", alloc_info.addr);
+ return -1;
+ } else
+ cleanup(&alloc_info);
+
+ return 0;
+}
+
+/* Process joins a group with valid parameters; configuring twice is expected to fail */
+static int testcase4(void)
+{
+ if (addgroup(10))
+ return -1;
+
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_info)) {
+ pr_info("ioctl_config_dvpp_range failed unexpectedly");
+ return -1;
+ }
+ if (!ioctl_config_dvpp_range(dev_fd, &cdr_info)) {
+ pr_info("ioctl_config_dvpp_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* Process A configures a dvpp address space and creates a share group.
+ * Its allocations should fall within process A's range.
+ * Process B configures a different dvpp address space,
+ * then joins the group. Expected to succeed, with a warning.
+ * Allocating again, the address still comes from process A's range.
+ */
+
+/* size: 4 0000 0000 (16G)
+ * proc A: 0x e700 0000 0000 ~ 0x e704 0000 0000
+ * proc B: 0x e600 0000 0000 ~ 0x e604 0000 0000
+ */
+struct sp_config_dvpp_range_info dvpp_infos[] = {
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ },
+ {
+ .start = DVPP_BASE + 3 * DVPP_16G,
+ .size = DVPP_16G / 2,
+ .device_id = 0,
+ },
+};
+static int semid;
+static int testcase5(void)
+{
+ int ret;
+ pid_t procA, procB;
+ unsigned long addr;
+ int spg_id = 1;
+ unsigned long size;
+
+ semid = sem_create(1234, "proc A then proc B");
+ procA = fork();
+ if (procA == 0) {
+ /* Configure the dvpp address space */
+ dvpp_infos[0].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &dvpp_infos[0])) {
+ pr_info("proc A config dvpp failed. errno: %d", errno);
+ exit(-1);
+ }
+ if (addgroup(spg_id) < 0) {
+ pr_info("add group failed.");
+ exit(-1);
+ }
+ size = PMD_SIZE;
+ addr = (unsigned long)wrap_sp_alloc(spg_id, size, SP_DVPP | SP_HUGEPAGE);
+ if (addr == -1) {
+ pr_info("alloc failed.");
+ exit(-1);
+ }
+ if (addr < dvpp_infos[0].start ||
+ addr + size > dvpp_infos[0].start + dvpp_infos[0].size) {
+ pr_info("alloc dvpp range incorrect. addr: %lx", addr);
+ exit(-1);
+ }
+ sem_inc_by_one(semid);
+ pr_info(" proc A finished.");
+ sem_check_zero(semid);
+ sem_dec_by_one(semid);
+ exit(0);
+ }
+
+ procB = fork();
+ if (procB == 0) {
+ sem_dec_by_one(semid);
+ pr_info(" proc B started.");
+ /* Configure the dvpp address space */
+ dvpp_infos[1].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &dvpp_infos[1])) {
+ pr_info("proc B config dvpp failed. errno: %d", errno);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ if (addgroup(spg_id) < 0) {
+ pr_info("add group failed");
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ sem_inc_by_one(semid);
+ exit(0);
+ }
+
+ WAIT_CHILD_STATUS(procA, out_a);
+ WAIT_CHILD_STATUS(procB, out_b);
+out_a:
+ KILL_CHILD(procB);
+out_b:
+ sem_close(semid);
+ return ret;
+}
+
+/* Process A configures a dvpp address space and creates a share group,
+ * but does not allocate memory.
+ * Process B configures a different dvpp address space and allocates
+ * pass-through memory, then joins the group. Expected to succeed, with a warning.
+ * Allocating again, the address comes from process B's range.
+ */
+static int testcase6(void)
+{
+ int ret;
+ pid_t procA, procB;
+ unsigned long addr;
+ int spg_id = 1;
+ unsigned long size;
+
+ semid = sem_create(1234, "proc A then proc B");
+ procA = fork();
+ if (procA == 0) {
+ /* Configure the dvpp address space */
+ dvpp_infos[0].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &dvpp_infos[0])) {
+ pr_info("proc A config dvpp failed. errno: %d", errno);
+ exit(-1);
+ }
+ if (addgroup(spg_id) < 0) {
+ pr_info("add group failed.");
+ exit(-1);
+ }
+ sem_inc_by_one(semid);
+ pr_info(" proc A finished.");
+ sem_check_zero(semid);
+ sem_dec_by_one(semid);
+ exit(0);
+ }
+
+ procB = fork();
+ if (procB == 0) {
+ sem_dec_by_one(semid);
+ pr_info(" proc B started.");
+ /* Configure the dvpp address space */
+ dvpp_infos[1].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &dvpp_infos[1])) {
+ pr_info("proc B config dvpp failed. errno: %d", errno);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ size = PMD_SIZE;
+ addr = (unsigned long)wrap_sp_alloc(SPG_ID_DEFAULT, size, SP_DVPP | SP_HUGEPAGE);
+ if (addr == -1) {
+ pr_info("alloc pass through failed.");
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ if (addr < dvpp_infos[1].start ||
+ addr + size > dvpp_infos[1].start + dvpp_infos[1].size) {
+ pr_info("alloc dvpp range incorrect. addr: %lx", addr);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ if (addgroup(spg_id) < 0) {
+ pr_info("add group failed");
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ /* Allocate again and verify the range */
+ addr = (unsigned long)wrap_sp_alloc(spg_id, size, SP_DVPP | SP_HUGEPAGE);
+ if (addr == -1) {
+ pr_info("alloc failed.");
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ if (addr < dvpp_infos[1].start ||
+ addr + size > dvpp_infos[1].start + dvpp_infos[1].size) {
+ pr_info("alloc dvpp range incorrect. addr: %lx", addr);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ sem_inc_by_one(semid);
+ exit(0);
+ }
+
+ WAIT_CHILD_STATUS(procA, out_a);
+ WAIT_CHILD_STATUS(procB, out_b);
+out_a:
+ KILL_CHILD(procB);
+out_b:
+ sem_close(semid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "process not in a group configures the dvpp address space, expected to succeed")
+ // TESTCASE_CHILD(testcase2, "process in a group, but size, pid or device_id invalid, expected to fail")
+ TESTCASE_CHILD(testcase3, "process in a group with valid parameters, expected to succeed; allocate dvpp memory and verify it falls within the configured range")
+ //TESTCASE_CHILD(testcase4, "process in a group with valid parameters, repeated configuration expected to fail")
+ // TESTCASE_CHILD(testcase5, "DVPP address-space merge: a process with dvpp configured but no memory allocated joins a group with a different dvpp configured and memory allocated, expected to succeed with a warning printed")
+ // TESTCASE_CHILD(testcase6, "DVPP address-space merge: a process with dvpp configured and memory allocated joins a group with a different dvpp configured but no memory allocated, expected to succeed with a warning printed")
+
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_multi_numa_node.c b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_multi_numa_node.c
new file mode 100644
index 000000000000..02d7a2317f25
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_config_dvpp_range/test_sp_multi_numa_node.c
@@ -0,0 +1,289 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 08:38:18 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+#define NUMA_NODES 4
+#define PAGE_NUM 10
+#define ALLOC_SIZE (1024UL * 1024UL * 20UL)
+#define DVPP_SIZE (0xffff0000)
+#define DEVICE_SHIFT 32
+static int addgroup(int group_id)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+struct sp_config_dvpp_range_info cdr_infos[] = { // configure a two-device environment: device 0 and device 1
+ {
+ .start = DVPP_BASE + DVPP_16G,
+ .size = DVPP_SIZE,
+ .device_id = 0,
+ },
+ {
+ .start = DVPP_BASE + DVPP_16G * 3,
+ .size = DVPP_SIZE,
+ .device_id = 1,
+ },
+};
+
+/* Process joins a group with valid parameters; expected to succeed. Also allocate dvpp memory and verify it falls within the configured range */
+static int testcase1(void)
+{
+ int ret;
+
+ if (addgroup(100))
+ return -1;
+
+ for (int i = 0; i < sizeof(cdr_infos) / sizeof(cdr_infos[0]); i++) {
+ cdr_infos[i].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_infos[i])) {
+ pr_info("ioctl_config_dvpp_range failed unexpected, errno: %d", errno);
+ return -1;
+ } else
+ pr_info("ioctl_config_dvpp_range success: node %d, start: %lx, size: %lx",
+ i, cdr_infos[i].start, cdr_infos[i].size);
+ }
+
+ return 0;
+}
+
+static int testcase2(void)
+{
+ int ret = 0;
+
+ if (addgroup(100))
+ return -1;
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP, // 4
+ .size = ALLOC_SIZE,
+ .spg_id = 100,
+ };
+
+
+ // Allocate by device_id: device_id=0 maps to node0, device_id=1 to node1
+ for (int i = 0; i < 2; i++) {
+ unsigned long device_id = i;
+ device_id = device_id << DEVICE_SHIFT;
+ alloc_info.flag = SP_DVPP | device_id;
+ pr_info("alloc %d time, flag: 0x%lx", i, alloc_info.flag);
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret == -1) {
+ pr_info("sp_alloc failed errno %d\n", errno);
+ return -1;
+ } else {
+ pr_info("alloc at device %d success! va: 0x%llx", i, alloc_info.addr);
+ }
+ }
+
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret = 0;
+
+ if (addgroup(100))
+ return -1;
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = ALLOC_SIZE,
+ .spg_id = 100,
+ };
+
+ for (int i = 0; i < NUMA_NODES; i++) {
+ unsigned long node_id = i;
+ node_id = node_id << NODE_ID_SHIFT;
+ alloc_info.flag = SP_SPEC_NODE_ID | node_id;
+ pr_info("alloc at node %d, flag: 0x%lx", i, alloc_info.flag);
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret == -1) {
+ pr_info("sp_alloc failed errno %d\n", errno);
+ ret = -1;
+ } else {
+ pr_info("alloc at node %d success! va: 0x%llx", i, alloc_info.addr);
+ }
+ }
+
+ return ret;
+}
+
+static int testcase4(void)
+{
+ int ret = 0;
+ struct vmalloc_info vmalloc_info = {
+ .size = ALLOC_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = SP_DVPP,
+ .pid = getpid(),
+ };
+
+ for (int i = 0; i < 2; i++) {
+ // Configure the DVPP address space
+ cdr_infos[i].pid = getpid();
+ if (ioctl_config_dvpp_range(dev_fd, &cdr_infos[i])) {
+ pr_info("ioctl_config_dvpp_range failed unexpected, errno: %d", errno);
+ return -1;
+ } else
+ pr_info("ioctl_config_dvpp_range success: node %d, start: %lx, size: %lx",
+ i, cdr_infos[i].start, cdr_infos[i].size);
+
+ unsigned long device_id = i;
+ device_id = device_id << DEVICE_SHIFT;
+ k2u_info.sp_flags = SP_DVPP | device_id;
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out;
+ } else if (k2u_info.addr < cdr_infos[i].start ||
+ k2u_info.addr >= cdr_infos[i].start + cdr_infos[i].size) {
+ pr_info("the range of addr is invalid 0x%llx\n", k2u_info.addr);
+ // return -1;
+ } else
+ pr_info("k2u success for device %d, addr: %#llx", i, k2u_info.addr);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0)
+ pr_info("unshare failed");
+ }
+
+out:
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+}
+
+/*
+ * Preconditions: at least 2 NUMA nodes in the system, each with at least 100M free memory
+ * Steps: 1. the process joins a group and allocates memory from node1
+ *        2. repeat the test 100 times
+ *        3. test group/pass-through, DVPP/normal, hugepage/small-page combinations separately
+ * Expected result: node1's free memory should drop by at least the amount allocated
+ */
+#define SIZE_1M 0x100000UL
+static int test_child(int spg_id, unsigned long flags)
+{
+ int node_id = 3;
+ unsigned long size = SIZE_1M * 10;
+
+ if (spg_id != SPG_ID_DEFAULT) {
+ spg_id = wrap_add_group(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+ }
+
+ flags |= (unsigned long)node_id << NODE_ID_SHIFT;
+ flags |= SP_SPEC_NODE_ID;
+ void *addr = wrap_sp_alloc(spg_id, size, flags);
+ if (addr == (void *)-1)
+ return -1;
+
+ int ret = ioctl_check_memory_node((unsigned long)addr, size, node_id);
+
+ wrap_sp_free(addr);
+
+ return ret;
+}
+
+static int test_route(int spg_id, unsigned long flags)
+{
+ int i, ret = 0;
+
+ for (i = 0; i < 20; i++) {
+ pid_t pid;
+ FORK_CHILD_ARGS(pid, test_child(spg_id, flags));
+
+ WAIT_CHILD_STATUS(pid, out);
+ }
+
+out:
+ if (ret)
+ pr_info("numa node alloc test failed: spg_id:%d, flag:%lu, i:%d", spg_id, flags, i);
+
+ return ret;
+}
+
+static int testcase5(void) { return test_route(0, 0); } // pass-through, small pages
+static int testcase6(void) { return test_route(0, SP_HUGEPAGE); } // pass-through, hugepages
+static int testcase7(void) { return test_route(1, 0); } // in group, small pages
+static int testcase8(void) { return test_route(1, SP_HUGEPAGE); } // in group, hugepages
+static int testcase9(void) { return test_route(0, SP_DVPP | 0); } // pass-through, small pages, DVPP
+static int testcase10(void) { return test_route(0, SP_DVPP | SP_HUGEPAGE); } // pass-through, hugepages, DVPP
+static int testcase11(void) { return test_route(1, SP_DVPP | 0); } // in group, small pages, DVPP
+static int testcase12(void) { return test_route(1, SP_DVPP | SP_HUGEPAGE); } // in group, hugepages, DVPP
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "process configures dvpp on both device 0 and device 1 with start and size, expected to succeed")
+ TESTCASE_CHILD(testcase2, "after configuring the dvpp address space, allocate memory by device_id, expected to succeed (device/node placement not verified)")
+ TESTCASE_CHILD(testcase3, "after configuring the dvpp address space, allocate memory by node_id, expected to succeed (node placement not verified)")
+ TESTCASE_CHILD(testcase4, "after configuring the dvpp address space, perform k2u by device_id and verify the address falls within the configured range, expected to succeed")
+ TESTCASE_CHILD(testcase5, "system has at least 2 nodes, each with at least 100M free memory. Steps: 1. the process joins a group and allocates from node1 2. repeat 100 times 3. test group/pass-through, DVPP/normal, hugepage/small-page combinations. Expected: node1's free memory drops by at least the amount allocated. Pass-through, small pages")
+ TESTCASE_CHILD(testcase6, "pass-through, hugepages")
+ TESTCASE_CHILD(testcase7, "in group, small pages")
+ TESTCASE_CHILD(testcase8, "in group, hugepages")
+ TESTCASE_CHILD(testcase9, "pass-through, small pages, DVPP")
+ TESTCASE_CHILD(testcase10, "pass-through, hugepages, DVPP")
+ TESTCASE_CHILD(testcase11, "in group, small pages, DVPP")
+ TESTCASE_CHILD(testcase12, "in group, hugepages, DVPP")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_free/Makefile b/tools/testing/sharepool/testcase/api_test/sp_free/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_free/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_free/test_sp_free.c b/tools/testing/sharepool/testcase/api_test/sp_free/test_sp_free.c
new file mode 100644
index 000000000000..246f710f4ca8
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_free/test_sp_free.c
@@ -0,0 +1,127 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 15 20:41:34 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <string.h>
+
+#include "sharepool_lib.h"
+
+#define PAGE_NUM 100
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+/*
+ * testcase1: freeing an address that was not allocated by sp_alloc, expected to fail.
+ */
+static int testcase1(void)
+{
+ int ret;
+
+ if (addgroup())
+ return -1;
+
+ char *user_addr = malloc(PAGE_NUM * PAGE_SIZE);
+ if (user_addr == NULL) {
+ pr_info("testcase1 malloc failed, errno: %d", errno);
+ return -1;
+ }
+ memset((void *)user_addr, 'q', PAGE_NUM * PAGE_SIZE);
+
+ struct sp_alloc_info fake_alloc_info = {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ .addr = (unsigned long)user_addr,
+ };
+
+ ret = ioctl_free(dev_fd, &fake_alloc_info);
+ if (ret < 0 && errno == EINVAL) {
+ pr_info("testcase1 ioctl_free failed as expected");
+ free(user_addr);
+ return 0;
+ } else {
+ pr_info("testcase1 ioctl_free did not fail with EINVAL as expected");
+ free(user_addr);
+ return -1;
+ }
+}
+
+/*
+ * testcase2: freeing an address that is not the start address returned by sp_alloc(), expected to fail.
+ */
+static int testcase2(void)
+{
+ int ret;
+ int result;
+
+ if (addgroup())
+ return -1;
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase2 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ alloc_info.addr += 1;
+ ret = ioctl_free(dev_fd, &alloc_info);
+	if (ret == 0) {
+		pr_info("testcase2 ioctl_free succeeded unexpectedly");
+		return -1;
+	} else if (ret < 0 && errno == EINVAL) {
+		pr_info("testcase2 ioctl_free failed as expected");
+		result = 0;
+	} else {
+		pr_info("testcase2 ioctl_free failed unexpectedly, errno = %d", errno);
+		result = -1;
+	}
+
+ // clean up
+ alloc_info.addr -= 1;
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("testcase2 ioctl_free failed, errno: %d", errno);
+ }
+ return result;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "free an address not allocated by sp_alloc; expected to fail.")
+	TESTCASE_CHILD(testcase2, "free an address that is not the start address returned by sp_alloc(); expected to fail.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/Makefile b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task.c
new file mode 100644
index 000000000000..54d353d9afd1
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task.c
@@ -0,0 +1,568 @@
+#include <stdio.h>
+#include <errno.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+
+/*
+ * testcase1
+ * Test point: add a valid pid to a group
+ * Expected result: joining succeeds and the correct group id is returned
+ */
+static int testcase1(void)
+{
+ int ret;
+ int group_id = 10;
+ pid_t pid;
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id != group_id) {
+			pr_info("unexpected result, ret:%d, ag_info.spg_id:%d", ret, ag_info.spg_id);
+ ret = -1;
+ } else {
+ //pr_info("testcase1 success!!");
+ ret = 0;
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ }
+
+ return ret;
+}
+
+/*
+ * testcase2
+ * Test point: add a process to a specified share group while it is exiting
+ * Expected result: joining fails with error code -ESRCH
+ */
+static int testcase2_result = 0;
+static struct sp_add_group_info testcase2_ag_info = {
+ .spg_id = 10,
+ .prot = PROT_READ | PROT_WRITE,
+};
+
+static void testcase2_sigchld_handler(int num)
+{
+ int ret = ioctl_add_group(dev_fd, &testcase2_ag_info);
+ if (!(ret < 0 && errno == ESRCH)) {
+		pr_info("unexpected result, ret: %d, errno: %d", ret, errno);
+ testcase2_result = -1;
+ }
+}
+
+static int testcase2(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sigaction osa = {0};
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase2_sigchld_handler;
+ sigaction(SIGCHLD, &sa, &osa);
+ testcase2_ag_info.pid = pid;
+
+ kill(pid, SIGKILL);
+
+ waitpid(pid, NULL, 0);
+ sigaction(SIGCHLD, &osa, NULL);
+ }
+
+ return testcase2_result;
+}
+
+/*
+ * testcase3
+ * Test point: add an invalid pid to a share group
+ * Expected result: joining fails with error code -ESRCH
+ */
+static int testcase3(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = -1,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 10,
+ };
+
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == ESRCH)) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ return -1;
+ } else {
+ //pr_info("testcase3 success!!");
+ return 0;
+ }
+}
+
+/*
+ * testcase4
+ * Test point: a valid pid (process not exiting) joins different groups repeatedly
+ * Expected result: joining fails with error code -EEXIST
+ */
+static int testcase4(void)
+{
+ int ret;
+ int group_id = 10;
+ pid_t pid;
+
+	// with multi-group support a single process may join multiple groups, so this case is obsolete
+ return 0;
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id != group_id) {
+ pr_info("first add failed, ret:%d, ag_info.spg_id:%d", ret, ag_info.spg_id);
+ ret = -1;
+ goto error_out;
+ }
+
+ ag_info.spg_id = group_id + 1;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == EEXIST)) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ ret = -1;
+ } else {
+ //pr_info("testcase4 success!!");
+ ret = 0;
+ }
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return ret;
+ }
+}
+
+/*
+ * testcase5
+ * Test point: a valid pid (process not exiting) joins the same group repeatedly
+ * Expected result: joining fails with error code -EEXIST
+ */
+static int testcase5(void)
+{
+ int ret;
+ int group_id = 10;
+ pid_t pid;
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id != group_id) {
+ pr_info("first add failed, ret:%d, ag_info.spg_id:%d", ret, ag_info.spg_id);
+ ret = -1;
+ goto error_out;
+ }
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == EEXIST)) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ ret = -1;
+ } else {
+ //pr_info("testcase5 success!!");
+ ret = 0;
+ }
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return ret;
+ }
+}
+
+/*
+ * testcase6
+ * Test point: different threads join the same group
+ * Expected result: the first thread joins successfully; the other threads
+ * fail with error code -EEXIST
+ */
+static void *testcase6_7_thread1(void *data)
+{
+ int group_id = *(int *)data;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id != group_id) {
+ pr_info("first add failed, ret:%d, ag_info.spg_id:%d", ret, ag_info.spg_id);
+ ret = -1;
+ }
+
+	return (void *)(long)ret;
+}
+
+static void *testcase6_7_thread2(void *data)
+{
+ int group_id = *(int *)data;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == EEXIST)) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ ret = -1;
+ } else
+ ret = 0;
+
+	return (void *)(long)ret;
+}
+
+static int testcase6_7_child_process(int arg)
+{
+ int ret;
+ pthread_t p1, p2;
+
+ int group1 = 10;
+ int group2 = group1 + arg;
+
+ ret = pthread_create(&p1, NULL, testcase6_7_thread1, &group1);
+ if (ret) {
+ pr_info("create thread1 failed, ret: %d", ret);
+ return -1;
+ }
+ void *result = NULL;
+ pthread_join(p1, &result);
+ if (result)
+ return -1;
+
+ ret = pthread_create(&p2, NULL, testcase6_7_thread2, &group2);
+ if (ret) {
+ pr_info("create thread2 failed, ret: %d", ret);
+ return -1;
+ }
+ pthread_join(p2, &result);
+ if (result)
+ return -1;
+
+ return 0;
+}
+
+static int testcase6(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+		int ret = testcase6_7_child_process(0);
+		if (ret)
+			pr_info("testcase6 failed!!");
+		exit(ret);
+ } else {
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+			pr_info("child process exited unexpectedly");
+ return -1;
+ }
+ return (char)WEXITSTATUS(status);
+ }
+}
+
+/*
+ * testcase7
+ * Test point: different threads join different groups
+ * Expected result: the first thread joins successfully; the other threads
+ * fail with error code -EEXIST
+ */
+static int testcase7(void)
+{
+	// with multi-group support a single process may join multiple groups, so this case is obsolete
+ return 0;
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+		int ret = testcase6_7_child_process(1);
+		if (ret)
+			pr_info("testcase7 failed!!");
+		exit(ret);
+ } else {
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+			pr_info("child process exited unexpectedly");
+ return -1;
+ }
+ return (char)WEXITSTATUS(status);
+ }
+}
+
+/*
+ * testcase8
+ * Test point: the parent joins a group, then forks a child that joins the same group
+ * Expected result: the child fails to join with error code -EEXIST
+ */
+
+/*
+ * An argument of 0 means parent and child join the same group;
+ * non-zero means they join different groups.
+ */
+static int testcase8_9_child(int arg)
+{
+ int group_id = 10;
+ pid_t pid;
+
+ char *sem_name = "/add_task_testcase8";
+	sem_t *sync = sem_open(sem_name, O_CREAT, 0600, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sem_name);
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id + arg,
+ };
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret && errno == EINTR);
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed, ret:%d, errno:%d", ret, errno);
+ } else {
+ ret = 0;
+ }
+
+ exit(ret);
+ } else {
+ int ret;
+ int status = 0;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ sem_post(sync);
+ if (ret < 0) {
+ pr_info("add group failed");
+ ret = -1;
+ goto error_out;
+ }
+
+error_out:
+ waitpid(pid, &status, 0);
+ if (!ret)
+ ret = (char)WEXITSTATUS(status);
+ return ret;
+ }
+}
+
+static int testcase8(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase8_9_child(0));
+ } else {
+ int status = 0;
+ waitpid(pid, &status, 0);
+ int ret = (char)WEXITSTATUS(status);
+ if (ret) {
+ return -1;
+ } else {
+ return 0;
+ }
+ }
+}
+
+/*
+ * testcase9
+ * Test point: the parent joins a group, then forks a child that joins a different group
+ * Expected result: the child joins successfully
+ */
+static int testcase9(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase8_9_child(1));
+ } else {
+ int status = 0;
+ waitpid(pid, &status, 0);
+ int ret = (char)WEXITSTATUS(status);
+ if (ret) {
+ return -1;
+ } else {
+ return 0;
+ }
+ }
+}
+
+/*
+ * testcase10
+ * Test point: a valid pid joins an illegal group
+ * Expected result: joining fails with error code -EINVAL
+ */
+static int testcase10(void)
+{
+ int ret = 0;
+ int group_ids[] = {0, -1, 100000, 200001, 800000, 900001};
+
+	for (int i = 0; !ret && i < ARRAY_SIZE(group_ids); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_ids[i],
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (!(ret < 0 && errno == EINVAL)) {
+ pr_info("failed, ret: %d, errno: %d", ret, errno);
+ ret = -1;
+ } else
+ ret = 0;
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ }
+ }
+
+ return ret;
+}
+
+/*
+ * testcase11
+ * Test point: a valid pid joins a group with spg_id=SPG_ID_AUTO
+ * Expected result: joining succeeds with an auto-generated group id in
+ * [SPG_ID_AUTO_MIN, SPG_ID_AUTO_MAX]
+ */
+static int testcase11(void)
+{
+ int ret;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ } else {
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0 || ag_info.spg_id < SPG_ID_AUTO_MIN
+ || ag_info.spg_id > SPG_ID_AUTO_MAX) {
+ pr_info("failed, ret: %d, errno: %d, spg_id: %d",
+ ret, errno, ag_info.spg_id);
+ ret = -1;
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ }
+
+ //pr_info("testcase11 %s!!", ret ? "failed" : "success");
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "valid pid joins a group; expected: success, correct group id returned")
+	TESTCASE(testcase2, "process joins a specified share group while exiting; expected: join fails with -ESRCH")
+	TESTCASE(testcase3, "invalid pid joins a share group; expected: join fails with -ESRCH")
+	TESTCASE(testcase4, "valid pid (not exiting) repeatedly joins different groups; expected: join fails with -EEXIST")
+	TESTCASE(testcase5, "valid pid (not exiting) repeatedly joins the same group; expected: join fails with -EEXIST")
+	TESTCASE(testcase6, "different threads join the same group; expected: first thread succeeds, the others fail with -EEXIST")
+	TESTCASE(testcase7, "different threads join different groups; expected: first thread succeeds, the others fail with -EEXIST")
+	TESTCASE(testcase8, "parent joins a group, then a forked child joins the same group; expected: child fails with -EEXIST")
+	TESTCASE(testcase9, "parent joins a group, then a forked child joins a different group; expected: child succeeds")
+	TESTCASE(testcase10, "valid pid joins an illegal group; expected: join fails with -EINVAL")
+	TESTCASE(testcase11, "valid pid joins with spg_id=SPG_ID_AUTO; expected: success, auto-generated id in [SPG_ID_AUTO_MIN, SPG_ID_AUTO_MAX]")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task2.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task2.c
new file mode 100644
index 000000000000..d31e409e110e
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task2.c
@@ -0,0 +1,254 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue May 18 03:09:01 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * A single process joins two different groups
+ * Expected: both joins succeed
+ */
+static int testcase1(void)
+{
+ int ret;
+ pid_t pid;
+
+ FORK_CHILD_DEADLOOP(pid);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+out:
+ KILL_CHILD(pid);
+ return ret;
+}
+
+/*
+ * A single process joins the same group twice
+ * Expected: the second join fails with error code EEXIST
+ */
+static int testcase2(void)
+{
+ int ret;
+ pid_t pid;
+
+ FORK_CHILD_DEADLOOP(pid);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ TEST_CHECK_FAIL(ioctl_add_group(dev_fd, &ag_info), EEXIST, out);
+
+out:
+ KILL_CHILD(pid);
+ return ret;
+}
+
+/*
+ * Multiple processes all join multiple groups
+ * Expected: all joins succeed
+ */
+#define testcase3_group_num 10
+#define testcase3_child_num 20
+static int testcase3_child(int idx, sem_t *sync)
+{
+ int ret;
+
+ SEM_WAIT(sync);
+
+ pr_info("child idx: %d", idx);
+
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret, i, j;
+ pid_t pid[testcase3_child_num];
+ sem_t *sync[testcase3_child_num];
+ int group_num = testcase3_group_num;
+ int groups[testcase3_group_num];
+
+ for (i = 0; i < testcase3_child_num; i++)
+ SEM_INIT(sync[i], i);
+
+ for (i = 0; i < testcase3_child_num; i++)
+ FORK_CHILD_ARGS(pid[i], testcase3_child(i, sync[i]));
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid[0],
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ for (i = 0; i < group_num; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ groups[i] = ag_info.spg_id;
+ }
+
+ for (i = 0; i < group_num; i++) {
+ ag_info.spg_id = groups[i];
+ for (j = 1; j < testcase3_child_num; j++) {
+ ag_info.pid = pid[j];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+ }
+
+ for (i = 0; i < testcase3_child_num; i++)
+ sem_post(sync[i]);
+
+	for (i = 0; i < testcase3_child_num; i++)
+		waitpid(pid[i], NULL, 0);
+
+out:
+ return ret;
+}
+
+/*
+ * Multiple processes all join multiple groups; the first process allocates
+ * memory and writes to it, then the other processes read the data
+ * Expected: joins succeed and the memory reads/writes succeed
+ */
+#define testcase4_group_num 10
+#define testcase4_child_num 11
+#define testcase4_buf_size 0x1024
+
+struct testcase4_data {
+ int group_id;
+ void *share_area;
+};
+
+static int testcase4_writer(sem_t **sync, struct testcase4_data *data)
+{
+ int ret, i;
+
+ SEM_WAIT(sync[0]);
+
+ for (i = 0; i < testcase4_group_num; i++) {
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = testcase4_buf_size,
+ .spg_id = data[i].group_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ data[i].share_area = (void *)alloc_info.addr;
+ memset(data[i].share_area, 'a', testcase4_buf_size);
+ }
+
+ for (i = 1; i < testcase4_child_num; i++)
+ sem_post(sync[i]);
+
+out:
+ return ret;
+}
+
+static int testcase4_reader(sem_t *sync, struct testcase4_data *data)
+{
+ int ret, i, j;
+
+ SEM_WAIT(sync);
+
+ for (i = 0; i < testcase4_group_num; i++) {
+ char *base = data[i].share_area;
+ for (j = 0; j < testcase4_buf_size; j++)
+ if (base[j] != 'a') {
+				pr_info("unexpected result: i = %d, base[%d]: %d", i, j, base[j]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase4(void)
+{
+ int ret, i, j, fd;
+ pid_t pid[testcase4_child_num];
+ sem_t *sync[testcase4_child_num];
+ struct testcase4_data *data = mmap(NULL, sizeof(*data) * testcase4_child_num,
+ PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+ if (data == MAP_FAILED) {
+ pr_info("map failed");
+ return -1;
+ }
+
+ for (i = 0; i < testcase4_child_num; i++)
+ SEM_INIT(sync[i], i);
+
+ FORK_CHILD_ARGS(pid[0], testcase4_writer(sync, data));
+ for (i = 1; i < testcase4_child_num; i++)
+ FORK_CHILD_ARGS(pid[i], testcase4_reader(sync[i], data));
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid[0],
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ for (i = 0; i < testcase4_group_num; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ data[i].group_id = ag_info.spg_id;
+ }
+
+ for (i = 0; i < testcase4_group_num; i++) {
+ ag_info.spg_id = data[i].group_id;
+ for (j = 1; j < testcase4_child_num; j++) {
+ ag_info.pid = pid[j];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ }
+ }
+
+ sem_post(sync[0]);
+
+ for (i = 0; i < testcase4_child_num; i++)
+ WAIT_CHILD_STATUS(pid[i], out_kill);
+
+ return 0;
+
+out_kill:
+ for (i = 0; i < testcase4_child_num; i++)
+ kill(pid[i], SIGKILL);
+out:
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "single process joins two different groups; expected: joins succeed")
+	TESTCASE(testcase2, "single process joins the same group twice; expected: second join fails with EEXIST")
+	TESTCASE(testcase3, "multiple processes all join multiple groups; expected: joins succeed")
+	TESTCASE(testcase4, "multiple processes join multiple groups; the first process allocates and writes memory, later processes read it; expected: joins succeed, memory reads/writes succeed")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task3.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task3.c
new file mode 100644
index 000000000000..b56999ba371e
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task3.c
@@ -0,0 +1,250 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed May 19 06:19:21 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * A process joins a group with read-only permission and tries to read and
+ * write both normal memory and share-pool memory
+ * Expected: normal memory reads/writes succeed; share-pool memory reads
+ * succeed but writes fail
+ */
+static int testcase1(void)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ char *buf = mmap(NULL, 1024, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ if (buf == MAP_FAILED) {
+ pr_info("mmap failed");
+ return -1;
+ }
+ memset(buf, 'a', 10);
+ memcpy(buf + 20, buf, 10);
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+
+ buf = (char *)alloc_info.addr;
+	/* this step should trigger a segmentation fault */
+ memset(buf, 'a', 10);
+ memcpy(buf + 20, buf, 10);
+ if (strncmp(buf, buf + 20, 10))
+ pr_info("compare failed");
+ else
+ pr_info("compare success");
+
+	// unreachable
+	pr_info("ERROR!! unreachable statement reached");
+ return -1;
+out:
+ return ret;
+}
+
+/*
+ * Process A joins a group, allocates shared memory and reads/writes it;
+ * process B joins with read-only permission and reads/writes the memory
+ * allocated by A.
+ * Expected: B's reads succeed and its writes fail.
+ */
+static int testcase2_child(char *addr, sem_t *sync)
+{
+ int ret;
+ pr_info("in child process");
+
+ SEM_WAIT(sync);
+
+ pr_info("first two char: %c %c", addr[0], addr[1]);
+ if (addr[0] != 'a' || addr[1] != 'a') {
+		pr_info("memory content check failed");
+ return -1;
+ }
+
+ addr[1] = 'b';
+
+	// unreachable
+	pr_info("ERROR!! unreachable statement reached");
+
+ return -1;
+}
+
+static int testcase2(void)
+{
+ int ret;
+ pid_t pid;
+ sem_t *sync;
+ SEM_INIT(sync, 0);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ char *buf = (char *)alloc_info.addr;
+ memset(buf, 'a', 10);
+
+ FORK_CHILD_ARGS(pid, testcase2_child(buf, sync));
+ ag_info.pid = pid;
+ ag_info.prot = PROT_READ;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ sem_post(sync);
+
+ WAIT_CHILD_SIGNAL(pid, SIGSEGV, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Test steps: two processes join the same group, A read-write and B read-only;
+ * then A allocates memory and reads/writes it, and B reads and writes it
+ * Expected result: joins succeed; A's writes succeed, B's reads succeed and
+ * its writes fail
+ */
+static int testcase3_child(char **paddr, sem_t *sync)
+{
+ int ret;
+ pr_info("in child process");
+
+ SEM_WAIT(sync);
+
+ char *addr = *paddr;
+
+ pr_info("first two char: %c %c", addr[0], addr[1]);
+ if (addr[0] != 'a' || addr[1] != 'a') {
+		pr_info("memory content check failed");
+ return -1;
+ }
+
+ addr[1] = 'b';
+
+	// unreachable
+	pr_info("ERROR!! unreachable statement reached");
+
+ return -1;
+}
+
+static int testcase3(void)
+{
+ int ret;
+ pid_t pid;
+ sem_t *sync;
+ SEM_INIT(sync, 0);
+
+	char **paddr = mmap(NULL, sizeof(*paddr), PROT_READ | PROT_WRITE,
+			MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+	if (paddr == MAP_FAILED) {
+		pr_info("mmap failed");
+		return -1;
+	}
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ FORK_CHILD_ARGS(pid, testcase3_child(paddr, sync));
+ ag_info.pid = pid;
+ ag_info.prot = PROT_READ;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = ag_info.spg_id,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ char *buf = (char *)alloc_info.addr;
+ memset(buf, 'a', 10);
+ *paddr = buf;
+
+ sem_post(sync);
+
+ WAIT_CHILD_SIGNAL(pid, SIGSEGV, out);
+
+out:
+ return ret;
+}
+
+/*
+ * A process joins a group with an invalid protection value: 0, or with
+ * invalid bits set
+ * Expected: an error is returned
+ */
+static int testcase4(void)
+{
+ int i, ret;
+
+ struct sp_add_group_info ag_info[] = {
+ {
+ .pid = getpid(),
+ .prot = 0,
+ .spg_id = SPG_ID_AUTO,
+ },
+ {
+ .pid = getpid(),
+ .prot = PROT_EXEC,
+ .spg_id = SPG_ID_AUTO,
+ },
+ {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_EXEC,
+ .spg_id = SPG_ID_AUTO,
+ },
+ {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE | PROT_EXEC,
+ .spg_id = SPG_ID_AUTO,
+ },
+ };
+
+ for (i = 0; i < ARRAY_SIZE(ag_info); i++) {
+ ret = ioctl_add_group(dev_fd, ag_info + i);
+ if (!(ret == -1 && errno == EINVAL)) {
+			pr_info("ioctl_add_group returned an unexpected result, i:%d, ret:%d, errno:%d", i, ret, errno);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD_SIGNAL(testcase1, SIGSEGV, "process joins a group read-only and tries to read/write normal and shared memory; expected: normal memory reads/writes succeed, shared memory reads succeed, writes fail")
+	TESTCASE_CHILD(testcase2, "process A joins a group, allocates shared memory and reads/writes it; process B joins read-only and reads/writes A's memory; expected: B's reads succeed, writes fail")
+	TESTCASE_CHILD(testcase3, "two processes join the same group, A read-write, B read-only; A allocates and reads/writes memory, B reads and writes it; expected: joins succeed, A's writes succeed, B's reads succeed, writes fail")
+	TESTCASE_CHILD(testcase4, "process joins a group with invalid protection (0 or invalid bits); expected: an error is returned")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task4.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task4.c
new file mode 100644
index 000000000000..fbd2073333d6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task4.c
@@ -0,0 +1,148 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat May 29 07:24:40 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_PROCESS_PER_GROUP 1024
+#define MAX_GROUP_PER_PROCESS 3000
+
+/*
+ * Test steps: a single process joins groups in a loop
+ * Expected result: the 3000th join fails and all earlier joins succeed
+ */
+static int testcase1(void)
+{
+ int i, ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ,
+ };
+
+ for (i = 0; i < MAX_GROUP_PER_PROCESS - 1; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK_FAIL(ioctl_add_group(dev_fd, &ag_info), ENOSPC, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Test steps: multiple threads join groups concurrently
+ * Expected result: the total number of successful joins across all
+ * threads is 2999
+ */
+static void *test2_thread(void *arg)
+{
+ int i, ret = 0;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ,
+ };
+
+ for (i = 0; i < MAX_GROUP_PER_PROCESS - 1; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+
+out:
+	pr_info("thread%d returned, %d groups were added successfully", (int)(long)arg, i);
+	return (void *)(long)i;
+}
+
+#define TEST2_THREAD_NUM 20
+static int testcase2(void)
+{
+	int i, ret, sum = 0;
+	void *val;
+	pthread_t th[TEST2_THREAD_NUM];
+
+	for (i = 0; i < ARRAY_SIZE(th); i++)
+		TEST_CHECK(pthread_create(th + i, NULL, test2_thread, (void *)(long)i), out);
+
+	for (i = 0; i < ARRAY_SIZE(th); i++) {
+		TEST_CHECK(pthread_join(th[i], &val), out);
+		sum += (int)(long)val;
+ }
+
+ if (sum != MAX_GROUP_PER_PROCESS - 1) {
+ pr_info("MAX_GROUP_PER_PROCESS check failed, %d", sum);
+ return -1;
+ }
+
+out:
+ return ret;
+}
+
+/*
+ * Feature not implemented
+ * Test point: the per-group process limit is 1024
+ * Test steps: the process keeps forking and adds each child to the group
+ * Expected result: 1023 processes join the group successfully
+ */
+
+static int testcase3(void)
+{
+ int i = 0, ret;
+ pid_t pid[MAX_PROCESS_PER_GROUP + 1];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+
+ for (i = 0; i < MAX_PROCESS_PER_GROUP + 1; i++) {
+ FORK_CHILD_SLEEP(pid[i]);
+ ag_info.pid = pid[i];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+
+out:
+ if (i == MAX_PROCESS_PER_GROUP - 1) {
+ ret = 0;
+ pr_info("%d processes added to a group, success", i);
+ } else {
+ ret = -1;
+ pr_info("%d processes added to a group, failed", i);
+ }
+
+	/* clamp to the last forked child if the loop ran to completion */
+	if (i > MAX_PROCESS_PER_GROUP)
+		i = MAX_PROCESS_PER_GROUP;
+	while (i >= 0) {
+		KILL_CHILD(pid[i]);
+		i--;
+	}
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "single process joins groups in a loop; expected: the 3000th join fails, all earlier joins succeed")
+	TESTCASE_CHILD(testcase2, "multiple threads join groups concurrently; expected: total successful joins across all threads is 2999")
+	TESTCASE_CHILD(testcase3, "per-group process limit is 1024; process keeps forking and adds children to the group; expected: 1023 processes join successfully")
+};
+
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task5.c b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task5.c
new file mode 100644
index 000000000000..30ae4e1f6cf9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_add_task/test_sp_group_add_task5.c
@@ -0,0 +1,113 @@
+#include <stdio.h>
+#include <errno.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define GROUP_NUM 2999
+#define PROC_NUM 100
+
+int sem;
+int sem_dump;
+
+void *thread_dump(void *arg)
+{
+ sem_dec_by_one(sem_dump);
+ generateCoredump();
+ return (void *)0;
+}
+
+static int tc1_child(void)
+{
+ int ret = 0;
+ pthread_t thread;
+
+ pthread_create(&thread, NULL, thread_dump, NULL);
+ sem_dec_by_one(sem);
+ pr_info("child %d starts add group", getpid());
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (ret < 0)
+ break;
+ }
+
+ return 0;
+}
+
+/*
+ * testcase1
+ * Test point: add a valid pid to groups
+ * Expected result: joining succeeds and the correct group id is returned
+ */
+static int testcase1(void)
+{
+ int ret;
+ int pid;
+ int child[PROC_NUM];
+
+ sem = sem_create(1234, "12");
+ sem_dump = sem_create(3456, "sem for coredump control");
+
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (ret < 0) {
+ pr_info("create group failed");
+ return -1;
+ }
+ for (int j = 0; j < 5; j++)
+ ret = wrap_sp_alloc(i + 1, 4096, 0);
+ }
+ pr_info("create all groups success\n");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(tc1_child());
+ } else {
+ child[i] = pid;
+ }
+ }
+
+ sem_inc_by_val(sem, PROC_NUM);
+ pr_info("create all processes success\n");
+
+ sleep(3);
+
+ sem_inc_by_val(sem_dump, PROC_NUM);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+out:
+ sem_close(sem);
+ sem_close(sem_dump);
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "spawn multiple processes that create groups, up to the 49999 limit, and make them coredump at the same time")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_del_task/Makefile b/tools/testing/sharepool/testcase/api_test/sp_group_del_task/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_del_task/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_del_task/test_sp_group_del_task.c b/tools/testing/sharepool/testcase/api_test/sp_group_del_task/test_sp_group_del_task.c
new file mode 100644
index 000000000000..090cfd514dae
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_del_task/test_sp_group_del_task.c
@@ -0,0 +1,1083 @@
+#include "sharepool_lib.h"
+#include "sem_use.h"
+#include <stdlib.h>
+#include <errno.h>
+#include <assert.h>
+#include <pthread.h>
+#include <sys/types.h>
+
+#define PROC_NUM 8
+#define THREAD_NUM 5
+#define GROUP_NUM 16
+#define ALLOC_TYPE 4
+#define REPEAT_TIMES 2
+#define ALLOC_SIZE PAGE_SIZE
+#define PROT (PROT_READ | PROT_WRITE)
+
+static int group_ids[GROUP_NUM];
+static int default_id = 1;
+static int semid;
+
+static int add_multi_group();
+static int check_multi_group();
+static int delete_multi_group();
+static int process();
+void *thread_and_process_helper(int group_id);
+void *del_group_thread(void *arg);
+void *del_proc_from_group(void *arg);
+
+/* testcase1: A and B join a group; A calls sp_group_del_task to leave the group. Expected to succeed */
+static int testcase1(void)
+{
+	int ret = 0;
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ sharepool_print();
+ ret = wrap_del_from_group(getpid(), default_id);
+ pr_info("\nafter delete process %d\n", getpid());
+ sharepool_print();
+ exit(ret);
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+		pr_info("child process exited unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* testcase2: A and B join the group and B has allocated memory; A calls sp_group_del_task to leave it. Expect failure */
+static int testcase2(void)
+{
+ int ret = 0;
+ void *pret;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ pret = wrap_sp_alloc(default_id, ALLOC_SIZE, 0);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = -1;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = 0;
+ }
+
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ sharepool_print();
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0) {
+ pr_info("delete failed, errno: %d", errno);
+ } else {
+ pr_info("delete success");
+ }
+ pr_info("\nafter delete process %d\n", getpid());
+ sharepool_print();
+ exit(ret);
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+		pr_info("child process exited unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* testcase3: A joins the group, then calls sp_group_del_task to leave it. Expect success */
+static int testcase3(void)
+{
+ int ret = 0;
+
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ sharepool_print();
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0) {
+ pr_info("delete failed, errno: %d", errno);
+ } else {
+ pr_info("delete success");
+ }
+ pr_info("\nafter delete process %d\n", getpid());
+ sharepool_print();
+ exit(ret);
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+		pr_info("child process exited unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* testcase4: A joins the group and allocates memory; sp_group_del_task is expected to fail. After sp_free, del again and expect success */
+static int testcase4(void)
+{
+ int ret = 0;
+
+ int pid = fork();
+ if (pid == 0) {
+ void *addr;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ addr = wrap_sp_alloc(default_id, PAGE_SIZE, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ sharepool_print();
+
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0) {
+ pr_info("delete failed, errno: %d", errno);
+ } else {
+ pr_info("delete success");
+ }
+
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0) {
+ pr_info("delete failed, errno: %d", errno);
+ } else {
+ pr_info("delete success");
+ }
+
+ pr_info("\nafter delete process %d\n", getpid());
+ sharepool_print();
+ exit(ret);
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status)) {
+		pr_info("child process exited unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* testcase5: N processes sit in groups with no allocated memory and leave them concurrently. Expect success */
+static int testcase5(void)
+{
+ int ret = 0;
+ pid_t childs[PROC_NUM];
+
+ semid = sem_create(1234, "concurrent delete from group");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ ret = add_multi_group();
+ if (ret < 0) {
+ pr_info("process %d add all groups failed.", getpid());
+ exit(-1);
+ }
+ ret = check_multi_group();
+ if (ret < 0) {
+ pr_info("process %d check all groups failed.", getpid());
+ exit(-1);
+ } else {
+ pr_info("process %d check all groups success.", getpid());
+ }
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ ret = delete_multi_group();
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ sem_close(semid);
+ return ret;
+}
+
+/* testcase6: N processes sit in a group with allocated memory and leave it concurrently. Expect failure */
+static int testcase6(void)
+{
+ int ret = 0;
+ void *pret;
+ pid_t childs[PROC_NUM];
+
+ semid = sem_create(1234, "concurrent delete from group");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ pr_info("fork child %d success", getpid());
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ pret = wrap_sp_alloc(default_id, ALLOC_SIZE, 0);
+ if (pret == (void *)-1) {
+ pr_info("alloc failed %d", getpid());
+ sem_inc_by_one(semid);
+ exit(-1);
+ } else {
+				pr_info("alloc addr: %lx", (unsigned long)pret);
+ }
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ pr_info("child %d del", getpid());
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret == 0)
+ exit(-1);
+ exit(0);
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ } else {
+ pr_info("child%d test success, %d", i, status);
+ }
+ }
+
+ sem_close(semid);
+ sharepool_print();
+
+ return ret;
+}
+
+/* testcase7: the parent allocates and frees while children keep joining and leaving the group; kill them after a while. Expect no deadlock or leak */
+static int testcase7(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("parent %d add into group failed. errno: %d", getpid(), ret);
+ return -1;
+ }
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0)
+ pr_info("child %d add into group failed. errno: %d", getpid(), ret);
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0)
+ pr_info("child %d del from group failed. errno: %d", getpid(), ret);
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+	int alloc_time = 200, j = 0;
+	void *pret;
+	while (j++ < alloc_time) {
+		pret = wrap_sp_alloc(default_id, ALLOC_SIZE, 0);
+		if (pret == (void *)-1) {
+			pr_info("alloc failed errno %d", errno);
+		} else {
+			ret = wrap_sp_free(pret);
+			if (ret < 0) {
+				pr_info("free failed, ret %d", ret);
+				goto free_error;
+			}
+		}
+	}
+
+free_error:
+ for (int i = 0; i < PROC_NUM; i++) {
+ kill(childs[i], SIGKILL);
+ int status;
+ waitpid(childs[i], &status, 0);
+ }
+
+ sharepool_print();
+ return 0;
+}
+
+/* testcase8: N processes join groups; half of them leave the groups and half simply exit. Expect stable behavior */
+static int testcase8(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+
+ semid = sem_create(1234, "half exit, half delete");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ add_multi_group();
+ check_multi_group();
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ if (getpid() % 2) {
+ pr_info("child %d exit", getpid());
+ exit(0);
+ } else {
+ pr_info("child %d del", getpid());
+ ret = delete_multi_group();
+ exit(ret);
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ sharepool_print();
+ sem_close(semid);
+ return ret;
+}
+
+/* testcase9: N processes join groups and sequentially run join-alloc-free-u2k-k2u-leave. Expect stable behavior */
+static int testcase9(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+
+	return 0; /* testcase9 is disabled for now; see the commented-out entry in testcases[] */
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ add_multi_group();
+ if (check_multi_group()) {
+ pr_info("child %d add all groups check failed.", getpid());
+ exit(-1);
+ }
+ exit(process());
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ sharepool_print();
+ return ret;
+}
+
+/* testcase10: multiple threads call the delete interface concurrently. Expect exactly one to succeed */
+static int testcase10(void)
+{
+ int ret = 0;
+ int del_fail = 0, del_succ = 0;
+ void *tret;
+ pthread_t threads[THREAD_NUM];
+
+ semid = sem_create(1234, "call del_group when all threads are ready.");
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process add group failed.");
+ return -1;
+ }
+ for (int i = 0; i < THREAD_NUM; i++) {
+		ret = pthread_create(threads + i, NULL, del_group_thread, (void *)(long)i);
+		if (ret) {
+			pr_info("pthread create failed.");
+			return -1;
+		}
+ }
+
+ // wait until all threads are created.
+ sharepool_print();
+ sem_dec_by_val(semid, THREAD_NUM);
+
+ for (int i = 0; i < THREAD_NUM; i++) {
+		ret = pthread_join(threads[i], &tret);
+		if (ret) {
+			pr_info("pthread %d join failed.", i);
+			ret = -1;
+		}
+		if ((long)tret < 0) {
+			pr_info("thread %d del failed", i);
+			del_fail++;
+		} else {
+			pr_info("thread %d del success", i);
+			del_succ++;
+		}
+ }
+
+ pr_info("thread total num: %d, del fail %d, del success %d\n",
+ THREAD_NUM, del_fail, del_succ);
+ sharepool_print();
+ return ret;
+}
+
+/* testcase11: leaving the group races with the alloc interface. Expect no deadlock or leak */
+static int testcase11(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+
+ semid = sem_create(1234, "half delete, half alloc");
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed!");
+ return -1;
+ } else if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+			if (ret < 0) {
+				pr_info("process %d add group failed.", getpid());
+				exit(-1);
+			} else {
+				pr_info("process %d add group success.", getpid());
+			}
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+			if (getpid() % 2) {
+				void *pret = wrap_sp_alloc(default_id, ALLOC_SIZE, 0);
+				if (pret == (void *)-1)
+					pr_info("child %d alloc failed. errno is: %d", getpid(), errno);
+				else
+					pr_info("child %d alloc success.", getpid());
+ } else {
+ ret = wrap_del_from_group(getpid(), default_id);
+ if (ret < 0)
+ pr_info("child %d del failed. errno is: %d", getpid(), errno);
+ else
+ pr_info("child %d del success.", getpid());
+ }
+
+ pr_info("child %d finish, sem val is %d", getpid(), sem_get_value(semid));
+			while (1)
+				; /* spin until killed by the parent */
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM); /* let child process alloc or del */
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+
+ sharepool_print(); /* observe the result */
+ sem_close(semid);
+ return ret;
+}
+
+/* testcase12: processes A and B both call del to remove process C from the group at the same time. Expect exactly one del to succeed */
+static int testcase12(void)
+{
+ int ret = 0;
+ int childs[PROC_NUM];
+ int ppid;
+ int del_fail = 0, del_succ = 0;
+
+ ppid = getpid();
+ semid = sem_create(1234, "half exit, half delete");
+ ret = wrap_add_group(ppid, PROT, default_id);
+ if (ret < 0) {
+ pr_info("parent proc %d add group failed.", ppid);
+ return -1;
+ }
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ sem_inc_by_one(semid);
+			/* a2 add group finish */
+ sem_check_zero(semid);
+ sem_dec_by_one(semid);
+
+ ret = wrap_del_from_group(ppid, default_id);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ ret = wrap_add_group(childs[i], PROT, default_id);
+ if (ret < 0) {
+ pr_info("p %d add group failed.", childs[i]);
+ return -1;
+ }
+	} /* a1 add group finish */
+ sharepool_print();
+ sem_inc_by_val(semid, PROC_NUM);
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+		if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("proc %d del failed", i);
+			del_fail++;
+		} else {
+			pr_info("proc %d del success", i);
+			del_succ++;
+		}
+ }
+
+ pr_info("del fail: %d, del success: %d", del_fail, del_succ);
+ sharepool_print();
+ sem_close(semid);
+ return del_succ == 1 ? 0 : -1;
+}
+
+/* testcase13: process A calls del to remove N processes from the group while the N processes exit concurrently */
+static int testcase13(void)
+{
+ int ret = 0;
+ void *tret;
+ int childs[PROC_NUM];
+ pthread_t threads[PROC_NUM]; /* one thread del one proc, so use PROC_NUM here */
+ int del_fail = 0, del_succ = 0;
+
+ semid = sem_create(1234, "exit & del group");
+ ret = wrap_add_group(getpid(), PROT, default_id);
+ if (ret < 0) {
+ pr_info("parent proc %d add group failed.", getpid());
+ return -1;
+ }
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ sem_inc_by_one(semid);
+ /* a2 add group finish*/
+ sem_check_zero(semid);
+ sem_dec_by_one(semid);
+
+ exit(0);
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sem_dec_by_val(semid, PROC_NUM);
+ for (int i = 0; i < PROC_NUM; i++) {
+ ret = wrap_add_group(childs[i], PROT, default_id);
+ if (ret < 0) {
+ pr_info("p %d add group failed.", childs[i]);
+ return -1;
+ }
+	} /* a1 add group finish */
+ sharepool_print();
+
+ for (int j = 0; j < PROC_NUM; j++) {
+		ret = pthread_create(threads + j, NULL, del_proc_from_group, (void *)(long)childs[j]);
+ }
+ sem_inc_by_val(semid, 2 * PROC_NUM);
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status = 0;
+ waitpid(childs[i], &status, 0);
+		if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("proc %d exited unexpectedly", i);
+		}
+ }
+
+ for (int j = 0; j < PROC_NUM; j++) {
+		ret = pthread_join(threads[j], &tret);
+		if (ret) {
+			pr_info("pthread %d join failed.", j);
+			ret = -1;
+		}
+		if ((long)tret < 0) {
+			pr_info("thread %d del failed", j);
+			del_fail++;
+		} else {
+			pr_info("thread %d del success", j);
+			del_succ++;
+		}
+ }
+
+ pr_info("del fail: %d, del success: %d", del_fail, del_succ);
+ sharepool_print();
+ sem_close(semid);
+ return 0;
+}
+
+/*
+ * Process A joins group 1000 and process B does not. A calls the del
+ * interface to remove B from group 1000. Expect failure.
+ */
+static int testcase14(void)
+{
+ pid_t pid;
+ int ret, group_id = 1000;
+
+ ret = wrap_add_group(getpid(), PROT, group_id);
+ if (ret < 0) {
+ pr_info("add group failed.");
+ return -1;
+ }
+
+ pid = fork();
+ if (pid == 0) {
+ /* do nothing for child */
+ while (1);
+ }
+
+ ret = wrap_del_from_group(pid, group_id);
+ if (!ret) {
+		pr_info("del task from group succeeded unexpectedly");
+ ret = -1;
+ } else {
+ pr_info("del task from group failed as expected");
+ ret = 0;
+ }
+
+	kill(pid, SIGKILL);
+	waitpid(pid, NULL, 0);
+	return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "A and B join the group; A calls sp_group_del_task to leave it. Expect success")
+	TESTCASE_CHILD(testcase2, "A and B join the group and B has allocated memory; A calls sp_group_del_task to leave it. Expect failure")
+	TESTCASE_CHILD(testcase3, "A joins the group, then calls sp_group_del_task to leave it. Expect success")
+	TESTCASE_CHILD(testcase4, "A joins the group and allocates memory; del is expected to fail. After sp_free, del again and expect success")
+	TESTCASE_CHILD(testcase5, "N processes sit in groups with no allocated memory and leave them concurrently. Expect success")
+	TESTCASE_CHILD(testcase6, "N processes sit in a group with allocated memory and leave it concurrently. Expect failure")
+	TESTCASE_CHILD(testcase7, "the parent allocates and frees while children keep joining and leaving the group; kill them after a while. Expect no deadlock or leak")
+	TESTCASE_CHILD(testcase8, "N processes join groups; half of them leave the groups and half simply exit. Expect stable behavior")
+	//TESTCASE_CHILD(testcase9, "N processes join groups and sequentially run join-alloc-free-u2k-k2u-leave. Expect stable behavior")
+	TESTCASE_CHILD(testcase10, "multiple threads call the delete interface concurrently. Expect exactly one to succeed")
+	TESTCASE_CHILD(testcase11, "leaving the group races with the alloc interface. Expect no deadlock or leak")
+	TESTCASE_CHILD(testcase12, "processes A and B both call del to remove process C from the group at the same time. Expect exactly one del to succeed")
+	TESTCASE_CHILD(testcase13, "process A calls del to remove N processes from the group while the N processes exit concurrently")
+	TESTCASE_CHILD(testcase14, "A joins group 1000 and B does not; A calls del to remove B from group 1000. Expect failure")
+};
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+	} else {
+		for (int i = 0; i < GROUP_NUM; i++) {
+			if (spg_ids[i] != group_ids[i]) {
+				pr_info("group id %d not consistent", i);
+				ret = -1;
+			}
+		}
+	}
+
+	return ret;
+}
+
+static int delete_multi_group()
+{
+ int ret = 0;
+ int fail = 0, suc = 0;
+ // delete from all groups
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_del_from_group(getpid(), group_ids[i]);
+		if (ret < 0) {
+			fail++;
+		} else {
+			pr_info("process %d delete from group %d success", getpid(), group_ids[i]);
+			suc++;
+		}
+ }
+
+ return fail;
+}
+
+static int process()
+{
+ int ret = 0;
+ for (int j = 0; j < REPEAT_TIMES; j++) {
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = thread_and_process_helper(group_ids[i]);
+ if (ret < 0) {
+ pr_info("thread_and_process_helper failed");
+ return -1;
+ }
+ }
+ }
+
+ return ret;
+}
+
+static int try_del_from_group(int group_id)
+{
+	int ret = wrap_del_from_group(getpid(), group_id);
+
+	return ret < 0 ? -errno : 0;
+}
+
+static int thread_and_process_helper(int group_id)
+{
+ int ret = 0, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TYPE] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TYPE] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+	// hugepage
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// hugepage, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal page, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal page
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		//pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		//pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ return 0;
+
+error:
+ return -1;
+}
+
+void *del_group_thread(void *arg)
+{
+ int ret = 0;
+	int i = (int)(long)arg;
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+	/* assumes child thread TIDs are allocated sequentially after the main thread's pid */
+	pr_info("thread %d now tries to exit from group %d", getpid() + i + 1, default_id);
+	ret = wrap_del_from_group(getpid() + i + 1, default_id);
+ if (ret < 0)
+ pthread_exit((void *)-1);
+ pthread_exit((void *)0);
+}
+
+void *del_proc_from_group(void *arg)
+{
+ sem_dec_by_one(semid);
+	pthread_exit((void *)(long)wrap_del_from_group((int)(long)arg, default_id));
+}
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/Makefile b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid.c b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid.c
new file mode 100644
index 000000000000..604de856f9ab
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid.c
@@ -0,0 +1,179 @@
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * testcase1
+ * Test point: query with a valid pid that has joined a group.
+ * Expected: the query succeeds and returns the correct group_id.
+ */
+static int testcase1(void)
+{
+ int group_id = 10;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ goto out;
+ }
+
+ ret = ioctl_find_first_group(dev_fd, pid);
+ if (ret != group_id) {
+ pr_info("failed, group_id: %d, expected: %d", ret, group_id);
+ ret = -1;
+ } else {
+ //pr_info("testcase1 success!!");
+ ret = 0;
+ }
+
+out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return ret;
+}
+
+/*
+ * testcase2
+ * Test point: valid pid whose process is exiting and had joined a group.
+ * Expected: the query fails with -ESRCH.
+ */
+static int testcase2_result = -1;
+static pid_t testcase2_child_pid;
+static void testcase2_sigchld_handler(int num)
+{
+ int ret = ioctl_find_first_group(dev_fd, testcase2_child_pid);
+ if (!(ret < 0 && errno == ESRCH)) {
+ pr_info("failed, ret: %d, errno: %d", ret, errno);
+ pr_info("testcase2 failed!!");
+ testcase2_result = -1;
+ } else {
+ //pr_info("testcase2 success!!");
+ testcase2_result = 0;
+ }
+}
+
+static int testcase2(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ pr_info("child pid %d", getpid());
+ while (1);
+ exit(-1);
+ }
+
+ struct sigaction sa = {0};
+ struct sigaction osa = {0};
+ sa.sa_handler = testcase2_sigchld_handler;
+ sigaction(SIGCHLD, &sa, &osa);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 10,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret)
+ pr_info("add group failed, errno: %d", errno);
+
+ testcase2_child_pid = pid;
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+
+ sigaction(SIGCHLD, &osa, NULL);
+
+ return testcase2_result;
+}
+
+/*
+ * testcase3
+ * Test point: query with a valid pid that has not joined any group.
+ * Expected: the query fails with -ENODEV.
+ */
+static int testcase3(void)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ while (1);
+ exit(-1);
+ }
+
+ int ret = ioctl_find_first_group(dev_fd, pid);
+ if (!(ret < 0 && errno == ENODEV)) {
+ pr_info("failed, ret: %d, errno: %d", ret, errno);
+ pr_info("testcase3 failed!!");
+ ret = -1;
+ } else {
+ //pr_info("testcase3 success!!");
+ ret = 0;
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return ret;
+}
+
+/*
+ * testcase4
+ * Test point: query with an invalid pid.
+ * Expected: the query fails with -ESRCH.
+ */
+static int testcase4(void)
+{
+ int ret = ioctl_find_first_group(dev_fd, -1);
+ if (!(ret < 0 && errno == ESRCH)) {
+ pr_info("failed, ret: %d, errno: %d", ret, errno);
+ pr_info("testcase4 failed!!");
+ ret = -1;
+ } else {
+ //pr_info("testcase4 success!!");
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "valid pid that has joined a group. Expected: query succeeds and returns the correct group_id")
+	TESTCASE(testcase2, "valid pid whose process is exiting and had joined a group. Expected: query fails with -ESRCH")
+	TESTCASE(testcase3, "valid pid that has not joined any group. Expected: query fails with -ENODEV")
+	TESTCASE(testcase4, "invalid pid. Expected: query fails with -ESRCH")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid2.c b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid2.c
new file mode 100644
index 000000000000..7b82e591fce7
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_group_id_by_pid/test_sp_group_id_by_pid2.c
@@ -0,0 +1,318 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed May 26 06:20:07 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+
+static int spg_id_query(int id, int *ids, int len)
+{
+ for (int i = 0; i < len; i++)
+ if (id == ids[i])
+ return i;
+
+ return -1;
+}
+
+/*
+ * A process joins n groups and then queries its group ids.
+ * Expect the joins to succeed and the query to return the correct ids.
+ */
+static int testcase1(void)
+{
+#define group_nr_test1 10
+ int ret = 0, nr, i;
+ int spg_id1[group_nr_test1];
+ int spg_id2[group_nr_test1 + 1];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(spg_id1); i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ spg_id1[i] = ag_info.spg_id;
+ }
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = getpid(),
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK(ioctl_find_group_by_pid(dev_fd, &info), out);
+
+	if (nr != (int)ARRAY_SIZE(spg_id1)) {
+		pr_info("sp_group_id_by_pid check failed, group_nr:%d, expect:%d", nr, (int)ARRAY_SIZE(spg_id1));
+		return -1;
+	}
+
+	for (i = 0; i < group_nr_test1; i++)
+		if (spg_id_query(spg_id2[i], spg_id1, ARRAY_SIZE(spg_id1)) < 0) {
+			pr_info("sp_group_id_by_pid check failed, spg_id %d not found", spg_id2[i]);
+			return -1;
+		}
+
+out:
+ return ret;
+}
+
+/*
+ * A process joins n groups and then queries its group ids with a buffer
+ * that is too small. Expect the joins to succeed and the query to fail
+ * with E2BIG.
+ */
+static int testcase2(void)
+{
+#define group_nr_test2 10
+ int ret = 0, nr, i;
+ int spg_id2[group_nr_test2 - 1];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ for (i = 0; i < group_nr_test2; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ }
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = getpid(),
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), E2BIG, out);
+
+out:
+ return ret;
+}
+
+/*
+ * A process that has not joined any group queries its group ids.
+ * The query fails with ENODEV.
+ */
+static int testcase3(void)
+{
+	int ret = 0, nr;
+ int spg_id2[3];
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = getpid(),
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), ENODEV, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Query the group ids of a kernel thread.
+ * Expected: the query fails with ENODEV.
+ */
+static int testcase4(void)
+{
+	int ret = 0, nr;
+ int spg_id2[3];
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = 2, // kthreadd
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), ENODEV, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Query with an invalid pid.
+ * Expected: the query fails with ESRCH.
+ */
+static int testcase5(void)
+{
+	int ret = 0, nr;
+ int spg_id2[3];
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = -1,
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), ESRCH, out);
+
+out:
+ return ret;
+}
+
+/*
+ * Query with a NULL count pointer or a NULL id buffer.
+ * Expected: the query fails with EINVAL.
+ */
+static int testcase6(void)
+{
+	int ret = 0, nr;
+ int spg_id2[2];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = getpid(),
+ .spg_ids = spg_id2,
+ .num = NULL
+ };
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), EINVAL, out);
+ info.num = &nr;
+ info.spg_ids = NULL;
+ TEST_CHECK_FAIL(ioctl_find_group_by_pid(dev_fd, &info), EINVAL, out);
+
+out:
+ return ret;
+}
+
+/*
+ * A child process joins n groups; query the child's group ids.
+ * Expected: joining succeeds and the queried group ids are correct.
+ */
+static int testcase7(void)
+{
+#define group_nr_test7 10
+ pid_t pid;
+ int ret = 0, nr, i;
+ int spg_id1[group_nr_test7];
+ int spg_id2[group_nr_test7 + 1];
+
+ FORK_CHILD_DEADLOOP(pid);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(spg_id1); i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+ spg_id1[i] = ag_info.spg_id;
+ }
+
+ nr = ARRAY_SIZE(spg_id2);
+ struct sp_group_id_by_pid_info info = {
+ .pid = pid,
+ .spg_ids = spg_id2,
+ .num = &nr,
+ };
+ TEST_CHECK(ioctl_find_group_by_pid(dev_fd, &info), out);
+
+ if (nr != ARRAY_SIZE(spg_id1)) {
+		pr_info("sp_group_id_by_pid check failed, group_nr:%d, expect:%d", nr, (int)ARRAY_SIZE(spg_id1));
+ ret = -1;
+ goto out;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(spg_id1); i++)
+ if (spg_id_query(spg_id2[i], spg_id1, ARRAY_SIZE(spg_id1)) < 0) {
+			pr_info("sp_group_id_by_pid check failed, spg_id %d not found", spg_id2[i]);
+ ret = -1;
+ goto out;
+ }
+
+out:
+ KILL_CHILD(pid);
+ return ret;
+}
+
+/*
+ * Steps: the process allocates share-pool memory directly, then queries its group.
+ * Expected: the query fails with ENODEV.
+ */
+static int testcase8(void)
+{
+ int ret = 0;
+ void *buf;
+ unsigned long size = 1024;
+
+ buf = (void *)wrap_sp_alloc(SPG_ID_DEFAULT, size, 0);
+ if (buf == (void *)-1)
+ return -1;
+
+ TEST_CHECK_FAIL(ioctl_find_first_group(dev_fd, getpid()), ENODEV, out);
+ return 0;
+
+out:
+ return ret;
+}
+
+/*
+ * Steps: perform a k2task share, then query the group.
+ * Expected: the query fails with ENODEV.
+ */
+static int testcase9(void)
+{
+ int ret = 0;
+ void *buf;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+
+ kva = wrap_vmalloc(size, true);
+ if (!kva)
+ return -1;
+
+ uva = wrap_k2u(kva, size, SPG_ID_DEFAULT, 0);
+ if (!uva) {
+ ret = -1;
+ goto out_vfree;
+ }
+
+ TEST_CHECK_FAIL(ioctl_find_first_group(dev_fd, getpid()), ENODEV, out_unshare);
+ ret = 0;
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "join n groups, then query group ids; expect joining to succeed and the queried ids to be correct")
+	TESTCASE_CHILD(testcase2, "join n groups, then query group ids with a short buffer; expect joining to succeed and the query to fail with E2BIG")
+	TESTCASE(testcase3, "query group ids without joining any group; expect the query to fail with ENODEV")
+	TESTCASE(testcase4, "query the group ids of a kernel thread; expect the query to fail with ENODEV")
+	TESTCASE(testcase5, "query with an invalid pid; expect the query to fail with ESRCH")
+	TESTCASE_CHILD(testcase6, "using an auto group id, pass a NULL spg_ids pointer / NULL count pointer; expect failure with EINVAL")
+	TESTCASE(testcase7, "child joins n groups, then query its group ids; expect joining to succeed and the queried ids to be correct")
+	TESTCASE_CHILD(testcase8, "allocate share-pool memory directly, then query the group; expect the query to fail with ENODEV")
+	TESTCASE_CHILD(testcase9, "perform k2task, then query the group; expect the query to fail with ENODEV")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_id_of_current/Makefile b/tools/testing/sharepool/testcase/api_test/sp_id_of_current/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_id_of_current/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_id_of_current/test_sp_id_of_current.c b/tools/testing/sharepool/testcase/api_test/sp_id_of_current/test_sp_id_of_current.c
new file mode 100644
index 000000000000..7aa05b2bdca6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_id_of_current/test_sp_id_of_current.c
@@ -0,0 +1,112 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Dec 19 11:29:06 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdbool.h>
+#include <pthread.h>
+#include <sys/wait.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+/*
+ * case1: without joining any group, multiple threads concurrently query the
+ * process's local group id.
+ * Expected: every query succeeds.
+ */
+/* wait for a signal, then start querying */
+static void *thread1(void *arg)
+{
+ int semid = *(int *)arg;
+
+ sem_dec_by_one(semid);
+ for (int i = 0; i < 20; i++) {
+ int ret = wrap_sp_id_of_current();
+ if (ret < 0)
+ return (void *)-1;
+ }
+
+ return NULL;
+}
+
+#define TEST1_CHILD_NUM 20
+#define TEST1_THREAD_NUM 20
+static int child1(int idx)
+{
+ void *thread_ret;
+ int semid, j, ret = 0;
+ pthread_t threads[TEST1_THREAD_NUM];
+
+ semid = sem_create(4466 + idx, "sp_id_of_current sem");
+ if (semid < 0)
+ return semid;
+
+ for (j = 0; j < TEST1_THREAD_NUM; j++) {
+		ret = pthread_create(threads + j, NULL, thread1, &semid);
+		if (ret != 0) {
+			pr_info("pthread create failed");
+			ret = -1;
+			/* release the threads already created so the joins below don't hang */
+			sem_set_value(semid, j);
+			goto out_pthread_join;
+		}
+ }
+
+ sem_set_value(semid, TEST1_THREAD_NUM);
+
+out_pthread_join:
+ for (j--; j >= 0; j--) {
+ pthread_join(threads[j], &thread_ret);
+ if (thread_ret != NULL) {
+			pr_info("child thread%d exited unexpectedly", j + 1);
+ ret = -1;
+ }
+ }
+
+ sem_close(semid);
+
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ pid_t child[TEST1_CHILD_NUM];
+
+ for (int i = 0; i < TEST1_CHILD_NUM; i++) {
+ pid_t pid = fork();
+ if (pid == 0)
+ exit(child1(i));
+ child[i] = pid;
+ }
+
+	// wait for the children to exit
+ for (int i = 0; i < TEST1_CHILD_NUM; i++) {
+ int status;
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("child%d exited unexpectedly", i);
+ ret = -1;
+ }
+ child[i] = 0;
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "without joining a group, multiple threads concurrently query the process's local group id; expect every query to succeed")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/Makefile b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u.c b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u.c
new file mode 100644
index 000000000000..24fe1d2320ca
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u.c
@@ -0,0 +1,624 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 10:45:21 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define FAKE_NUM 3
+
+/*
+ * testcase1: vmalloc_user memory; the k2u target pid has not joined a group,
+ *            spg_id == SPG_ID_DEFAULT or SPG_ID_NONE. Expected to succeed.
+ * testcase2: vmalloc_user memory; the k2u target pid has joined a group.
+ *            (1) spg_id == SPG_ID_DEFAULT: expected to succeed.
+ *            (2) spg_id == SPG_ID_NONE: expected to fail with EINVAL.
+ * testcase3: vmalloc_huge_user memory; (1) the k2u target pid has joined a group and
+ *            spg_id is in the valid range but unused. Expected to fail.
+ * testcase4: vmalloc_user memory; (1) the k2u kva does not exist: expected to fail.
+ *            (2) the k2u size exceeds the allocation, or is 0: expected to fail.
+ * testcase5: vmalloc_huge_user memory; (1) the k2u target pid has not joined a group,
+ *            spg_id == SPG_ID_NONE: expected to succeed. (2) unaligned k2u kva/size:
+ *            expected to succeed. (3) k2u with sp_flags = SP_DVPP: expected to succeed.
+ */
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int prepare(struct vmalloc_info *ka_info, bool ishuge)
+{
+ int ret;
+ if (ishuge) {
+ ret = ioctl_vmalloc_hugepage(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc_hugepage failed, errno: %d", errno);
+ return -1;
+ }
+ } else {
+ ret = ioctl_vmalloc(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = ka_info->addr,
+ .size = ka_info->size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, ka_info);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ };
+	for (int i = 0; i < ARRAY_SIZE(k2u_infos); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_infos[i]);
+ if (ret < 0) {
+ pr_info("testcase1 ioctl_k2u %d failed unexpected, errno: %d", i, errno);
+ goto out;
+ } else {
+ //pr_info("testcase1 ioctl_k2u %d success expected", i);
+ ret = ioctl_unshare(dev_fd, &k2u_infos[i]);
+ if (ret < 0) {
+ pr_info("testcase1 ioctl_unshare %d failed, errno: %d", i, errno);
+ goto out;
+ }
+ }
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (addgroup() != 0) {
+ return -1;
+ }
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_infos[0]);
+ if (ret < 0) {
+ pr_info("testcase2 ioctl_k2u 0 failed unexpected, errno: %d", errno);
+ goto out;
+ } else {
+ //pr_info("testcase2 ioctl_k2u 0 success unexpected");
+ ret = ioctl_unshare(dev_fd, &k2u_infos[0]);
+ if (ret < 0) {
+ pr_info("testcase2 ioctl_unshare 0 failed, errno: %d", errno);
+ goto out;
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_infos[1]);
+ if (ret != 0 && errno == EINVAL) {
+ //pr_info("testcase2 ioctl_k2u 1 failed expected");
+ ret = 0;
+ goto out;
+ } else if (ret != 0) {
+ pr_info("testcase2 ioctl_k2u 1 failed unexpected, errno: %d", errno);
+ goto out;
+ } else {
+ pr_info("testcase2 ioctl_k2u 1 success unexpected");
+ ioctl_unshare(dev_fd, &k2u_infos[1]);
+ ret = -1;
+ goto out;
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase3(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_AUTO_MIN,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ };
+ int errs[] = {EINVAL, ESRCH};
+
+	if (addgroup() != 0) {
+		ret = -1;
+		goto out;
+	}
+
+	for (int i = 0; i < ARRAY_SIZE(k2u_infos); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_infos[i]);
+ if (ret != 0 && errno == errs[i]) {
+ //pr_info("testcase3 ioctl_k2u %d failed expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase3 ioctl_k2u %d failed unexpected, errno: %d", i, errno);
+ goto out;
+ } else {
+ pr_info("testcase3 ioctl_k2u %d success unexpected", i);
+ ioctl_unshare(dev_fd, &k2u_infos[i]);
+ ret = -1;
+ goto out;
+ }
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase4(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct vmalloc_info fake_ka_info = {
+ .size = PAGE_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &fake_ka_info);
+	if (ret < 0) {
+		pr_info("testcase4 vmalloc failed, errno: %d", errno);
+		ret = -1;
+		goto out;
+	}
+ ioctl_vfree(dev_fd, &fake_ka_info);
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = fake_ka_info.addr,
+ .size = fake_ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+	/*
+	{
+		.kva = ka_info.addr,
+		.size = ka_info.size * FAKE_NUM,
+		.spg_id = SPG_ID_DEFAULT,
+		.sp_flags = 0,
+		.pid = getpid(),
+	},
+	*/
+	{
+ .kva = ka_info.addr,
+ .size = 0,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = 0,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ };
+
+	for (int i = 0; i < ARRAY_SIZE(k2u_infos); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_infos[i]);
+ if (ret != 0 && errno == EINVAL) {
+ //pr_info("testcase4 ioctl_k2u %d failed expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase4 ioctl_k2u %d failed unexpected, errno: %d", i, errno);
+ goto out;
+ } else {
+ pr_info("testcase4 ioctl_k2u %d success unexpected", i);
+ ioctl_unshare(dev_fd, &k2u_infos[i]);
+ ret = -1;
+ goto out;
+ }
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase5(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_infos[] = {
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size - 1,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr + 1,
+ .size = ka_info.size - 1,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ },
+ {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = SP_DVPP,
+ .pid = getpid(),
+ },
+ };
+
+	for (int i = 0; i < ARRAY_SIZE(k2u_infos); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_infos[i]);
+ if (ret < 0) {
+ pr_info("testcase5 ioctl_k2u %d failed unexpected, errno: %d", i, errno);
+ goto out;
+ } else {
+ //pr_info("testcase5 ioctl_k2u %d success expected", i);
+ ret = ioctl_unshare(dev_fd, &k2u_infos[i]);
+ if (ret < 0) {
+ pr_info("testcase5 ioctl_unshare %d failed, errno: %d", i, errno);
+ goto out;
+ }
+ }
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase6(void)
+{
+ int ret = 0;
+
+ struct sp_make_share_info k2u_info = {
+ .kva = 0,
+ .size = 4096,
+ .spg_id = 1,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase6 ioctl_k2u failed as expected, errno: %d", errno);
+		return 0;
+	} else {
+		pr_info("testcase6 ioctl_k2u succeeded unexpectedly");
+		ioctl_unshare(dev_fd, &k2u_info);
+		return -1;
+	}
+}
+
+static int testcase7(void)
+{
+ int ret = 0;
+
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = 1,
+ .sp_flags = 25,
+ .pid = getpid(),
+ };
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase7 ioctl_k2u failed as expected, errno: %d", errno);
+		ret = 0;
+	} else {
+		pr_info("testcase7 ioctl_k2u succeeded unexpectedly");
+		ioctl_unshare(dev_fd, &k2u_info);
+		ret = -1;
+	}
+
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase8(void)
+{
+ int ret = 0;
+
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 25,
+ .pid = getpid(),
+ };
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase8 ioctl_k2u failed as expected, errno: %d", errno);
+		ret = 0;
+	} else {
+		pr_info("testcase8 ioctl_k2u succeeded unexpectedly");
+		ioctl_unshare(dev_fd, &k2u_info);
+		ret = -1;
+	}
+
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase9(void)
+{
+ int ret = 0;
+#if 0
+	unsigned long flag = 0;
+ int node_id = 5;
+
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ flag |= (unsigned long)node_id << NODE_ID_SHIFT;
+ flag |= SP_SPEC_NODE_ID;
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = flag,
+ .pid = getpid(),
+ };
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase9 ioctl_k2u failed as expected, errno: %d", errno);
+		ret = 0;
+	} else {
+		pr_info("testcase9 ioctl_k2u succeeded unexpectedly");
+		ioctl_unshare(dev_fd, &k2u_info);
+		ret = -1;
+	}
+
+ ioctl_vfree(dev_fd, &ka_info);
+#endif
+ return ret;
+}
+
+static int testcase10(void)
+{
+ int ret = 0;
+#if 0
+	unsigned long flag = 0;
+ int node_id = 5;
+
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE,
+ };
+ if (prepare(&ka_info, true) != 0) {
+ return -1;
+ }
+
+ flag |= (unsigned long)node_id << NODE_ID_SHIFT;
+ flag |= SP_SPEC_NODE_ID;
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = 1,
+ .sp_flags = flag,
+ .pid = getpid(),
+ };
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+ if (ret < 0)
+ return -1;
+
+	ret = ioctl_k2u(dev_fd, &k2u_info);
+	if (ret < 0) {
+		pr_info("testcase10 ioctl_k2u failed as expected, errno: %d", errno);
+		ret = 0;
+	} else {
+		pr_info("testcase10 ioctl_k2u succeeded unexpectedly");
+		ioctl_unshare(dev_fd, &k2u_info);
+		ret = -1;
+	}
+
+ ioctl_vfree(dev_fd, &ka_info);
+#endif
+ return ret;
+}
+
+static int testcase11(void)
+{
+ int ret = 0;
+	unsigned long flag = 0;
+
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (ioctl_kmalloc(dev_fd, &ka_info) < 0) {
+ pr_info("kmalloc failed");
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = 1,
+ .sp_flags = flag,
+ .pid = getpid(),
+ };
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+ if (ret < 0)
+ return -1;
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed as expected");
+ ret = 0;
+ } else {
+		pr_info("k2u succeeded unexpectedly");
+ ret = -1;
+ }
+
+ ioctl_kfree(dev_fd, &ka_info);
+ return ret;
+}
+
+#define PROC_NUM 100
+static int testcase12(void)
+{
+ pid_t child[PROC_NUM];
+ pid_t pid;
+ int ret = 0;
+	int semid = sem_create(1234, "sem");
+
+	if (semid < 0)
+		return -1;
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid == 0) {
+ sem_dec_by_one(semid);
+ pr_info("child %d started!", getpid());
+ exit(testcase4());
+ } else {
+ child[i] = pid;
+ }
+ }
+
+ sem_inc_by_val(semid, PROC_NUM);
+ pr_info("sem released!");
+ for (int i = 0; i < PROC_NUM; i++) {
+ WAIT_CHILD_STATUS(child[i], out);
+ }
+out:
+ sem_close(semid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "vmalloc_user memory, k2u target pid not in a group, spg_id == SPG_ID_DEFAULT or SPG_ID_NONE; expected to succeed")
+	TESTCASE_CHILD_MANUAL(testcase2, "vmalloc_user memory, k2u target pid in a group: (1) spg_id == SPG_ID_DEFAULT, expected to succeed; (2) spg_id == SPG_ID_NONE, expected to fail with EINVAL") // single only
+	TESTCASE_CHILD_MANUAL(testcase3, "vmalloc_huge_user memory: (1) k2u target pid in a group, spg_id in the valid range but unused; expected to fail") // single only
+	TESTCASE_CHILD(testcase4, "vmalloc_user memory: (1) k2u kva does not exist, expected to fail; (2) k2u size exceeds the allocation or is 0, expected to fail")
+	TESTCASE_CHILD(testcase5, "vmalloc_huge_user memory: (1) k2u target pid not in a group, spg_id == SPG_ID_NONE, expected to succeed; (2) unaligned k2u kva/size, expected to succeed; (3) sp_flags = SP_DVPP, expected to succeed")
+	TESTCASE_CHILD(testcase6, "kva is 0")
+	TESTCASE_CHILD(testcase7, "k2spg invalid flag")
+	TESTCASE_CHILD(testcase8, "k2task invalid flag")
+	TESTCASE_CHILD(testcase9, "deprecated: k2spg invalid numa node flag")
+	TESTCASE_CHILD(testcase10, "deprecated: k2task invalid numa node flag")
+ TESTCASE_CHILD(testcase11, "use kmalloc memory for k2u")
+ TESTCASE_CHILD(testcase12, "BUG test")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u2.c b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u2.c
new file mode 100644
index 000000000000..79bf2ee7c665
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_k2u/test_sp_make_share_k2u2.c
@@ -0,0 +1,361 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Thu Jun 03 06:35:49 2021
+ */
+#include <stdio.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Steps: process A joins a group, the kernel allocates memory, k2spg is performed,
+ * userspace writes, the kernel checks, userspace calls unshare (join first, then k2spg).
+ * Expected: the kernel memory check succeeds.
+ */
+static int testcase1(bool is_hugepage)
+{
+ int ret = 0, spg_id;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+	// join the group a second time
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0) {
+ pr_info("add group failed. ret %d", spg_id);
+ return -1;
+ }
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva) {
+ pr_info("kva null");
+ return -1;
+ }
+
+ uva = wrap_k2u(kva, size, spg_id, 0);
+ if (!uva) {
+ pr_info("k2u failed");
+ ret = -1;
+ goto out_vfree;
+ }
+
+ pr_info("memset to uva 0x%lx", uva);
+ sleep(1);
+	for (int i = 0; i < size; i++) {
+ memset((void *)uva + i, 'a', 1);
+ pr_info("memset success at %dth byte", i);
+ KAREA_ACCESS_CHECK('a', kva + i, 1, out_unshare);
+ pr_info("kva check success at %dth byte", i);
+ }
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+/*
+ * Steps: the parent joins a group, allocates kernel memory, performs k2spg and writes the
+ * memory, then forks a child; the child joins the same group, reads (checks) and writes
+ * the shared memory, and the kernel checks (k2spg first, then join).
+ */
+static int testcase2_child(sem_t *sync, unsigned long uva, unsigned long kva, unsigned long size)
+{
+ int ret = 0, i;
+
+ SEM_WAIT(sync);
+
+ for (i = 0; i < size; i++)
+ if (((char *)uva)[i] != 'a') {
+ pr_info("buf check failed, i:%d, value:%d", i, ((char *)uva)[i]);
+ return -1;
+ }
+
+ memset((void *)uva, 'b', size);
+ KAREA_ACCESS_CHECK('b', kva, size, out);
+
+out:
+ return ret;
+}
+
+static int testcase2(bool is_hugepage)
+{
+ pid_t pid;
+ int ret = 0, spg_id;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+ sem_t *sync;
+
+ SEM_INIT(sync, (int)is_hugepage);
+
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+	// join the group a second time
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva)
+ return -1;
+
+ uva = wrap_k2u(kva, size, spg_id, 0);
+ if (!uva) {
+ ret = -1;
+ goto out_vfree;
+ }
+
+ memset((void *)uva, 'a', size);
+
+ FORK_CHILD_ARGS(pid, testcase2_child(sync, uva, kva, size));
+ ret = wrap_add_group(pid, PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0 || ret != spg_id) {
+ pr_info("add child to group %d failed, ret: %d, errno: %d", spg_id, ret, errno);
+ ret = -1;
+ KILL_CHILD(pid);
+ goto out_unshare;
+ } else
+ ret = 0;
+
+ sem_post(sync);
+
+ WAIT_CHILD_STATUS(pid, out_unshare);
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+/*
+ * Steps: the parent joins a group and allocates kernel memory, then forks a child; the
+ * child joins the same group read-only; the parent performs k2spg and the child writes
+ * the shared memory (join first, then k2spg).
+ * Expected: the userspace write fails, triggering a segmentation fault.
+ */
+static int testcase3_child(sem_t *sync, unsigned long *puva, unsigned long size)
+{
+	SEM_WAIT(sync);
+
+ memset((void *)(*puva), 'a', size);
+
+	// unreachable
+	pr_info("ERROR!! unreachable statement reached");
+
+ return -1;
+}
+
+static int testcase3(bool is_hugepage)
+{
+ pid_t pid;
+ int ret = 0, spg_id;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+ unsigned long *puva;
+ sem_t *sync;
+
+ SEM_INIT(sync, (int)is_hugepage);
+
+ puva = mmap(NULL, sizeof(*puva), PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+ if (puva == MAP_FAILED) {
+ pr_info("map failed");
+ return -1;
+ }
+
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+	// join the group a second time
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva)
+ return -1;
+
+ FORK_CHILD_ARGS(pid, testcase3_child(sync, puva, size));
+ ret = wrap_add_group(pid, PROT_READ, spg_id);
+ if (ret < 0 || ret != spg_id) {
+ pr_info("add child to group %d failed, ret: %d, errno: %d", spg_id, ret, errno);
+ ret = -1;
+ KILL_CHILD(pid);
+ goto out_vfree;
+ } else
+ ret = 0;
+
+ uva = wrap_k2u(kva, size, spg_id, 0);
+ if (!uva) {
+ pr_info("k2u failed");
+ ret = -1;
+ KILL_CHILD(pid);
+ goto out_vfree;
+ }
+ *puva = uva;
+
+ sem_post(sync);
+
+ WAIT_CHILD_SIGNAL(pid, SIGSEGV, out_unshare);
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+/*
+ * Steps: the parent joins a group, allocates kernel memory and performs k2spg, then forks
+ * a child; the child joins the same group read-only and writes the shared memory (k2spg
+ * first, then join).
+ * Expected: the userspace write fails, triggering a segmentation fault.
+ */
+static int testcase4_child(sem_t *sync, unsigned long uva, unsigned long size)
+{
+	SEM_WAIT(sync);
+
+ memset((void *)uva, 'a', size);
+
+	// unreachable
+	pr_info("ERROR!! unreachable statement reached");
+
+ return -1;
+}
+
+static int testcase4(bool is_hugepage)
+{
+ pid_t pid;
+ int ret = 0, spg_id;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+ sem_t *sync;
+
+ SEM_INIT(sync, (int)is_hugepage);
+
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+	// join the group a second time
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ return -1;
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva)
+ return -1;
+
+ uva = wrap_k2u(kva, size, spg_id, 0);
+ if (!uva) {
+ ret = -1;
+ goto out_vfree;
+ }
+
+ FORK_CHILD_ARGS(pid, testcase4_child(sync, uva, size));
+ ret = wrap_add_group(pid, PROT_READ, spg_id);
+ if (ret < 0 || ret != spg_id) {
+ pr_info("add child to group %d failed, ret: %d, errno: %d", spg_id, ret, errno);
+ ret = -1;
+ KILL_CHILD(pid);
+ goto out_unshare;
+ } else
+ ret = 0;
+
+ sem_post(sync);
+
+ WAIT_CHILD_SIGNAL(pid, SIGSEGV, out_unshare);
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+/*
+ * Steps: the kernel allocates memory, k2task is performed, userspace writes, the kernel
+ * checks, the kernel rewrites, userspace checks, userspace unshares.
+ * Expected: the kernel and userspace checks both succeed, with no other anomalies.
+ */
+static int testcase5(bool is_hugepage)
+{
+ int ret = 0, i;
+ unsigned long kva, uva;
+ unsigned long size = 1024;
+
+ kva = wrap_vmalloc(size, is_hugepage);
+ if (!kva)
+ return -1;
+
+ uva = wrap_k2u(kva, size, SPG_ID_DEFAULT, 0);
+ if (!uva) {
+ ret = -1;
+ goto out_vfree;
+ }
+
+ memset((void *)uva, 'a', size);
+ KAREA_ACCESS_CHECK('a', kva, size, out_unshare);
+ KAREA_ACCESS_SET('b', kva, size, out_unshare);
+
+ for (i = 0; i < size; i++)
+ if (((char *)uva)[i] != 'b') {
+ pr_info("buf check failed, i:%d, val:%d", i, ((char *)uva)[i]);
+ ret = -1;
+ break;
+ }
+
+out_unshare:
+ wrap_unshare(uva, size);
+out_vfree:
+ wrap_vfree(kva);
+
+ return ret;
+}
+
+static int test1(void) { return testcase1(false); }
+static int test2(void) { return testcase1(true); }
+static int test3(void) { return testcase2(false); }
+static int test4(void) { return testcase2(true); }
+static int test5(void) { return testcase3(false); }
+static int test6(void) { return testcase3(true); }
+static int test7(void) { return testcase4(false); }
+static int test8(void) { return testcase4(true); }
+static int test9(void) { return testcase5(false); }
+static int test10(void) { return testcase5(true); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(test1, "process A joins a group, kernel allocates small pages, k2spg, userspace writes, kernel checks, userspace unshares; expect the kernel check to succeed")
+	TESTCASE_CHILD(test2, "process A joins a group, kernel allocates huge pages, k2spg, userspace writes, kernel checks, userspace unshares; expect the kernel check to succeed")
+	TESTCASE_CHILD(test3, "parent joins a group, allocates kernel small pages, k2spg, writes; child joins the same group, reads (checks) and writes the shared memory, kernel checks (k2spg first, then join)")
+	TESTCASE_CHILD(test4, "parent joins a group, allocates kernel huge pages, k2spg, writes; child joins the same group, reads (checks) and writes the shared memory, kernel checks (k2spg first, then join)")
+	TESTCASE_CHILD(test5, "parent joins a group, allocates kernel small pages; child joins the same group read-only; parent performs k2spg, child writes the shared memory (join first, then k2spg); expect the write to fail with a segmentation fault")
+	TESTCASE_CHILD(test6, "parent joins a group, allocates kernel huge pages; child joins the same group read-only; parent performs k2spg, child writes the shared memory (join first, then k2spg); expect the write to fail with a segmentation fault")
+	TESTCASE_CHILD(test7, "parent joins a group, allocates kernel small pages, k2spg; child joins the same group read-only and writes the shared memory (k2spg first, then join); expect the write to fail with a segmentation fault")
+	TESTCASE_CHILD(test8, "parent joins a group, allocates kernel huge pages, k2spg; child joins the same group read-only and writes the shared memory (k2spg first, then join); expect the write to fail with a segmentation fault")
+	TESTCASE_CHILD(test9, "kernel allocates small pages, k2task, userspace writes, kernel checks, kernel rewrites, userspace checks, userspace unshares; expect both checks to succeed with no other anomalies")
+	TESTCASE_CHILD(test10, "kernel allocates huge pages, k2task, userspace writes, kernel checks, kernel rewrites, userspace checks, userspace unshares; expect both checks to succeed with no other anomalies")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/Makefile b/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/test_sp_make_share_u2k.c b/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/test_sp_make_share_u2k.c
new file mode 100644
index 000000000000..828b6bf4691a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_make_share_u2k/test_sp_make_share_u2k.c
@@ -0,0 +1,307 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 10:45:21 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+#define LARGE_PAGE_NUM 100000
+
+/*
+ * testcase1: map a user address to the kernel, all parameters valid. Expected to succeed.
+ * testcase2: map a user address to the kernel, invalid pid. Expected to fail.
+ * testcase3: map a user address to the kernel, invalid uva (never allocated, but inside
+ *            the sp range). Expected to fail.
+ * testcase4: map a user address to the kernel, size out of range. Expected to fail.
+ * testcase5: map a user address to the kernel, size 0 or unaligned uva/size. Expected to succeed.
+ * testcase6: userspace allocates hugepage memory with valid parameters, maps it to the
+ *            kernel, and the kernel checks it.
+ */
+
+static int prepare(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info->addr, 'q', alloc_info->size);
+ return ret;
+}
+
+static int cleanup(struct sp_make_share_info *u2k_info, struct sp_alloc_info *alloc_info)
+{
+ int ret = 0;
+ if (u2k_info != NULL) {
+ ret = ioctl_unshare(dev_fd, u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ if (alloc_info != NULL) {
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("testcase1 ioctl_u2k failed, errno: %d", errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ }
+
+ return cleanup(&u2k_info, &alloc_info);
+}
+
+static int testcase2(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = 0,
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret != 0 && errno == ESRCH) {
+ //pr_info("testcase2 ioctl_u2k failed as expected, errno: %d", errno);
+ } else if (ret != 0) {
+ pr_info("testcase2 ioctl_u2k failed unexpectedly, errno: %d", errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ } else {
+ pr_info("testcase2 ioctl_u2k succeeded unexpectedly");
+ cleanup(&u2k_info, &alloc_info);
+ return -1;
+ }
+
+ return cleanup(NULL, &alloc_info);
+}
+
+static int testcase3(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ if (cleanup(NULL, &alloc_info) != 0) {
+ return -1;
+ }
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret != 0 && errno == EFAULT) {
+ //pr_info("testcase3 ioctl_u2k failed as expected, errno: %d", errno);
+ return 0;
+ } else if (ret != 0) {
+ pr_info("testcase3 ioctl_u2k failed unexpectedly, errno: %d", errno);
+ return ret;
+ } else {
+ pr_info("testcase3 ioctl_u2k succeeded unexpectedly");
+ cleanup(&u2k_info, NULL);
+ return -1;
+ }
+}
+
+static int testcase4(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_infos[] = {
+ {
+ .uva = alloc_info.addr,
+ .size = LARGE_PAGE_NUM * PAGE_SIZE,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(u2k_infos) / sizeof(u2k_infos[0]); i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_infos[i]);
+ if (ret != 0 && errno == EFAULT) {
+ //pr_info("testcase4 ioctl_u2k %d failed as expected, errno: %d", i, errno);
+ } else if (ret != 0) {
+ pr_info("testcase4 ioctl_u2k %d failed unexpectedly, errno: %d", i, errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ } else {
+ pr_info("testcase4 ioctl_u2k %d succeeded unexpectedly", i);
+ cleanup(&u2k_infos[i], &alloc_info);
+ return -1;
+ }
+ }
+
+ return cleanup(NULL, &alloc_info);
+}
+
+static int testcase5(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_infos[] = {
+ {
+ .uva = alloc_info.addr,
+ .size = 0,
+ .pid = getpid(),
+ },
+ {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size - 1,
+ .pid = getpid(),
+ },
+ {
+ .uva = alloc_info.addr + 1,
+ .size = alloc_info.size - 1,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(u2k_infos) / sizeof(u2k_infos[0]); i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_infos[i]);
+ if (ret != 0) {
+ pr_info("testcase5 ioctl_u2k %d failed unexpectedly, errno: %d", i, errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ } else {
+ //pr_info("testcase5 ioctl_u2k %d success expected", i);
+ if (cleanup(&u2k_infos[i], NULL) != 0) {
+ return -1;
+ }
+ }
+ }
+ return cleanup(NULL, &alloc_info);
+}
+
+static int testcase6(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .size = PMD_SIZE * 2,
+ .spg_id = 1,
+ };
+ if (prepare(&alloc_info) != 0) {
+ return -1;
+ }
+
+ char *addr = (char *)alloc_info.addr;
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ .u2k_hugepage_checker = true,
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("testcase6 ioctl_u2k failed, errno: %d", errno);
+ cleanup(NULL, &alloc_info);
+ return ret;
+ }
+
+ return cleanup(&u2k_info, &alloc_info);
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Map a user address to the kernel; all parameters valid. Expected to succeed.")
+ TESTCASE_STUB(testcase2, "Map a user address to the kernel; invalid pid. Expected to fail.")
+ TESTCASE_CHILD(testcase3, "Map a user address to the kernel; invalid uva (never allocated, but within the sp range). Expected to fail.")
+ TESTCASE_CHILD(testcase4, "Map a user address to the kernel; size out of range. Expected to fail.")
+ TESTCASE_CHILD(testcase5, "Map a user address to the kernel; size is 0, or uva/size unaligned. Expected to succeed.")
+ TESTCASE_CHILD(testcase6, "Allocate huge-page memory in user space with valid parameters, map it to the kernel, and let the kernel check it.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_numa_maps/Makefile b/tools/testing/sharepool/testcase/api_test/sp_numa_maps/Makefile
new file mode 100644
index 000000000000..b640edb83546
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_numa_maps/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+# install: $(testcases)
+# cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_numa_maps/test_sp_numa_maps.c b/tools/testing/sharepool/testcase/api_test/sp_numa_maps/test_sp_numa_maps.c
new file mode 100644
index 000000000000..42cedd4457e7
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_numa_maps/test_sp_numa_maps.c
@@ -0,0 +1,164 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2026-2026. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ unsigned long node_id = 0;
+ if (addgroup() != 0) {
+ return -1;
+ }
+
+ /* normal */
+ struct sp_alloc_info alloc_info = {
+ .flag = (node_id << 36UL) | SP_SPEC_NODE_ID,
+ .size = 100 * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed1, errno: %d", errno);
+ return ret;
+ }
+
+ node_id = 1;
+
+ struct sp_alloc_info alloc_info2 = {
+ .flag = (node_id << 36) | SP_SPEC_NODE_ID,
+ .size = 200 * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info2);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed2, errno: %d", errno);
+ return ret;
+ }
+
+ /* hugetlb */
+ node_id = 2;
+
+ struct sp_alloc_info alloc_info3 = {
+ .flag = (node_id << 36) | SP_SPEC_NODE_ID | SP_HUGEPAGE,
+ .size = 10 * PMD_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info3);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed3, errno: %d", errno);
+ return ret;
+ }
+
+
+ /* remote */
+ // struct sp_add_group_info ag_info = {
+ // .pid = getpid(),
+ // .prot = PROT_READ | PROT_WRITE,
+ // .spg_id = 20,
+ // };
+ // ret = ioctl_add_group(dev_fd, &ag_info);
+ // if (ret < 0) {
+ // pr_info("ioctl_add_group failed, errno: %d", errno);
+ // }
+
+ // struct register_remote_range_struct info = {
+ // .spg_id = 20,
+ // .va = 0xe8b000000000,
+ // .pa = 0x1ff0000000,
+ // .size = 8 * 1024 *1024, // 8M
+ // };
+
+ // ret = ioctl_register_remote_range(dev_fd, &info);
+ // if (ret != 0 && errno == ENOMEM) {
+ // printf("ioctl_register_remote_range failed, ret: %d\n", ret);
+ // return -1;
+ // } else if (ret != 0) {
+ // printf("ioctl_register_remote_range failed, ret: %d\n", ret);
+ // return -1;
+ // }
+
+
+ /* k2u */
+ struct vmalloc_info vmalloc_info = {
+ .size = 20 * PMD_SIZE,
+ };
+
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = SP_DVPP,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ return -1;
+ } else
+ pr_info("k2u success, addr: %#lx", k2u_info.addr);
+
+ /* Manually press Ctrl+Z, then run cat /proc/sharepool/proc_stat */
+ sleep(10);
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Ctrl + Z, and then `cat /proc/sharepool/proc_stat` to show numa maps; expected: N0: 400, N2: 20480; REMOTE: 8192")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/Makefile b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg.c b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg.c
new file mode 100644
index 000000000000..ac06e14e1e64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg.c
@@ -0,0 +1,44 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ int ret;
+
+ ret = ioctl_hpage_reg_test_suite(dev_fd, (void *)1);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_hpage_reg_test_suite failed, ret: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Parameter validity checks; the first registration succeeds and the second fails")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_after_alloc.c b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_after_alloc.c
new file mode 100644
index 000000000000..797c1595240a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_after_alloc.c
@@ -0,0 +1,84 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define PAGE_NUM 100
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_NUM * PAGE_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_hpage_reg_after_alloc(dev_fd, (void *)1);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_hpage_reg_after_alloc failed, ret: %d\n", ret);
+ return ret;
+ }
+
+ return cleanup(&alloc_info);
+}
+
+/* testcase1: allocate memory first, then register
+*/
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Allocate memory first, then register")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_exec.c b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_exec.c
new file mode 100644
index 000000000000..e88d83688149
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_reg_hpage/test_sp_hpage_reg_exec.c
@@ -0,0 +1,82 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+
+static int testcase1(void)
+{
+ int ret;
+
+ ret = ioctl_hpage_reg_test_exec(dev_fd, (void *)1);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_hpage_reg_test_exec failed, ret: %d\n", ret);
+ return ret;
+ }
+
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .size = PMD_SIZE,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != -ENOMEM) {
+ pr_info("testcase1 ioctl_alloc unexpected result, ret: %d, errno: %d", ret, errno);
+ return ret;
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "After the huge-page allocation hook is registered, it is successfully invoked. Expected dmesg output: test_alloc_hugepage: execute succ.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_unshare/Makefile b/tools/testing/sharepool/testcase/api_test/sp_unshare/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_unshare/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_unshare/test_sp_unshare.c b/tools/testing/sharepool/testcase/api_test/sp_unshare/test_sp_unshare.c
new file mode 100644
index 000000000000..823060626fd9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_unshare/test_sp_unshare.c
@@ -0,0 +1,394 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 10:45:21 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: after k2u, unshare the user-space address range with size beyond the allocated range or an invalid spg_id. Expected to fail.
+ * testcase2: after k2u, unshare the user-space address range with unaligned va/size, size 0, or an invalid pid. Expected to succeed.
+ * testcase3: after u2k, unshare the kernel-space address range with size 0 or beyond the allocated range, invalid pid and spg_id, or unaligned va/size. Expected to succeed.
+ */
+
+static int prepare_kva(struct vmalloc_info *ka_info)
+{
+ int ret;
+ ret = ioctl_vmalloc(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = ka_info->addr,
+ .size = ka_info->size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, ka_info);
+ }
+ return ret;
+}
+
+static int prepare_uva(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info->addr, 'q', alloc_info->size);
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare_kva(&ka_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("testcase1 ioctl_k2u failed unexpectedly, errno: %d", errno);
+ goto out2;
+ }
+ struct sp_make_share_info back_k2u_info = k2u_info;
+
+ struct sp_make_share_info unshare_info[] = {
+ {
+ .size = ka_info.size * 2,
+ .spg_id = 20,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(unshare_info) / sizeof(unshare_info[0]); i++) {
+ k2u_info.size = unshare_info[i].size;
+ k2u_info.spg_id = unshare_info[i].spg_id;
+ k2u_info.pid = unshare_info[i].pid;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret != 0 && errno == EINVAL) {
+ pr_info("testcase1 ioctl_unshare %d failed as expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase1 ioctl_unshare %d failed unexpectedly, errno: %d", i, errno);
+ goto out1;
+ } else {
+ pr_info("testcase1 ioctl_unshare %d succeeded unexpectedly", i);
+ ret = -1;
+ goto out2;
+ }
+ }
+
+out1:
+ if (ioctl_unshare(dev_fd, &back_k2u_info) != 0) {
+ pr_info("testcase1 cleanup ioctl_unshare failed unexpectedly");
+ return -1;
+ }
+out2:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare_kva(&ka_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ struct sp_make_share_info back_k2u_info1 = k2u_info;
+
+ struct sp_make_share_info unshare_info[] = {
+ {
+ .addr = 1,
+ .size = ka_info.size - 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = ka_info.size - 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = 0,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = ka_info.size - 1,
+ .pid = 0,
+ },
+ };
+
+ struct sp_make_share_info back_k2u_info2;
+
+ for (int i = 0; i < sizeof(unshare_info) / sizeof(unshare_info[0]); i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("testcase2 ioctl_k2u failed unexpectedly, errno: %d", errno);
+ goto out2;
+ }
+ back_k2u_info2 = k2u_info;
+ k2u_info.addr += unshare_info[i].addr;
+ k2u_info.size = unshare_info[i].size;
+ k2u_info.pid = unshare_info[i].pid;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret != 0) {
+ pr_info("testcase2 ioctl_unshare %d failed unexpectedly, errno: %d", i, errno);
+ goto out1;
+ } else {
+ pr_info("testcase2 ioctl_unshare %d succeeded as expected", i);
+ }
+ k2u_info = back_k2u_info1;
+ }
+ goto out2;
+
+out1:
+ if (ioctl_unshare(dev_fd, &back_k2u_info2) != 0) {
+ pr_info("testcase2 cleanup ioctl_unshare failed unexpectedly");
+ return -1;
+ }
+out2:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+/*
+static int testcase3(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare_uva(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("testcase3 ioctl_u2k failed unexpectedly, errno: %d", errno);
+ goto out2;
+ }
+ struct sp_make_share_info back_u2k_info = u2k_info;
+
+ struct sp_make_share_info unshare_info[] = {
+ {
+ .size = 0,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .size = alloc_info.size * 2,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .size = alloc_info.size,
+ .spg_id = 1,
+ .pid = 0,
+ },
+ {
+ .size = alloc_info.size,
+ .spg_id = SPG_ID_AUTO_MIN,
+ .pid = getpid(),
+ },
+ };
+
+ for (int i = 0; i < sizeof(unshare_info) / sizeof(unshare_info[0]); i++) {
+ u2k_info.size = unshare_info[i].size;
+ u2k_info.spg_id = unshare_info[i].spg_id;
+ u2k_info.pid = unshare_info[i].pid;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret != 0 && errno == EINVAL) {
+ pr_info("testcase3 ioctl_unshare %d failed as expected", i);
+ ret = 0;
+ } else if (ret != 0) {
+ pr_info("testcase3 ioctl_unshare %d failed unexpectedly, errno: %d", i, errno);
+ goto out1;
+ } else {
+ pr_info("testcase3 ioctl_unshare %d succeeded unexpectedly", i);
+ ret = -1;
+ goto out2;
+ }
+ }
+
+out1:
+ if (ioctl_unshare(dev_fd, &back_u2k_info) != 0) {
+ pr_info("testcase3 cleanup ioctl_unshare failed unexpectedly");
+ return -1;
+ }
+out2:
+ if (ioctl_free(dev_fd, &alloc_info) != 0) {
+ pr_info("testcase3 cleanup ioctl_free failed unexpectedly");
+ return -1;
+ }
+ return ret;
+}
+*/
+
+static int testcase3(void)
+{
+ int ret;
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+ if (prepare_uva(&alloc_info) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ struct sp_make_share_info back_u2k_info1 = u2k_info;
+
+ struct sp_make_share_info unshare_info[] = {
+ {
+ .addr = 1,
+ .size = alloc_info.size - 1,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = alloc_info.size - 1,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = 0,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = alloc_info.size / 2,
+ .spg_id = 1,
+ .pid = getpid(),
+ },
+ {
+ .addr = 0,
+ .size = alloc_info.size,
+ .spg_id = 1,
+ .pid = 0,
+ },
+ {
+ .addr = 0,
+ .size = alloc_info.size,
+ .spg_id = SPG_ID_AUTO_MIN,
+ .pid = getpid(),
+ },
+ };
+
+ struct sp_make_share_info back_u2k_info2;
+
+ for (int i = 0; i < sizeof(unshare_info) / sizeof(unshare_info[0]); i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("testcase3 ioctl_u2k failed unexpectedly, errno: %d", errno);
+ goto out2;
+ }
+ back_u2k_info2 = u2k_info;
+ u2k_info.addr += unshare_info[i].addr;
+ u2k_info.size = unshare_info[i].size;
+ u2k_info.spg_id = unshare_info[i].spg_id;
+ u2k_info.pid = unshare_info[i].pid;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret != 0) {
+ pr_info("testcase3 ioctl_unshare %d failed unexpectedly, errno: %d", i, errno);
+ goto out1;
+ } else {
+ pr_info("testcase3 ioctl_unshare %d succeeded as expected", i);
+ }
+ u2k_info = back_u2k_info1;
+ }
+ goto out2;
+
+out1:
+ if (ioctl_unshare(dev_fd, &back_u2k_info2) != 0) {
+ pr_info("testcase3 cleanup ioctl_unshare failed unexpectedly");
+ return -1;
+ }
+out2:
+ if (ioctl_free(dev_fd, &alloc_info) != 0) {
+ pr_info("testcase3 cleanup ioctl_free failed unexpectedly");
+ return -1;
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "After k2u, unshare the user-space address range with size beyond the allocated range or an invalid spg_id. Expected to fail.")
+ TESTCASE_CHILD(testcase2, "After k2u, unshare the user-space address range with unaligned va/size, size 0, or an invalid pid. Expected to succeed.")
+ TESTCASE_CHILD(testcase3, "After u2k, unshare the kernel-space address range with size 0 or beyond the allocated range, invalid pid and spg_id, or unaligned va/size. Expected to succeed.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/Makefile b/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/Makefile
new file mode 100644
index 000000000000..592e58e18fc9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/api_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/test_sp_walk_page_range_and_free.c b/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/test_sp_walk_page_range_and_free.c
new file mode 100644
index 000000000000..7669dc57cd83
--- /dev/null
+++ b/tools/testing/sharepool/testcase/api_test/sp_walk_page_range_and_free/test_sp_walk_page_range_and_free.c
@@ -0,0 +1,339 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Nov 28 08:13:17 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include "sharepool_lib.h"
+
+
+
+static int child(struct sp_alloc_info *alloc_info)
+{
+ int ret = 0;
+ int group_id = alloc_info->spg_id;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ };
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (ret < 0) {
+ pr_info("ioctl_walk_page_range failed, errno: %d", errno);
+ return -1;
+ }
+
+ ioctl_walk_page_free(dev_fd, &wpr_info);
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .spg_id = 10,
+ .size = 10 * PAGE_SIZE,
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .spg_id = 12,
+ .size = 10 * PMD_SIZE,
+ },
+ {
+ .flag = SP_DVPP,
+ .spg_id = 19,
+ .size = 100000,
+ },
+ {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = 19,
+ .size = 10000000,
+ },
+ };
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(alloc_infos + i));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase1 failed!!, i: %d", i);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+/* invalid uva */
+static int testcase2(void)
+{
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = 0xe800000000,
+ .size = 1000,
+ };
+ int ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* size is 0, or size exceeds the limit */
+static int testcase3(void)
+{
+ int ret = 0;
+ int group_id = 100;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = 12345,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size * 10,
+ };
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* size exceeds the limit; the range contains unmapped physical pages */
+static int testcase4(void)
+{
+ int ret = 0;
+ int group_id = 130;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .spg_id = group_id,
+ .size = PMD_SIZE * 2,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = alloc_info.addr - PMD_SIZE,
+ .size = alloc_info.size * 10,
+ };
+ pr_info("uva is %lx, size is %lx", wpr_info.uva, wpr_info.size);
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* valid uva, size out of bounds */
+static int testcase5(void)
+{
+ unsigned long size = 0xffffffffffffffff;
+ int ret = 0;
+ unsigned long addr;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1) < 0)
+ return -1;
+
+ addr = (unsigned long)wrap_sp_alloc(1, PMD_SIZE, 1);
+ if (addr == -1) {
+ pr_info("alloc failed");
+ return -1;
+ }
+
+ ret = wrap_walk_page_range(addr, size);
+ if (!ret) {
+ pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* uva and size are both valid, but the vma contains holes with no page tables; expected to fail, with no leaks */
+static int testcase6(void)
+{
+ int ret = 0;
+ void *addr;
+ unsigned long size = 3 * PAGE_SIZE;
+
+ addr = mmap(NULL, size, PROT_WRITE | PROT_READ,
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ if (addr == MAP_FAILED) {
+ pr_info("mmap failed! errno %d", errno);
+ return -1;
+ }
+
+	/* do not touch the pages yet; walk_page is expected to fail */
+	ret = wrap_walk_page_range((unsigned long)addr, size);
+ if (!ret) {
+		pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+	/* touch the pages, then walk the touched size again; expected to succeed */
+ size = 2 * PAGE_SIZE;
+ memset(addr, 0, size);
+
+	struct sp_walk_page_range_info wpr_info = {
+		.uva = (unsigned long)addr,
+		.size = size,
+ };
+ pr_info("uva is %lx, size is %lx", wpr_info.uva, wpr_info.size);
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (ret < 0) {
+		pr_info("ioctl_walk_page_range failed unexpectedly, ret %d", ret);
+ return -1;
+ }
+ ret = ioctl_walk_page_free(dev_fd, &wpr_info);
+ if (ret < 0) {
+		pr_info("ioctl_walk_page_free failed unexpectedly, ret %d", ret);
+ return -1;
+ }
+
+ return 0;
+}
+
+/* uva and size are valid, but the vma contains holes with no page tables: expect failure and no leak; walking one extra page after touching is also expected to fail */
+static int testcase7(void)
+{
+ int ret = 0;
+ void *addr;
+ unsigned long size = 20 * PMD_SIZE;
+
+ addr = mmap(NULL, size, PROT_WRITE | PROT_READ,
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ if (addr == MAP_FAILED) {
+ pr_info("mmap failed! errno %d", errno);
+ return -1;
+ }
+
+	/* do not touch the pages yet; walk_page is expected to fail */
+	ret = wrap_walk_page_range((unsigned long)addr, size);
+ if (!ret) {
+		pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+	/* touch part of the range */
+ size = 5 * PMD_SIZE;
+ memset(addr, 0, size);
+	/* walk one extra page beyond the touched range; expected to fail */
+	ret = wrap_walk_page_range((unsigned long)addr, size + PMD_SIZE);
+ if (!ret) {
+		pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+#define TASK_SIZE 0xffffffffffff
+/* uva at TASK_SIZE - 1: no vma can be found there, expect failure */
+static int testcase8(void)
+{
+ int ret = 0;
+ void *addr = (void *)(TASK_SIZE);
+ unsigned long size = 3 * PAGE_SIZE;
+
+	ret = wrap_walk_page_range((unsigned long)addr, size);
+ if (!ret) {
+		pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "walk normal pages, huge pages, dvpp normal pages and dvpp huge pages; expect success")
+	TESTCASE_CHILD(testcase2, "invalid uva; expect failure")
+	TESTCASE_CHILD(testcase3, "size is 0 or exceeds the limit; expect failure")
+//	TESTCASE_CHILD(testcase4, "size exceeds the limit and the range contains unmapped physical pages; expect failure")
+	TESTCASE_CHILD(testcase5, "valid uva, size out of bounds; expect failure")
+	TESTCASE_CHILD(testcase6, "valid uva and size but the vma has no page tables: expect failure, no leak; after touching the pages, walking the vma again succeeds")
+	TESTCASE_CHILD(testcase7, "valid uva and size but the vma has no page tables: expect failure, no leak; after touching the pages, walking the vma plus one extra page fails")
+	TESTCASE_CHILD(testcase8, "pass uva = (TASK_SIZE - 1): no vma is found, expect failure")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/Makefile b/tools/testing/sharepool/testcase/dts_bugfix_test/Makefile
new file mode 100644
index 000000000000..e5f6e448e506
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/Makefile
@@ -0,0 +1,15 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ mkdir -p $(TOOL_BIN_DIR)/dts_bugfix_test
+ cp $(testcases) $(TOOL_BIN_DIR)/dts_bugfix_test
+ cp dts_bugfix_test.sh $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/dts_bugfix_test.sh b/tools/testing/sharepool/testcase/dts_bugfix_test/dts_bugfix_test.sh
new file mode 100755
index 000000000000..f22ee6d0d0db
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/dts_bugfix_test.sh
@@ -0,0 +1,43 @@
+#!/bin/sh
+
+set -x
+
+echo 'test_01_coredump_k2u_alloc
+ test_02_spg_not_alive
+ test_08_addr_offset' | while read line
+do
+ let flag=0
+ ./dts_bugfix_test/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase dts_bugfix_test/$line failed"
+ let flag=1
+ fi
+
+ sleep 3
+
+	# dump spa_stat
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+ let flag=1
+ fi
+	# dump proc_stat
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+ let flag=1
+ fi
+
+ cat /proc/sharepool/proc_overview
+	# if there is a leak --> exit
+ if [ $flag -eq 1 ] ;then
+ exit 1
+ fi
+ echo "testcase dts_bugfix_test/$line success"
+
+ cat /proc/meminfo
+ free -m
+
+done
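A note on the script above: the `exit 1` inside the `echo … | while read` pipeline runs in a subshell, so it terminates only the loop, not the whole script; the script still reports failure because `done` is its last command, making the subshell's status the script's status. A minimal standalone illustration of this POSIX shell behaviour:

```shell
#!/bin/sh
# `exit` inside a piped while-loop only leaves the subshell running the loop;
# its value becomes the pipeline's exit status, which the parent must check.
printf 'a\nb\n' | while read -r line; do
	if [ "$line" = "b" ]; then
		exit 1	# leaves the subshell, not the whole script
	fi
done
status=$?
echo "pipeline status: $status"	# prints: pipeline status: 1
[ "$status" -eq 1 ] || exit 1
echo "script continues after the loop"
```

Because of this, per-iteration failures must be propagated explicitly (as the script does with `exit 1` as the loop's final action) rather than expecting `exit` to abort the outer script.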
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_01_coredump_k2u_alloc.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_01_coredump_k2u_alloc.c
new file mode 100644
index 000000000000..4842fde68720
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_01_coredump_k2u_alloc.c
@@ -0,0 +1,603 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 06:59:45 2020
+ */
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdlib.h> /* rand() and srand() */
+#include <time.h> /* time() */
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define PROC_NUM 128
+#define GROUP_ID 1
+#define K2U_UNSHARE_TIME 2
+#define ALLOC_FREE_TIME 2
+#define VMALLOC_SIZE 4096
+#define PROT (PROT_READ | PROT_WRITE)
+#define GROUP_NUM 4
+#define K2U_CONTINUOUS_TIME 2000
+#define min(a,b) ((a)<(b)?(a):(b))
+
+/* testcase base:
+ * One process per group performs k2u/alloc while N other processes join multiple groups and then coredump in turn; every k2u/alloc in every group must keep succeeding.
+ * Diagnostic output stays normal throughout.
+ * After all processes have coredumped the test exits; diagnostics must show zero groups and zero spa entries, i.e. no leak.
+ */
+
+static int semid[PROC_NUM];
+static int sem_task;
+static int group_ids[GROUP_NUM];
+
+struct k2u_args {
+ int with_print;
+ int k2u_whole_times; // repeat times
+ int (*k2u_tsk)(struct k2u_args);
+};
+
+struct task_param {
+ bool with_print;
+};
+
+struct test_setting {
+ int (*task)(struct task_param*);
+ struct task_param *task_param;
+};
+
+static int init_sem();
+static int close_sem();
+static int k2u_unshare_task(struct task_param *task_param);
+static int k2u_continuous_task(struct task_param *task_param);
+static int child_process(int index);
+static int alloc_free_task(struct task_param *task_param);
+static int alloc_continuous_task(struct task_param *task_param);
+static int testcase_combine(int (*task1)(struct task_param*),
+ int (*task2)(struct task_param*), struct task_param *param);
+
+static int testcase_base(struct test_setting test_setting)
+{
+ int status;
+ int pid;
+ int child[PROC_NUM];
+ int ret;
+ int pid_k2u;
+
+ setCore();
+	// initialize the semaphores
+ ret = init_sem();
+ if (ret < 0) {
+ pr_info("init sem failed");
+ return -1;
+ }
+
+	// create the share-pool groups
+ //ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ ret = create_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ return -1;
+ }
+
+	// start the worker process that performs k2u or alloc
+ pid_k2u = fork();
+ if (pid_k2u < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_k2u == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+			pr_info("worker process added groups successfully.");
+ }
+ exit(test_setting.task(test_setting.task_param));
+ }
+
+	// start the child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, deleting procs...");
+ goto delete_procs;
+ } else if (pid == 0) {
+			// child hangs until told to coredump
+ exit(child_process(i));
+ } else {
+ child[i] = pid;
+ //ret = wrap_add_group(pid, PROT, GROUP_ID);
+ ret = add_multi_groups(pid, GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", pid);
+ goto delete_procs;
+ }
+ }
+ }
+
+	// make the children coredump one by one
+ for (int i = 0; i < PROC_NUM; i++) {
+ pr_info("coredump process %d", child[i]);
+ sem_inc_by_one(semid[i]);
+ waitpid(child[i], &status, 0);
+ usleep(200000);
+ }
+
+	// let the worker process exit
+ sem_inc_by_one(sem_task);
+ waitpid(pid_k2u, &status, 0);
+
+ close_sem();
+ return 0;
+
+delete_procs:
+ return -1;
+}
+
+static int testcase_combine(int (*task1)(struct task_param*),
+ int (*task2)(struct task_param*), struct task_param *param)
+{
+ int status;
+ int pid;
+ int child[PROC_NUM];
+ int ret;
+ int pid_k2u, pid_alloc;
+
+ setCore();
+	// initialize the semaphores
+ ret = init_sem();
+ if (ret < 0) {
+ pr_info("init sem failed");
+ return -1;
+ }
+
+	// create the share-pool groups
+ //ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ ret = create_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ return -1;
+ }
+
+	// start the worker processes that perform k2u or alloc
+ pid_k2u = fork();
+ if (pid_k2u < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_k2u == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+			pr_info("worker process added groups successfully.");
+ }
+ exit(task1(param));
+ }
+
+ pid_alloc = fork();
+ if (pid_alloc < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_alloc == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+			pr_info("worker process added groups successfully.");
+ }
+ exit(task2(param));
+ }
+
+	// start the child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, deleting procs...");
+ goto delete_procs;
+ } else if (pid == 0) {
+			// child hangs until told to coredump
+ exit(child_process(i));
+ } else {
+ child[i] = pid;
+ //ret = wrap_add_group(pid, PROT, GROUP_ID);
+ ret = add_multi_groups(pid, GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", pid);
+ goto delete_procs;
+ }
+ }
+ }
+
+	// make the children coredump one by one
+ for (int i = 0; i < PROC_NUM; i++) {
+ pr_info("coredump process %d", child[i]);
+ sem_inc_by_one(semid[i]);
+ waitpid(child[i], &status, 0);
+ usleep(200000);
+ }
+
+	// let both worker processes exit
+ sem_inc_by_val(sem_task, 2);
+ waitpid(pid_k2u, &status, 0);
+ waitpid(pid_alloc, &status, 0);
+
+ close_sem();
+ return 0;
+
+delete_procs:
+ return -1;
+}
+
+static struct task_param task_param_table[] = {
+ {
+ .with_print = false,
+ },
+ {
+ .with_print = true,
+ },
+ {
+ .with_print = false,
+ },
+ {
+ .with_print = false,
+ },
+};
+
+static struct test_setting test_setting_table[] = {
+ {
+ .task_param = &task_param_table[0],
+ .task = k2u_unshare_task,
+ },
+ {
+ .task_param = &task_param_table[1],
+ .task = k2u_unshare_task,
+ },
+ {
+ .task_param = &task_param_table[2],
+ .task = k2u_continuous_task,
+ },
+ {
+ .task_param = &task_param_table[3],
+ .task = alloc_free_task,
+ },
+ {
+ .task_param = &task_param_table[3],
+ .task = alloc_continuous_task,
+ },
+ {
+ .task_param = &task_param_table[1],
+ .task = alloc_free_task,
+ },
+};
+
+/* testcase1
+ * run k2u_unshare_task
+ */
+static int testcase1(void)
+{
+ return testcase_base(test_setting_table[0]);
+}
+/* testcase2
+ * run k2u_unshare_task and print diagnostics
+ */
+static int testcase2(void)
+{
+ return testcase_base(test_setting_table[1]);
+}
+/* testcase3
+ * run k2u_continuous_task
+ */
+static int testcase3(void)
+{
+ return testcase_base(test_setting_table[2]);
+}
+/* testcase4
+ * run alloc_free_task
+ */
+static int testcase4(void)
+{
+ return testcase_base(test_setting_table[3]);
+}
+/* testcase5
+ * run alloc_continuous_task
+ */
+static int testcase5(void)
+{
+ return testcase_base(test_setting_table[4]);
+}
+/* testcase6
+ * run alloc_free_task and print diagnostics
+ */
+static int testcase6(void)
+{
+ return testcase_base(test_setting_table[5]);
+}
+/* testcase7
+ * run k2u_continuous_task and alloc_continuous_task together, printing diagnostics
+ */
+static int testcase7(void)
+{
+ return testcase_combine(k2u_continuous_task, alloc_continuous_task, &task_param_table[0]);
+}
+
+static int close_sem()
+{
+ int ret;
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ ret = sem_close(semid[i]);
+ if (ret < 0) {
+ pr_info("sem close failed");
+ return ret;
+ }
+ }
+ sem_close(sem_task);
+ pr_info("all sems deleted.");
+ return 0;
+}
+
+static int init_sem()
+{
+ int i = 0;
+
+ sem_task = sem_create(PROC_NUM, "sem_task");
+
+ for (i = 0; i < PROC_NUM; i++) {
+ key_t key = i;
+ semid[i] = sem_create(key, "sem_child");
+ if (semid[i] < 0) {
+ pr_info("semid %d init failed. errno: %d", i, errno);
+ goto delete_sems;
+ }
+ }
+ pr_info("all sems initialized.");
+ return 0;
+
+delete_sems:
+ for (int j = 0; j < i; j++) {
+ sem_close(semid[j]);
+ }
+ return -1;
+}
+
+static int child_process(int index)
+{
+ pr_info("child process %d created", getpid());
+	// coredump once signalled
+ sem_dec_by_one(semid[index]);
+ pr_info("child process %d coredump", getpid());
+ generateCoredump();
+ return 0;
+}
+
+/* k2u_unshare_task
+ * tight k2u -> unshare -> k2u -> unshare loop
+ */
+static int k2u_unshare_task(struct task_param *task_param)
+{
+ int ret;
+ int i;
+ struct vmalloc_info vmalloc_info;
+ struct sp_make_share_info k2u_info;
+ unsigned long uva[K2U_UNSHARE_TIME];
+
+ vmalloc_info.size = VMALLOC_SIZE;
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed.");
+ return -1;
+ } else {
+ pr_info("vmalloc %ld success.", vmalloc_info.size);
+ }
+
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = getpid();
+ k2u_info.spg_id = GROUP_ID;
+
+repeat:
+ memset(uva, 0, sizeof(unsigned long) * K2U_UNSHARE_TIME);
+ for (i = 0; i < K2U_UNSHARE_TIME; i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed at %d time.", i);
+ goto unshare;
+ } else {
+ pr_info("k2u success %d time, addr = %lx", i, k2u_info.addr);
+ uva[i] = k2u_info.addr;
+ }
+ }
+
+ if (task_param->with_print)
+ sharepool_print();
+
+unshare:
+ for (int j = 0; j < i; j++) {
+		pr_info("uva[%d] is %lx", j, uva[j]);
+ k2u_info.addr = uva[j];
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed at %d", j);
+ return -1;
+ }
+ }
+
+ if (sem_get_value(sem_task) == 0)
+ goto repeat;
+
+ ioctl_vfree(dev_fd, &vmalloc_info);
+
+ return 0;
+}
+
+/* k2u_continuous_task
+ * k2u many times in a row, then unshare them all
+ */
+static int k2u_continuous_task(struct task_param *task_param)
+{
+ int ret;
+ int i, h;
+ struct vmalloc_info vmalloc_info;
+ struct sp_make_share_info k2u_info;
+ unsigned long *uva;
+
+ vmalloc_info.size = VMALLOC_SIZE;
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed.");
+ return -1;
+ } else {
+ pr_info("vmalloc %ld success.", vmalloc_info.size);
+ }
+
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = getpid();
+ //k2u_info.spg_id = GROUP_ID;
+
+	uva = malloc(sizeof(unsigned long) * K2U_CONTINUOUS_TIME * GROUP_NUM);
+	if (!uva) {
+		pr_info("malloc failed");
+		return -1;
+	}
+
+	memset(uva, 0, sizeof(unsigned long) * K2U_CONTINUOUS_TIME * GROUP_NUM);
+ for (i = 0; i < K2U_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ k2u_info.spg_id = group_ids[h];
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed at %d time in group %d.", i, group_ids[h]);
+ goto unshare;
+ } else {
+ pr_info("k2u success %d time, addr = %lx", i, k2u_info.addr);
+ uva[i * GROUP_NUM + h] = k2u_info.addr;
+ }
+ }
+ }
+
+unshare:
+ for (int j = 0; j < min((i * GROUP_NUM + h), K2U_CONTINUOUS_TIME * GROUP_NUM); j++) {
+		pr_info("uva[%d] is %lx", j, uva[j]);
+ k2u_info.addr = uva[j];
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed at %d", j);
+ return -1;
+ }
+ }
+
+	ioctl_vfree(dev_fd, &vmalloc_info);
+	free(uva);
+
+	return 0;
+}
+
+/* alloc_free_task
+ * tight alloc -> free -> alloc -> free loop
+ */
+#define ALLOC_SIZE 4096
+#define ALLOC_FLAG 0
+static int alloc_free_task(struct task_param *task_param)
+{
+ int ret = 0;
+ int i, h;
+ unsigned long ret_addr = -1;
+ unsigned long addr[ALLOC_FREE_TIME][GROUP_NUM];
+
+repeat:
+ for (i = 0; i < ALLOC_FREE_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+			ret_addr = (unsigned long)wrap_sp_alloc(group_ids[h], ALLOC_SIZE, ALLOC_FLAG);
+			if (ret_addr == (unsigned long)-1) {
+				pr_info("alloc failed %d time %d group", i, h + 1);
+				return -1;
+			} else {
+				addr[i][h] = ret_addr;
+				pr_info("alloc success addr %lx", ret_addr);
+ }
+ }
+ }
+
+ if (task_param->with_print)
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_FREE_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret = wrap_sp_free(addr[i][h]);
+ if (ret < 0) {
+ pr_info("free failed %d time group %d", i, h + 1);
+ return ret;
+ }
+ }
+ }
+
+ if (sem_get_value(sem_task) == 0)
+ goto repeat;
+
+ return ret;
+}
+
+/* alloc_continuous_task
+ * alloc many times in a row, then free them all
+ */
+
+#define ALLOC_CONTINUOUS_TIME 2000
+static int alloc_continuous_task(struct task_param *task_param)
+{
+ int ret = 0;
+ int i, h;
+ unsigned long ret_addr = -1;
+	unsigned long addr[ALLOC_CONTINUOUS_TIME][GROUP_NUM];
+ for (i = 0; i < ALLOC_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+			ret_addr = (unsigned long)wrap_sp_alloc(group_ids[h], ALLOC_SIZE, ALLOC_FLAG);
+			if (ret_addr == (unsigned long)-1) {
+				pr_info("alloc failed %d time %d group", i, h + 1);
+				return -1;
+			} else {
+				addr[i][h] = ret_addr;
+				pr_info("alloc success addr %lx", ret_addr);
+ }
+ }
+ }
+ for (i = 0; i < ALLOC_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret = wrap_sp_free(addr[i][h]);
+ if (ret < 0) {
+ pr_info("free failed %d time group %d", i, h + 1);
+ return ret;
+ }
+ }
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "1")
+ TESTCASE_CHILD(testcase2, "2")
+ TESTCASE_CHILD(testcase3, "3")
+ TESTCASE_CHILD(testcase4, "4")
+ TESTCASE_CHILD(testcase5, "5")
+ TESTCASE_CHILD(testcase6, "6")
+ TESTCASE_CHILD(testcase7, "7")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_02_spg_not_alive.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_02_spg_not_alive.c
new file mode 100644
index 000000000000..0da0356b34d8
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_02_spg_not_alive.c
@@ -0,0 +1,166 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define GROUP_ID 1
+#define ALLOC_SIZE (4096UL)
+#define TEST_SIZE 10
+#define MAX_RETRY 10
+#define REPEAT 30000
+#define PROT (PROT_READ | PROT_WRITE)
+int semid_add, semid_exit, semid_create;
+int sem1, sem2;
+
+static int fault_verify(void)
+{
+	unsigned long addr;
+	int pid;
+
+	addr = (unsigned long)wrap_sp_alloc(GROUP_ID, ALLOC_SIZE, 0);
+	if (addr == (unsigned long)-1)
+		printf("fault verify --- alloc failed.\n");
+
+ for (int i = 0; i < TEST_SIZE; i++) {
+ pid = fork();
+ if (pid == 0) {
+ exit(wrap_add_group(getpid(), PROT, GROUP_ID));
+ }
+ }
+
+ printf("fault is verified!\n");
+ return 0;
+}
+
+static int child_process(int semid)
+{
+ int ret = 0;
+ int spg_id = 0;
+ int retry_time = 0;
+
+ printf("process %d created.\n", getpid());
+
+ sem_dec_by_one(semid);
+retry:
+ ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ if (errno == ENODEV || errno == ENOSPC) {
+ printf("process %d add group failed once, retry...\n", getpid());
+ errno = 0;
+ if (retry_time++ < MAX_RETRY)
+ goto retry;
+ }
+
+ retry_time = 0;
+ if (ret < 0) {
+ printf("process %d add group unexpected error, ret is %d\n",
+ getpid(), ret);
+ sem_dec_by_one(semid);
+ return -1;
+ }
+ printf("process %d add group success\n", getpid());
+
+ errno = 0;
+ spg_id = ioctl_find_first_group(dev_fd, getpid());
+ if (spg_id < 0 && errno == ENODEV) {
+ printf("fault is found.\n");
+ ret = fault_verify();
+ sem_dec_by_one(semid);
+ return -1;
+ }
+ if (spg_id != GROUP_ID) {
+ printf("unexpected find group fault %d\n", spg_id);
+ return -1;
+ }
+
+ sem_dec_by_one(semid);
+ // printf("process %d exit.\n", getpid());
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int status;
+ int sem;
+ pid_t first, prev, current;
+
+ semid_add = sem_create(2234, "spg not alive test add group");
+ semid_exit = sem_create(3234, "spg not alive test exit group");
+ semid_create = sem_create(4234, "create");
+
+ sem1 = sem_create(1234, "sem lock for prev process");
+ sem2 = sem_create(4321, "sem lock for new process");
+
+ sem_inc_by_one(sem1);
+ first = fork();
+ if (first < 0)
+ goto close_sem;
+ else if (first == 0)
+ exit(child_process(sem1));
+ prev = first;
+ sem_check_zero(sem1);
+ sem = sem2;
+
+ for (int i = 0; i < REPEAT; i++) {
+ current = fork();
+ if (current < 0) {
+ printf("fork failed.\n");
+ kill(prev, SIGKILL);
+ ret = -1;
+ goto close_sem;
+ } else if (current == 0) {
+ exit(child_process(sem));
+ }
+
+		sem_inc_by_one(sem1); // process 1 exits, process 2 joins the group
+		sem_inc_by_one(sem2); // process 2 joins the group, process 1 exits
+
+ sem_check_zero(sem1);
+ sem_check_zero(sem2);
+
+ waitpid(prev, &status, 0);
+
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("process %d exited unexpectedly, status %d", prev, status);
+ ret = -1;
+ goto end;
+ } else
+ pr_info("process %d exit", prev);
+ prev = current;
+
+ if (sem == sem1) {
+ sem = sem2;
+ } else if (sem == sem2) {
+ sem = sem1;
+ } else {
+ printf("unexpected error: weird sem value: %d\n", sem);
+ goto end;
+ }
+ }
+end:
+ kill(current, SIGKILL);
+ waitpid(current, &status, 0);
+close_sem:
+ sem_close(sem1);
+ sem_close(sem2);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "concurrent group join and group destruction")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_03_hugepage_rsvd.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_03_hugepage_rsvd.c
new file mode 100644
index 000000000000..a442ab98b45c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_03_hugepage_rsvd.c
@@ -0,0 +1,84 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define HP_SIZE (3UL * 1024UL * 1024UL) // 3M
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ int i;
+ if (addgroup() != 0) {
+ return -1;
+ }
+ struct sp_alloc_info alloc_infos[100];
+
+ for (i = 0; i < 4; i++) {
+ alloc_infos[i].flag = 3;
+ alloc_infos[i].size = HP_SIZE;
+ alloc_infos[i].spg_id = 1;
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0) {
+ pr_info("%dth testcase1 ioctl_alloc failed, errno: %d", i+1, errno);
+ goto out;
+ }
+ }
+
+	while (1); /* hold the allocations so the reserve accounting can be observed */
+out:
+ for (int j = 0; j < i; j++) {
+ ret = cleanup(&alloc_infos[j]);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ } else
+ pr_info("free %d success", j+1);
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "allocate huge pages repeatedly, then free them; reserve accounting must stay consistent")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_04_spg_add_del.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_04_spg_add_del.c
new file mode 100644
index 000000000000..f87c7fc4b8f6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_04_spg_add_del.c
@@ -0,0 +1,100 @@
+#include <stdio.h>
+#include <errno.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define PROC_NUM 2
+#define PRINT_NUM 3
+
+/*
+ * testcase1
+ * Test point: add a valid pid to a group.
+ * Expected: the join succeeds and returns the correct group id.
+ */
+
+static int add_del_child(int group_id)
+{
+ int ret = 0;
+
+ pr_info("child %d created", getpid());
+ while (1) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0)
+ pr_info("child %d add group failed, ret %d", getpid(), ret);
+
+ ret = wrap_del_from_group(getpid(), group_id);
+ if (ret < 0)
+ pr_info("child %d del from group failed, ret %d", getpid(), ret);
+ }
+
+ return 0;
+}
+
+static int print_child(void)
+{
+ while (1) {
+ sharepool_print();
+ sleep(2);
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int group_id = 1;
+ pid_t child[PROC_NUM];
+ pid_t printer[PRINT_NUM];
+
+ /*
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0) {
+ pr_info(" add group failed");
+ return -1;
+ }
+ */
+
+ pr_info("test 1");
+ for (int i = 0; i < PROC_NUM; i++)
+ FORK_CHILD_ARGS(child[i], add_del_child(group_id));
+
+ for (int i = 0; i < PRINT_NUM; i++)
+ FORK_CHILD_ARGS(printer[i], print_child());
+
+ sleep(30);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(child[i]);
+
+ for (int i = 0; i < PRINT_NUM; i++)
+ KILL_CHILD(printer[i]);
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "add a valid pid to a group; expect success and the correct group id")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_05_cgroup_limit.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_05_cgroup_limit.c
new file mode 100644
index 000000000000..7a4a2be9264a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_05_cgroup_limit.c
@@ -0,0 +1,76 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define ALLOC_NUM 2
+
+static int testcase1(void)
+{
+ int ret = 0;
+ void *addr;
+ unsigned long va[ALLOC_NUM];
+ int spg_id = 1;
+ unsigned long size = 1 * 1024UL * 1024UL * 1024UL;
+ int i;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0) {
+ pr_info("add group failed");
+ return -1;
+ }
+
+	// allocate ALLOC_NUM times; the final allocation is expected to fail, then free all memory
+	for (i = 0; i < ALLOC_NUM; i++) {
+		addr = wrap_sp_alloc(spg_id, size, 0);
+		if (addr == (void *)-1) {
+			pr_info("alloc %d time failed, errno: %d", i + 1, errno);
+			break;
+		} else {
+			pr_info("alloc %d time success, va: 0x%lx", i + 1, (unsigned long)addr);
+			va[i] = (unsigned long)addr;
+		}
+	}
+
+	// free the allocated memory
+	for (i--; i >= 0; i--) {
+		ret = wrap_sp_free_by_id(va[i], spg_id);
+ if (ret < 0) {
+			pr_info("free %d time failed, ret: %d", i + 1, ret);
+ return -1;
+ } else {
+ pr_info("free %d time success", i + 1);
+ }
+ }
+
+ sleep(1200);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "set the cgroup limit to N GB plus some MB; after N successful allocations the (N+1)th is expected to fail; free everything and hang to observe the remaining cgroup memory")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_06_clone.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_06_clone.c
new file mode 100644
index 000000000000..267112eff125
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_06_clone.c
@@ -0,0 +1,176 @@
+#define _GNU_SOURCE
+#include <sys/wait.h>
+#include <sys/utsname.h>
+#include <sched.h>
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/mman.h>
+
+#include "sharepool_lib.h"
+
+#define STACK_SIZE (1024 * 1024) /* Stack size for cloned child */
+
+/* case 1 */
+static int /* Start function for cloned child */
+childFunc_1(void *arg)
+{
+	printf("child finished\n");
+	return 0;
+}
+
+int testcase1(void)
+{
+ char *stack; /* Start of stack buffer */
+ char *stackTop; /* End of stack buffer */
+ pid_t pid;
+ int ret = 0;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+ if (ret != 1) {
+ printf("Add group failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ /* Allocate memory to be used for the stack of the child. */
+
+	stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
+	if (stack == MAP_FAILED) {
+		printf("mmap failed\n");
+		return -1;
+	}
+	stackTop = stack + STACK_SIZE; /* Assume stack grows downward */
+
+ pid = clone(childFunc_1, stackTop, CLONE_VM, NULL);
+ if (pid == -1)
+ printf("clone failed\n");
+
+ printf("clone() returned %jd\n", (intmax_t) pid);
+
+ return 0;
+}
+
+/* case 2 */
+static volatile int flag_2 = 0;
+static int /* start function for cloned child */
+childFunc_2(void *arg)
+{
+ while(!flag_2) {}
+
+ sleep(5);
+
+	printf("child finished\n");
+	return 0;
+}
+
+int testcase2(void)
+{
+ char *stack; /* start of stack buffer */
+ char *stacktop; /* end of stack buffer */
+ pid_t pid;
+ int ret = 0;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+ if (ret != 1) {
+ printf("Add group [1] failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ /* allocate memory to be used for the stack of the child. */
+	stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
+	if (stack == MAP_FAILED) {
+		printf("mmap failed\n");
+		return -1;
+	}
+	stacktop = stack + STACK_SIZE; /* assume stack grows downward */
+
+ pid = clone(childFunc_2, stacktop, CLONE_VM, NULL);
+ if (pid == -1)
+ printf("clone failed\n");
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 2);
+ if (ret != 2) {
+ printf("Add group [2] failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 3);
+ if (ret != 3) {
+ printf("Add group [3] failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ printf("clone() returned %jd\n", (intmax_t) pid);
+
+ flag_2 = 1;
+ printf("parent finished\n");
+
+ return 0;
+}
+
+/* case 3 */
+static volatile int flag_3 = 0;
+static int /* start function for cloned child */
+childFunc_3(void *arg)
+{
+ int ret = 0;
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 2);
+ if (ret == 2) {
+ printf("add group [2] should failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 3);
+ if (ret == 3) {
+ printf("add group [3] should failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ flag_3 = 1;
+
+	printf("child finished\n");
+	return 0;
+}
+
+int testcase3(void)
+{
+ char *stack; /* start of stack buffer */
+ char *stacktop; /* end of stack buffer */
+ pid_t pid;
+ int ret;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, 1);
+ if (ret != 1) {
+ printf("Add group [1] failed, ret is %d\n", ret);
+ return -1;
+ }
+
+ /* allocate memory to be used for the stack of the child. */
+	stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
+	if (stack == MAP_FAILED) {
+		printf("mmap failed\n");
+		return -1;
+	}
+	stacktop = stack + STACK_SIZE; /* assume stack grows downward */
+
+ pid = clone(childFunc_3, stacktop, CLONE_VM, NULL);
+ if (pid == -1)
+ printf("clone failed\n");
+
+ printf("clone() returned %jd\n", (intmax_t) pid);
+
+ while(!flag_3) {}
+
+ sleep(5);
+
+ printf("parent finished\n");
+
+ return 0;
+}
+
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "after joining a group, the process clones a child with CLONE_VM; parent and child exit normally")
+	TESTCASE_CHILD(testcase2, "after joining a group, the process clones a child with CLONE_VM; after the parent exits, the child joins more groups and exits normally")
+	TESTCASE_CHILD(testcase3, "after joining a group, the process clones a child with CLONE_VM; after the child exits, the parent joins more groups and exits normally")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_08_addr_offset.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_08_addr_offset.c
new file mode 100644
index 000000000000..cd19273fd058
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_08_addr_offset.c
@@ -0,0 +1,156 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 1
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+#define SPG_ID_AUTO 200000
+#define DAVINCI_IOCTL_VA_TO_PA 0xfff9
+#define DVPP_START (0x100000000000UL)
+#define DVPP_SIZE 0x400000000UL
+
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ } else {
+ ret = ag_info.spg_id;
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int testcase1(void)
+{
+	int ret;
+	int spg_id = 0;
+
+ spg_id = addgroup();
+ if (spg_id <= 0) {
+ pr_info("spgid <= 0, value: %d", spg_id);
+ return -1;
+ } else {
+ pr_info("spg id %d", spg_id);
+ }
+
+ struct sp_config_dvpp_range_info cdr_info = {
+ .start = DVPP_START,
+ .size = DVPP_SIZE,
+ .device_id = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_config_dvpp_range(dev_fd, &cdr_info);
+ if (ret < 0) {
+ pr_info("dvpp config failed. errno: %d", errno);
+ return ret;
+ } else
+ pr_info("dvpp config success.");
+
+	struct sp_alloc_info alloc_infos[] = {
+		{
+			.flag = 1, // normal huge page
+			.size = PAGE_NUM * PMD_SIZE,
+			.spg_id = spg_id,
+		},
+		{
+			.flag = 1, // normal huge page
+			.size = PAGE_NUM * PMD_SIZE,
+			.spg_id = spg_id,
+		},
+		{
+			.flag = 5, // DVPP huge page
+			.size = PAGE_NUM * PMD_SIZE,
+			.spg_id = spg_id,
+		},
+		{
+			.flag = 1, // normal huge page
+			.size = PAGE_NUM * PMD_SIZE,
+			.spg_id = spg_id,
+		},
+		{
+			.flag = 5, // DVPP huge page
+			.size = PAGE_NUM * PMD_SIZE,
+			.spg_id = spg_id,
+		},
+	};
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_infos[i]);
+ if (ret != 0) {
+ pr_info("testcase1 ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("alloc success, type: %s, va: %lx",
+ alloc_infos[i].flag == 5 ? "SP_DVPP" : "NORMAL",
+ alloc_infos[i].addr);
+ }
+
+	/* memset() returns void *, so its result is not an error code */
+	memset((void *)(alloc_infos[2].addr), 'b', alloc_infos[2].size);
+	memset((void *)(alloc_infos[0].addr), 'a', alloc_infos[0].size);
+
+	/* if the DVPP buffer still reads 'b', it does not alias the normal buffer */
+	char *p;
+	p = (char *)(alloc_infos[2].addr);
+	if (*p != 'a') {
+		pr_info("pa not same. char: %c", *p);
+		ret = 0;
+	} else {
+		pr_info("pa same. char: %c", *p);
+		ret = -1;
+	}
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++)
+ cleanup(&alloc_infos[i]);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "verify that dvpp addresses do not overlap with normal alloc addresses")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_09_spg_del_exit.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_09_spg_del_exit.c
new file mode 100644
index 000000000000..73b3d07992e9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_09_spg_del_exit.c
@@ -0,0 +1,150 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define GROUP_ID 1
+#define ALLOC_SIZE (4096UL)
+#define TEST_SIZE 10
+#define MAX_RETRY 10
+#define REPEAT 30000
+#define PROT (PROT_READ | PROT_WRITE)
+int semid_add, semid_exit, semid_create;
+int sem1, sem2;
+
+static int fault_verify(void)
+{
+ int ret;
+ int test_pid[TEST_SIZE];
+ int pid;
+
+ ret = wrap_sp_alloc(GROUP_ID, ALLOC_SIZE, 0);
+ if (ret < 0) {
+ printf("fault verify --- alloc failed.\n");
+ }
+
+ for (int i = 0; i < TEST_SIZE; i++) {
+ pid = fork();
+ if (pid == 0) {
+ exit(wrap_add_group(getpid(), PROT, GROUP_ID));
+ }
+ }
+
+ printf("fault is verified!\n");
+ return 0;
+}
+
+static int child_process(int semid)
+{
+ int ret = 0;
+ int spg_id = 0;
+ int retry_time = 0;
+
+ printf("process %d created.\n", getpid());
+
+ while (1) {
+ sem_dec_by_one(semid);
+retry:
+		ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+		if (ret < 0 && (errno == ENODEV || errno == ENOSPC)) {
+ printf("process %d add group failed once, retry...\n", getpid());
+ errno = 0;
+ if (retry_time++ < MAX_RETRY)
+ goto retry;
+ }
+
+ retry_time = 0;
+ if (ret < 0) {
+ printf("process %d add group unexpected error, ret is %d\n",
+ getpid(), ret);
+ sem_dec_by_one(semid);
+ return -1;
+ }
+ printf("process %d add group%d success!\n", getpid(), GROUP_ID);
+
+ errno = 0;
+ spg_id = ioctl_find_first_group(dev_fd, getpid());
+ if (spg_id < 0 && errno == ENODEV) {
+ printf("fault is found.\n");
+ ret = fault_verify();
+ sem_dec_by_one(semid);
+ return -1;
+ }
+ if (spg_id != GROUP_ID) {
+ printf("unexpected find group fault %d\n", spg_id);
+ return -1;
+ }
+
+ sem_dec_by_one(semid);
+ ret = wrap_del_from_group(getpid(), GROUP_ID);
+ if (ret < 0)
+ pr_info("del failed!");
+ else
+ pr_info("process %d del from group%d success!", getpid(), GROUP_ID);
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+	int ret = 0;
+	pid_t first, second;
+
+ semid_add = sem_create(2234, "spg not alive test add group");
+ semid_exit = sem_create(3234, "spg not alive test exit group");
+ semid_create = sem_create(4234, "create");
+
+ sem1 = sem_create(1234, "sem lock for first process");
+ sem2 = sem_create(4321, "sem lock for second process");
+
+ sem_inc_by_one(sem1);
+	first = fork();
+	if (first < 0) {
+		ret = -1;
+		goto close_sem;
+	} else if (first == 0)
+		exit(child_process(sem1));
+ sem_check_zero(sem1);
+
+	second = fork();
+	if (second < 0) {
+		ret = -1;
+		goto close_sem;
+	} else if (second == 0)
+		exit(child_process(sem2));
+ sem_check_zero(sem2);
+
+	for (int i = 0; i < REPEAT; i++) {
+		sem_inc_by_one(sem1); // process 1 leaves the group
+		sem_inc_by_one(sem2); // process 2 joins the group
+
+		sem_check_zero(sem1); // process 1 joins the group
+		sem_check_zero(sem2); // process 2 leaves the group
+	}
+
+	kill(first, SIGKILL);
+	kill(second, SIGKILL);
+close_sem:
+ sem_close(sem1);
+ sem_close(sem2);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "concurrent group join and leave")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_10_walk_page_range_AA_lock.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_10_walk_page_range_AA_lock.c
new file mode 100644
index 000000000000..db70e7cf6718
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_10_walk_page_range_AA_lock.c
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2023. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri May 19 16:06:03 2023
+ */
+#include <sys/ioctl.h>
+#include <sys/syscall.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+
+#include "sharepool_lib.h"
+
+#define TEST_SIZE 0x200000
+#define NUM 200
+
+/*
+ * When sharepool was adapted to the 5.10 kernel a bug was introduced:
+ * if memory being walked by mg_sp_walk_page_range is undergoing page
+ * migration at the same time, an AA deadlock is triggered because the
+ * page table lock is taken twice.
+ */
+static int case1()
+{
+ int err = 0;
+ int i, count = NUM;
+ unsigned long *addr[NUM] = {0};
+
+ for (i = 0; i < count; i++) {
+ addr[i] = wrap_sp_alloc(SPG_ID_DEFAULT, TEST_SIZE, 0);
+ if (addr[i] == (void *)-1) {
+ printf("ioctl alloc failed, %s.\n", strerror(errno));
+ count = i;
+ err = -1;
+ goto out;
+ }
+ }
+ printf("memory allocation done.\n");
+
+	// free every other buffer first: this fragments memory and raises
+	// the probability of page migration during the walk
+ for (i = 0; i < count; i += 2) {
+ wrap_sp_free(addr[i]);
+ addr[i] = NULL;
+ }
+
+ for (i = 0; i < count; i++) {
+ if (!addr[i])
+ continue;
+
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = addr[i],
+ .size = TEST_SIZE,
+ };
+ err = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (err < 0) {
+			pr_info("ioctl_walk_page_range failed, errno: %d", errno);
+ err = -1;
+ goto out;
+ }
+
+ ioctl_walk_page_free(dev_fd, &wpr_info);
+ }
+	printf("walk_page_range done\n");
+
+out:
+ for (i = 0; i < count; i++) {
+ if (addr[i])
+ wrap_sp_free(addr[i]);
+ }
+ printf("memory free done.\n");
+
+ return err;
+}
+
+static int case1_child(void)
+{
+ int i = 1;
+
+ while (1) {
+ printf("memory compact start: %d\n", i++);
+ system("echo 1 > /proc/sys/vm/compact_memory");
+ sleep(1);
+ }
+
+ return 0;
+}
+
+static int testcase1()
+{
+ int ret = 0;
+ pid_t pid;
+
+ FORK_CHILD_ARGS(pid, case1_child());
+
+ for (int i = 0; i < 100; i++) {
+ printf("loop count: %d\n", i);
+ ret = case1();
+ if (ret < 0)
+ break;
+ }
+
+ KILL_CHILD(pid);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "loop: allocate memory, mg_sp_walk_page_range, free memory, with memory compaction running in the background")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/dts_bugfix_test/test_dvpp_readonly.c b/tools/testing/sharepool/testcase/dts_bugfix_test/test_dvpp_readonly.c
new file mode 100644
index 000000000000..02d3161fb506
--- /dev/null
+++ b/tools/testing/sharepool/testcase/dts_bugfix_test/test_dvpp_readonly.c
@@ -0,0 +1,71 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ int ret = 0;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_PROT_RO,
+ .size = 40960,
+ .spg_id = 1,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = mprotect(alloc_info.addr, 40960, PROT_WRITE);
+ if (ret) {
+ pr_info("mprotect failed, %d, %d\n", ret, errno);
+ return ret;
+ }
+ memset(alloc_info.addr, 0, alloc_info.size);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "alloc SP_PROT_RO memory, mprotect it writable, then write")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/Makefile b/tools/testing/sharepool/testcase/function_test/Makefile
new file mode 100644
index 000000000000..0d4d24db842a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/Makefile
@@ -0,0 +1,36 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+dvpp?=true
+
+ifeq ($(dvpp),true)
+ testcases:=test_two_user_process \
+ test_dvpp_pass_through \
+ test_u2k test_k2u \
+ test_alloc_free_two_process \
+ test_mm_mapped_to_multi_groups \
+ test_sp_ro \
+ test_alloc_readonly \
+ test_dvpp_multi_16G_alloc \
+ test_dvpp_readonly \
+ test_dvpp_multi_16G_k2task \
+ test_non_dvpp_group \
+ test_hugetlb_alloc_hugepage
+else
+ testcases:=test_two_user_process \
+ test_dvpp_pass_through \
+ test_u2k test_k2u \
+ test_alloc_free_two_process \
+ test_mm_mapped_to_multi_groups \
+ test_sp_ro
+endif
+
+default: $(testcases)
+
+install:
+ mkdir -p $(TOOL_BIN_DIR)/function_test
+ cp $(testcases) $(TOOL_BIN_DIR)/function_test
+ cp function_test.sh $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf $(testcases)
+
diff --git a/tools/testing/sharepool/testcase/function_test/function_test.sh b/tools/testing/sharepool/testcase/function_test/function_test.sh
new file mode 100755
index 000000000000..dc49a9cb2e0b
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/function_test.sh
@@ -0,0 +1,32 @@
+#!/bin/sh
+
+for line in test_two_user_process \
+ test_alloc_free_two_process \
+ test_mm_mapped_to_multi_groups \
+ test_alloc_readonly \
+ test_dvpp_pass_through \
+ test_u2k \
+ test_k2u \
+ test_dvpp_multi_16G_alloc \
+ test_dvpp_multi_16G_k2task \
+ test_non_dvpp_group \
+ test_dvpp_readonly
+do
+ ./function_test/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase function_test/$line failed"
+ exit 1
+ fi
+ cat /proc/meminfo
+ free -m
+done
+
+#echo 100 > /proc/sys/vm/nr_hugepages
+#line=test_hugetlb_alloc_hugepage
+#./function_test/$line
+#if [ $? -ne 0 ] ;then
+# echo "testcase function_test/$line failed"
+# echo 0 > /proc/sys/vm/nr_hugepages
+# exit 1
+#fi
+#echo 0 > /proc/sys/vm/nr_hugepages
diff --git a/tools/testing/sharepool/testcase/function_test/test_alloc_free_two_process.c b/tools/testing/sharepool/testcase/function_test/test_alloc_free_two_process.c
new file mode 100644
index 000000000000..263821bee137
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_alloc_free_two_process.c
@@ -0,0 +1,303 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Two groups with two processes each, all allocating and freeing
+ * memory repeatedly at the same time.
+ *
+ * Semaphores provide the synchronization: a process sleeps right after
+ * creation until its parent signals it.
+ */
+
+#define NR_GROUP 100
+#define MAX_PROC_PER_GRP 100
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int alloc_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg, sem_t *child_sync, sem_t *grandchild_sync)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+ struct sp_alloc_info *alloc_infos = malloc(sizeof(*alloc_infos) * alloc_num);
+ if (!alloc_infos) {
+ pr_local_info("malloc failed");
+ return -1;
+ }
+
+	/* wait until the parent has added us to the group */
+ do {
+ ret = sem_wait(child_sync);
+ } while ((ret != 0) && errno == EINTR);
+
+ sleep(1); // it seems sem_wait doesn't work as expected
+ pr_local_info("start!!, ret is %d, errno is %d", ret, errno);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_group_by_pid failed, %d", group_id);
+ goto error_out;
+ }
+
+	for (int i = 0; i < alloc_num; i++) {
+		alloc_infos[i].flag = 0;
+		alloc_infos[i].spg_id = group_id;
+		alloc_infos[i].size = alloc_size;
+		/* sp_alloc */
+		ret = ioctl_alloc(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ goto error_out;
+ } else {
+ if (IS_ERR_VALUE(alloc_infos[i].addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_infos[i].addr);
+ goto error_out;
+ }
+ }
+
+ memset((void *)alloc_infos[i].addr, 'z', alloc_infos[i].size);
+ }
+
+ sem_post(grandchild_sync);
+ do {
+ ret = sem_wait(child_sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl free failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ pr_local_info("exit!!");
+ return 0;
+
+error_out:
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ return -1;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ sem_t *child_sync[MAX_PROC_PER_GRP] = {0};
+ sem_t *grandchild_sync[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+	// create syncs for the grandchildren
+	for (int i = 0; i < process_per_group; i++) {
+		char buf[100];
+		sprintf(buf, "/sharepool_grandchild_sync%d", i);
+		/* the third sem_open() argument is the mode, not an open flag */
+		grandchild_sync[i] = sem_open(buf, O_CREAT, 0600, 0);
+		if (grandchild_sync[i] == SEM_FAILED) {
+			pr_local_info("grandchild sem_open failed");
+			return -1;
+		}
+		sem_unlink(buf);
+	}
+
+	// create syncs for the children
+	for (int i = 0; i < process_per_group; i++) {
+		char buf[100];
+		sprintf(buf, "/sharepool_child_sync%d", i);
+		child_sync[i] = sem_open(buf, O_CREAT, 0600, 0);
+		if (child_sync[i] == SEM_FAILED) {
+			pr_local_info("child sem_open failed");
+			return -1;
+		}
+		sem_unlink(buf);
+	}
+
+	// fork the grandchildren and add each of them to the group
+	for (int i = 0; i < process_per_group; i++) {
+		int num = arg * MAX_PROC_PER_GRP + i;
+		pid_t pid = fork();
+		if (pid < 0) {
+			pr_local_info("fork failed\n");
+			exit(-1);
+		} else if (pid == 0) {
+ ret = grandchild_process(num, child_sync[i], grandchild_sync[i]);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto error_out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+			/* tell the grandchild it has been added to the group */
+ sem_post(child_sync[i]);
+ }
+ }
+
+ for (int i = 0; i < process_per_group; i++)
+ do {
+ ret = sem_wait(grandchild_sync[i]);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < process_per_group; i++)
+ sem_post(child_sync[i]);
+ pr_local_info("grandchild-processes start to do sp_free");
+
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+ ret = -1;
+ }
+ }
+ pr_local_info("exit!!");
+ return ret;
+
+error_out:
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+	}
+ pr_local_info("exit!!");
+ return -1;
+#undef pr_local_info
+}
+
+static void print_help(void)
+{
+	printf("Usage:\n");
+	printf("  -p <num>   processes per group (1..%d)\n", MAX_PROC_PER_GRP);
+	printf("  -g <num>   number of groups (1..%d)\n", NR_GROUP);
+	printf("  -n <num>   allocations per process (1..100000)\n");
+	printf("  -s <size>  allocation size in bytes\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > 100000 || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "two groups with two processes each allocate and free memory at the same time, basic verification")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_alloc_readonly.c b/tools/testing/sharepool/testcase/function_test/test_alloc_readonly.c
new file mode 100644
index 000000000000..3278cbbb2a0e
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_alloc_readonly.c
@@ -0,0 +1,588 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 23 02:17:32 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include <pthread.h>
+#include "sharepool_lib.h"
+
+#define GROUP_ID 1
+
+static bool is_addr_ro_range(unsigned long addr)
+{
+ return addr >= MMAP_SHARE_POOL_RO_START && addr < MMAP_SHARE_POOL_DVPP_START;
+}
+
+static jmp_buf sigsegv_env;
+static void sigsegv_handler(int num)
+{
+ pr_info("SIGSEGV 11 received.");
+ longjmp(sigsegv_env, 1);
+}
+
+static unsigned long alloc_readonly(int spg_id, unsigned long size, unsigned long flag)
+{
+ unsigned long va;
+ unsigned sp_flag = flag | SP_PROT_FOCUS | SP_PROT_RO;
+ void *addr = wrap_sp_alloc(spg_id, size, sp_flag);
+ if (addr == (void *)-1) {
+ pr_info("alloc read only memory failed, size %lx, flag %lx",
+ size, sp_flag);
+ return -1;
+ }
+
+ va = (unsigned long)addr;
+ if (!is_addr_ro_range(va)) {
+ pr_info("address not in read only range. %lx", va);
+ return -1;
+ }
+
+ return va;
+}
+
+int GROUP_TYPE[] = {
+ 1,
+ SPG_ID_AUTO,
+ SPG_ID_DEFAULT,
+};
+
+bool ALLOC_FLAG[] = {
+ false,
+ true,
+};
+
+unsigned long ALLOC_TYPE[] = {
+ SP_PROT_FOCUS | SP_PROT_RO,
+ 0,
+ SP_DVPP,
+};
+
+static int test(int spg_id, bool is_hugepage)
+{
+ int ret = 0;
+ unsigned long sp_flag = 0;
+ unsigned long size = PMD_SIZE;
+ void *addr;
+ char *caddr;
+ unsigned long uva, kva;
+ struct sigaction sa = {0};
+ sa.sa_handler = sigsegv_handler;
+ sa.sa_flags |= SA_NODEFER;
+ sigaction(SIGSEGV, &sa, NULL);
+
+	/* allocate read-only memory */
+ sp_flag |= SP_PROT_FOCUS;
+ sp_flag |= SP_PROT_RO;
+ if (is_hugepage)
+ sp_flag |= SP_HUGEPAGE;
+ pr_info("sp_flag: %lx, group_id:%d", sp_flag, spg_id);
+
+ addr = wrap_sp_alloc(spg_id, size, sp_flag);
+	if ((unsigned long)addr == -1) {
+		pr_info("alloc readonly memory failed, errno %d", errno);
+		return -1;
+	}
+
+	/* check the address falls in the read-only range */
+	pr_info("address is %lx", (unsigned long)addr);
+	if (!is_addr_ro_range((unsigned long)addr)) {
+ pr_info("address not in read only range.");
+ return -1;
+ }
+
+	/* try a direct read; the expected result is unclear, a SIGSEGV is possible */
+ caddr = (char *)addr;
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+ pr_info("value at addr[0] is %d", (int)caddr[0]);
+ pr_info("read success expected.");
+ }
+
+	/* try to write; expect signal 11 (SIGSEGV) */
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+ memset(caddr, 'A', size);
+ pr_info("memset success unexpected.");
+ return -1;
+ }
+ pr_info("memset failed expected.");
+
+	/* u2k, then let the kernel write */
+ uva = (unsigned long)addr;
+ kva = wrap_u2k(uva, size);
+ if (!kva) {
+ pr_info("u2k failed, errno %d", errno);
+ return -1;
+ }
+ KAREA_ACCESS_SET('A', kva, size, out);
+ pr_info("kernel write success");
+ KAREA_ACCESS_CHECK('A', kva, size, out);
+ pr_info("kernel read success");
+
+	/* the user process tries to read again */
+	for (int i = 0; i < size; i++) {
+ if (caddr[i] != 'A') {
+ pr_info("caddr[%d] is %c, not %c", i, caddr[i], 'A');
+ return -1;
+ }
+ }
+ pr_info("user read success");
+
+	/* try writing again; expect signal 11 (SIGSEGV) */
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+ memset(caddr, 'A', size);
+ pr_info("memset success unexpected.");
+ return -1;
+ }
+ pr_info("memset failed expected.");
+
+	/* unshare the uva from the kernel */
+ if (wrap_unshare(kva, size) < 0) {
+ pr_info("unshare failed");
+ return -1;
+ }
+
+	/* free the uva */
+ if (wrap_sp_free_by_id(uva, spg_id) < 0) {
+ pr_info("free failed");
+ return -1;
+ }
+ pr_info("free success");
+
+ return 0;
+out:
+ return ret;
+
+}
+
+static int testcase1(void)
+{
+ int i, j;
+
+ for (i = 0; i < sizeof(GROUP_TYPE) / sizeof(GROUP_TYPE[0]); i++) {
+		/* join the group with read/write permission */
+ int group_id = GROUP_TYPE[i];
+ if (group_id != SPG_ID_DEFAULT) {
+ int ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0) {
+ pr_info("add group failed, errno %d", errno);
+ return ret;
+ }
+ group_id = ret;
+ }
+
+ for (j = 0; j < sizeof(ALLOC_FLAG) / sizeof(ALLOC_FLAG[0]); j++)
+ if (test(group_id, ALLOC_FLAG[j]))
+ goto out;
+ }
+
+ return 0;
+out:
+	pr_info("test failed for group id %d type %s",
+		GROUP_TYPE[i], ALLOC_FLAG[j] ? "hugepage" : "normal page");
+	return -1;
+}
+
+#define PROC_NUM 10
+static unsigned long UVA[2];
+static int tc2_child(int idx)
+{
+ int ret = 0;
+ unsigned long size = 10 * PMD_SIZE;
+ struct sigaction sa = {0};
+ sa.sa_handler = sigsegv_handler;
+ sa.sa_flags |= SA_NODEFER;
+ sigaction(SIGSEGV, &sa, NULL);
+
+	/* the child joins the group */
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+	/* try to write the memory */
+	for (int i = 0; i < sizeof(UVA) / sizeof(UVA[0]); i++) {
+		ret = setjmp(sigsegv_env);
+		if (!ret) {
+			memset((void *)UVA[i], 'A', size);
+			pr_info("child process write success unexpected");
+			return -1;
+		}
+		pr_info("child process %d write %s failed expected.",
+			idx, ALLOC_FLAG[i] ? "huge page" : "normal page");
+	}
+
+ return 0;
+}
+static int testcase2(void)
+{
+ int ret = 0;
+ unsigned long size = 10 * PMD_SIZE;
+ unsigned long uva;
+ pid_t child[PROC_NUM];
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+	/* allocate read-only memory: normal page and huge page */
+ for (int i = 0; i < sizeof(ALLOC_FLAG) / sizeof(ALLOC_FLAG[0]); i++) {
+ uva = alloc_readonly(GROUP_ID, size, ALLOC_FLAG[i]);
+ if (uva == -1) {
+ pr_info("alloc uva size %lx, flag %lx failed", size, ALLOC_FLAG[i]);
+ return -1;
+ }
+ UVA[i] = uva;
+ }
+
+	/* fork children that join the group */
+ for (int i = 0; i < PROC_NUM; i++) {
+ FORK_CHILD_ARGS(child[i], tc2_child(i));
+ }
+
+	/* reap the children */
+ for (int i = 0; i < PROC_NUM; i++) {
+ WAIT_CHILD_STATUS(child[i], out);
+ }
+
+	/* free the memory */
+	for (int i = 0; i < sizeof(UVA) / sizeof(UVA[0]); i++) {
+		if (wrap_sp_free_by_id(UVA[i], GROUP_ID) < 0) {
+ pr_info("free uva[%d] failed", i);
+ return -1;
+ }
+ }
+
+ return 0;
+out:
+ return ret;
+}
+
+#define REPEAT 20
+
+static void *thread_alloc_rdonly(void *arg)
+{
+	int spg_id = (int)(long)arg;
+	unsigned long size = PMD_SIZE;
+	/* allocate 2M of read-only memory, normal page and huge page, REPEAT times;
+	 * alloc_readonly() returns unsigned long, so compare against -1, not < 0 */
+	for (int i = 0; i < REPEAT; i++) {
+		if (alloc_readonly(spg_id, size, 0) == (unsigned long)-1 ||
+		    alloc_readonly(spg_id, size, SP_HUGEPAGE) == (unsigned long)-1)
+			return (void *)-1;
+		if (i % 10 == 0)
+			pr_info("%dMB RDONLY memory allocated in group %d", i * 4, spg_id);
+	}
+	return (void *)0;
+}
+
+static void *thread_alloc_normal(void *arg)
+{
+	int spg_id = (int)(long)arg;
+	unsigned long size = PMD_SIZE;
+	/* allocate 2M of normal memory, normal page and huge page, REPEAT times */
+	for (int i = 0; i < REPEAT; i++) {
+		if (wrap_sp_alloc(spg_id, size, 0) == (void *)-1 ||
+		    wrap_sp_alloc(spg_id, size, SP_HUGEPAGE) == (void *)-1)
+			return (void *)-1;
+		if (i % 10 == 0)
+			pr_info("%dMB memory allocated in group %d", i * 4, spg_id);
+	}
+	return (void *)0;
+}
+
+static void *thread_alloc_dvpp(void *arg)
+{
+	int spg_id = (int)(long)arg;
+	unsigned long size = PMD_SIZE;
+	/* allocate 2M of dvpp memory, normal page and huge page, REPEAT times */
+	for (int i = 0; i < REPEAT; i++) {
+		if (wrap_sp_alloc(spg_id, size, SP_DVPP) == (void *)-1 ||
+		    wrap_sp_alloc(spg_id, size, SP_DVPP | SP_HUGEPAGE) == (void *)-1)
+			return (void *)-1;
+		if (i % 10 == 0)
+			pr_info("%dMB dvpp memory allocated in group %d", i * 4, spg_id);
+	}
+	return (void *)0;
+}
+
+void * (*thread_func[]) (void *) = {
+ thread_alloc_rdonly,
+ thread_alloc_normal,
+ thread_alloc_dvpp,
+};
+static int testcase3(void)
+{
+ int ret = 0;
+ pthread_t threads[3];
+ void *pret;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+	for (int i = 0; i < ARRAY_SIZE(thread_func); i++) {
+		ret = pthread_create(threads + i, NULL, thread_func[i], (void *)(long)GROUP_ID);
+		if (ret) {
+			pr_info("pthread create failed.");
+			return -1;
+		}
+	}
+
+	for (int i = 0; i < ARRAY_SIZE(threads); i++) {
+		ret = pthread_join(threads[i], &pret);
+		if (ret) {
+			pr_info("pthread join failed.");
+			return -1;
+		}
+		if (pret == (void *)-1)
+			pr_info("thread %d failed", i);
+	}
+
+ pr_info("threads allocating different memory from group success!");
+
+	for (int i = 0; i < ARRAY_SIZE(thread_func); i++) {
+		ret = pthread_create(threads + i, NULL, thread_func[i], (void *)0);
+		if (ret) {
+			pr_info("pthread create failed.");
+			return -1;
+		}
+	}
+
+	for (int i = 0; i < ARRAY_SIZE(threads); i++) {
+		ret = pthread_join(threads[i], &pret);
+		if (ret) {
+			pr_info("pthread join failed.");
+			return -1;
+		}
+		if (pret == (void *)-1)
+			pr_info("thread %d failed", i);
+	}
+
+ pr_info("threads allocating different memory pass-through success!");
+
+ return 0;
+}
+
+static int testcase4(void)
+{
+ int ret = 0;
+ unsigned long size = 4UL * PMD_SIZE;
+ unsigned long va;
+ int count = 0;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+ while (1) {
+ va = alloc_readonly(GROUP_ID, size, 0);
+ if (va == -1) {
+ pr_info("alloc 4M memory %dth time failed.", count + 1);
+ return -1;
+ }
+ count++;
+ if (count % 100 == 0)
+ pr_info("memory allocated %dMB", 8 * count);
+ }
+
+ return 0;
+}
+
+static int testcase5(void)
+{
+ int ret = 0;
+ unsigned long size = PMD_SIZE;
+ unsigned long sp_flag = SP_DVPP | SP_PROT_RO | SP_PROT_FOCUS;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+ if (wrap_sp_alloc(GROUP_ID, PMD_SIZE, sp_flag) == (void *)-1) {
+ pr_info("alloc for dvpp readonly memory failed as expected");
+ } else {
+ pr_info("alloc for dvpp readonly memory success unexpected");
+ ret = -1;
+ }
+
+ return ret;
+}
+
+static int testcase6(void)
+{
+ int ret = 0;
+ unsigned long size = PMD_SIZE;
+ unsigned long sp_flag = SP_PROT_RO | SP_PROT_FOCUS;
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID) < 0)
+ return -1;
+
+ if (wrap_sp_alloc(GROUP_ID, PMD_SIZE, sp_flag) == (void *)-1) {
+ pr_info("alloc for dvpp readonly memory failed unexpected");
+ return -1;
+ }
+
+ sleep(1);
+ sharepool_print();
+
+ return ret;
+
+}
+
+#define RD_NUM 10
+#define WR_NUM 10
+static int testcase7(void)
+{
+ int ret = 0;
+ int spg_id = 1;
+ unsigned long sp_flag = 0;
+ unsigned long size = PMD_SIZE;
+ void *addr;
+ void *addr_rd[RD_NUM];
+ void *addr_wr[WR_NUM];
+ char *caddr;
+ int count = 0;
+ unsigned long uva, kva;
+ struct sigaction sa = {0};
+ bool is_hugepage = true;
+ pid_t pid;
+
+ sa.sa_handler = sigsegv_handler;
+ sa.sa_flags |= SA_NODEFER;
+ sigaction(SIGSEGV, &sa, NULL);
+
+	/* process A joins the group with read/write permission */
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add group failed, errno %d", errno);
+ return ret;
+ }
+
+	/* Allocate N read-only regions */
+ sp_flag |= SP_PROT_FOCUS;
+ sp_flag |= SP_PROT_RO;
+ if (is_hugepage)
+ sp_flag |= SP_HUGEPAGE;
+ pr_info("sp_flag: %lx", sp_flag);
+
+ for (int i = 0; i < RD_NUM; i++) {
+ addr = wrap_sp_alloc(spg_id, size, sp_flag);
+ if ((unsigned long)addr == -1) {
+ pr_info("alloc readonly memory failed, errno %d", errno);
+ return ret;
+ }
+		/* Check the address falls inside the read-only range */
+		pr_info("address is %p", addr);
+ if (!is_addr_ro_range((unsigned long)addr)) {
+ pr_info("address not in read only range.");
+ return -1;
+ }
+ addr_rd[i] = addr;
+
+ caddr = (char *)addr;
+		/* Try to write; expect SIGSEGV (signal 11) */
+		ret = setjmp(sigsegv_env);
+		if (!ret) {
+			memset(caddr, 'A', size);
+			pr_info("memset succeeded unexpectedly.");
+			return -1;
+		}
+		pr_info("memset failed as expected.");
+ }
+
+	/* Allocate N read-write regions */
+ sp_flag = 0;
+ if (is_hugepage)
+ sp_flag |= SP_HUGEPAGE;
+ pr_info("sp_flag: %lx", sp_flag);
+
+ for (int i = 0; i < WR_NUM; i++) {
+ addr = wrap_sp_alloc(spg_id, size, sp_flag);
+ if ((unsigned long)addr == -1) {
+ pr_info("alloc wr memory failed, errno %d", errno);
+ return ret;
+ }
+ addr_wr[i] = addr;
+
+		/* Write; expected to succeed */
+ caddr = (char *)addr;
+ memset(caddr, 'Q', size);
+ }
+
+	/* Process B joins the group normally */
+ pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add child to group failed");
+ exit(-1);
+ }
+
+ sigaction(SIGSEGV, &sa, NULL);
+
+		for (int i = 0; i < RD_NUM; i++) {
+			ret = setjmp(sigsegv_env);
+			if (!ret) {
+				memset(addr_rd[i], 'B', size);
+				pr_info("memset succeeded unexpectedly.");
+				exit(-1);
+			}
+			pr_info("memset on RD area %d failed as expected.", i);
+ }
+
+ sharepool_print();
+
+ for (int i = 0; i < WR_NUM; i++) {
+ ret = setjmp(sigsegv_env);
+ if (!ret) {
+				pr_info("about to memset addr %p", addr_wr[i]);
+				memset(addr_wr[i], 'B', size);
+				pr_info("memset on WR area %d succeeded as expected.", i);
+			} else {
+				pr_info("memset on WR area %d failed unexpectedly.", i);
+ exit(-1);
+ }
+ }
+
+ exit(0);
+ }
+
+ ret = 0;
+ WAIT_CHILD_STATUS(pid, out);
+
+out:
+	return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Process joins the group with RW permission and allocates read-only memory; address must be in the reserved range. After u2k to the kernel, kernel reads/writes: expect success. Userspace then reads/writes again: expect read to succeed and write to fail.")
+	TESTCASE_CHILD(testcase2, "Process A joins a group and allocates read-only memory; process B joins with RW permission and tries to read/write it: expect read to succeed and write to fail.")
+	TESTCASE_CHILD(testcase3, "Process A allocates read-only, normal and dvpp memory in a loop; expect all addresses to fall in the matching ranges.")
+	TESTCASE_CHILD_MANUAL(testcase4, "Process A allocates read-only memory repeatedly until the address space is exhausted.")
+	TESTCASE_CHILD(testcase5, "Try to allocate dvpp read-only memory; expect failure.")
+	TESTCASE_CHILD(testcase6, "Allocate read-only memory and dump the debug info.")
+	TESTCASE_CHILD(testcase7, "Process A joins a group and allocates read-only and read-write regions; process B joins and tries to write all of them: expect writes to the read-only regions to fail and to the read-write ones to succeed.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_alloc.c b/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_alloc.c
new file mode 100644
index 000000000000..edc189df398c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_alloc.c
@@ -0,0 +1,690 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define ALLOC_LOOP 10
+
+static int dvpp_alloc_group(int spg_id, unsigned long size, struct sp_alloc_info *array, int array_num)
+{
+ int i, ret;
+ for (i = 0; i < array_num; i++) {
+ array[i].flag = SP_DVPP;
+ array[i].spg_id = spg_id;
+ array[i].size = size;
+
+ ret = ioctl_alloc(dev_fd, &array[i]);
+ if (ret < 0) {
+ pr_info("alloc DVPP failed, errno: %d", errno);
+ return -1;
+ }
+ memset(array[i].addr, 0, array[i].size);
+	}
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Userspace process A joins a group
+ * 2. Allocate DVPP shared memory 10 times
+ * 3. Allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The returned addresses do not overlap
+ */
+static int testcase_dvpp_multi_16G_01(void)
+{
+ int ret, i, spg_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+	/* Step 1: userspace process A joins the group */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+	/* Step 2: allocate DVPP shared memory 10 times */
+	dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* Step 3: allocate DVPP pass-through memory 10 times */
+	dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ return -1;
+ }
+ }
+	pr_info("Free DVPP memory\n");
+	sharepool_print();
+
+	return 0;
+}
+
+static int child(sem_t *sync, sem_t *childsync)
+{
+ int i, ret;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+	/* Step 2: allocate DVPP shared memory 10 times */
+	dvpp_alloc_group(996, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* Step 3: allocate DVPP pass-through memory 10 times */
+	dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+ }
+
+	sem_post(childsync);
+	do {
+		ret = sem_wait(sync);
+	} while (ret < 0 && errno == EINTR);
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Userspace processes A/B join the same group, then each performs the
+ *    two steps below
+ * 2. Allocate DVPP shared memory 10 times
+ * 3. Allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The returned addresses do not overlap
+ */
+static int testcase_dvpp_multi_16G_02(void)
+{
+ int ret, status, i, spg_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0600, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+	sem_t *childsync = sem_open(childsync_name, O_CREAT, 0600, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+	/* Step 1: userspace processes A/B join the same group */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* Step 2: allocate DVPP shared memory 10 times */
+	dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* Step 3: allocate DVPP pass-through memory 10 times */
+	dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+/*
+ * Test flow:
+ * 1. Userspace processes A/B join different groups, then each performs the
+ *    two steps below
+ * 2. Allocate DVPP shared memory 10 times
+ * 3. Allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The returned addresses do not overlap
+ */
+static int testcase_dvpp_multi_16G_03(void)
+{
+ int ret, status, i, spg_id = 9116;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0600, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+	sem_t *childsync = sem_open(childsync_name, O_CREAT, 0600, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+	/* Step 1: userspace processes A/B join different groups */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ag_info.pid = pid;
+ ag_info.spg_id = 996;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* Step 2: allocate DVPP shared memory 10 times */
+	dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* Step 3: allocate DVPP pass-through memory 10 times */
+	dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+/*
+ * Test flow:
+ * 1. Allocate DVPP pass-through memory 10 times
+ * 2. Process A joins the shared group
+ * 3. Allocate DVPP shared memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The returned addresses do not overlap
+ */
+static int testcase_dvpp_multi_16G_04(void)
+{
+ int ret, i, spg_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+	/* Step 1: allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+	/* Step 2: userspace process A joins the group */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+	/* Step 3: allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ return -1;
+ }
+ }
+	pr_info("Free DVPP memory\n");
+	sharepool_print();
+
+	return 0;
+}
+
+static int child05(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+	sem_post(childsync);
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Process A allocates DVPP pass-through memory 10 times
+ * 2. Process B joins the shared group
+ * 3. Process A joins the shared group
+ * Expected result:
+ * 1. Steps 1 and 2 succeed; joining the group at step 3 fails
+ */
+static int testcase_dvpp_multi_16G_05(void)
+{
+ int ret, status, i, spg_id = 9116;
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0600, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+	sem_t *childsync = sem_open(childsync_name, O_CREAT, 0600, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child05(sync, childsync));
+ }
+
+	/* Step 1: allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+	/* Step 2: userspace process B joins the group */
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* Step 3: userspace process A joins the group */
+ ag_info.pid = getpid();
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+static int child06(sem_t *sync, sem_t *childsync)
+{
+ int i, ret;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+	/* Step 4: allocate DVPP shared memory 10 times */
+	dvpp_alloc_group(996, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* Step 4: allocate DVPP pass-through memory 10 times */
+	dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+ }
+
+	sem_post(childsync);
+	do {
+		ret = sem_wait(sync);
+	} while (ret < 0 && errno == EINTR);
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Process A allocates DVPP pass-through memory 10 times
+ * 2. Process A joins the shared group
+ * 3. Process B joins the shared group
+ * 4. Process A allocates shared memory; process B allocates shared and
+ *    pass-through memory
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The returned addresses are distinct
+ */
+static int testcase_dvpp_multi_16G_06(void)
+{
+ int ret, status, i, spg_id = 996;
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0600, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+	sem_t *childsync = sem_open(childsync_name, O_CREAT, 0600, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child06(sync, childsync));
+ }
+
+	/* Step 1: allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+	/* Step 2: userspace process A joins the shared group */
+ ag_info.pid = getpid();
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* Step 3: userspace process B joins the shared group */
+	ag_info.pid = pid;
+	ret = ioctl_add_group(dev_fd, &ag_info);
+	if (ret < 0) {
+		pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+		sem_post(sync);
+		goto out;
+	}
+
+	/* Step 4: process A allocates shared memory */
+	dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+static int testcase1(void) { return testcase_dvpp_multi_16G_01(); }
+static int testcase2(void) { return testcase_dvpp_multi_16G_02(); }
+static int testcase3(void) { return testcase_dvpp_multi_16G_03(); }
+static int testcase4(void) { return testcase_dvpp_multi_16G_04(); }
+static int testcase5(void) { return testcase_dvpp_multi_16G_05(); }
+static int testcase6(void) { return testcase_dvpp_multi_16G_06(); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Userspace process A joins a group, allocates DVPP shared memory 10 times, then DVPP pass-through memory 10 times; expect success")
+	TESTCASE_CHILD(testcase2, "Userspace processes A/B join the same group and each allocates DVPP shared memory 10 times, then DVPP pass-through memory 10 times; expect success")
+	TESTCASE_CHILD(testcase3, "Userspace processes A/B join different groups and each allocates DVPP shared memory 10 times, then DVPP pass-through memory 10 times; expect success")
+	TESTCASE_CHILD(testcase4, "Allocate DVPP pass-through memory 10 times, then process A joins a shared group and allocates 10 more times")
+	TESTCASE_CHILD(testcase5, "1. Process A allocates DVPP pass-through memory 10 times 2. Process B joins the shared group 3. Process A joins the shared group")
+	TESTCASE_CHILD(testcase6, "1. Process A allocates DVPP pass-through memory 10 times 2. Process A joins the shared group 3. Process B joins the shared group 4. Process A allocates shared memory; process B allocates shared and pass-through memory")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_k2task.c b/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_k2task.c
new file mode 100644
index 000000000000..a938ffa0382f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_dvpp_multi_16G_k2task.c
@@ -0,0 +1,604 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define ALLOC_LOOP 10
+
+static int dvpp_alloc_group(int spg_id, unsigned long size, struct sp_alloc_info *array, int array_num)
+{
+ int i, ret;
+ for (i = 0; i < array_num; i++) {
+ array[i].flag = SP_DVPP;
+ array[i].spg_id = spg_id;
+ array[i].size = size;
+
+ ret = ioctl_alloc(dev_fd, &array[i]);
+ if (ret < 0) {
+ pr_info("alloc DVPP failed, errno: %d", errno);
+ return -1;
+ }
+ memset(array[i].addr, 0, array[i].size);
+	}
+
+	return 0;
+}
+
+static int dvpp_k2u_group(int spg_id, unsigned long size, unsigned long kva,
+ struct sp_make_share_info *array, int array_num)
+{
+ int i, ret;
+
+	for (i = 0; i < array_num; i++) {
+		array[i].kva = kva;
+		array[i].size = size;
+		array[i].spg_id = spg_id;
+		array[i].sp_flags = SP_DVPP;
+		array[i].pid = getpid();
+
+		ret = ioctl_k2u(dev_fd, &array[i]);
+		if (ret < 0) {
+			pr_info("ioctl_k2u failed, errno: %d", errno);
+			return -1;
+		}
+		memset(array[i].addr, 0, array[i].size);
+	}
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Userspace process A joins a group
+ * 2. Allocate DVPP shared and pass-through memory 10 times each
+ * 3. Make 10 K2TASK shares
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The returned addresses do not overlap
+ */
+static int testcase_dvpp_multi_16G_01(void)
+{
+ int ret, i, group_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* Step 1: userspace process A joins the group */
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+	/* Step 2: allocate DVPP shared and pass-through memory 10 times each */
+	dvpp_alloc_group(group_id, 40960, alloc_group_info, ALLOC_LOOP);
+	dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+	/* Step 3: make 10 K2TASK shares */
+	dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+	pr_info("Free DVPP memory\n");
+	sharepool_print();
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Make 10 DVPP k2task shares
+ * 2. Userspace process A joins a group
+ * 3. Allocate pass-through and shared memory 10 times each
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The returned addresses do not overlap
+ */
+static int testcase_dvpp_multi_16G_02(void)
+{
+ int ret, i, group_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* Step 1: make 10 K2TASK shares */
+ dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+	/* Step 2: userspace process A joins the group */
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+	/* Step 3: allocate DVPP shared and pass-through memory 10 times each */
+ dvpp_alloc_group(group_id, 40960, alloc_group_info, ALLOC_LOOP);
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+	pr_info("Free DVPP memory\n");
+	sharepool_print();
+
+	return 0;
+}
+
+static int child(sem_t *sync, sem_t *childsync)
+{
+ int i, ret;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* Step 2: make 10 K2TASK shares */
+ dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+	/* Step 3: allocate DVPP shared memory 10 times */
+ dvpp_alloc_group(996, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* Step 4: allocate DVPP pass-through memory 10 times */
+ dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ sem_post(sync);
+ break;
+ }
+ }
+ ioctl_vfree(dev_fd, &ka_info);
+
+	sem_post(childsync);
+	do {
+		ret = sem_wait(sync);
+	} while (ret < 0 && errno == EINTR);
+
+	return 0;
+}
+
+/*
+ * Test flow:
+ * 1. Userspace processes A/B join the same group, then each performs the
+ *    three steps below
+ * 2. Make 10 k2task shares
+ * 3. Allocate DVPP shared memory 10 times
+ * 4. Allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The returned addresses do not overlap
+ */
+static int testcase_dvpp_multi_16G_03(void)
+{
+ int ret, status, i, spg_id = 996;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0600, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+	sem_t *childsync = sem_open(childsync_name, O_CREAT, 0600, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* Step 1: userspace processes A/B join the same group */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* Step 2: make 10 K2TASK shares */
+	dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+	/* Step 3: allocate DVPP shared memory 10 times */
+	dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* Step 4: allocate DVPP pass-through memory 10 times */
+	dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+/*
+ * Test flow:
+ * 1. Userspace processes A/B join different groups, then each performs the
+ *    three steps below
+ * 2. Make 10 k2task shares
+ * 3. Allocate DVPP shared memory 10 times
+ * 4. Allocate DVPP pass-through memory 10 times
+ * Expected results:
+ * 1. All of the above operations succeed
+ * 2. The returned addresses do not overlap
+ */
+static int testcase_dvpp_multi_16G_04(void)
+{
+ int ret, status, i, spg_id = 9116;
+ struct sp_alloc_info alloc_group_info[ALLOC_LOOP];
+ struct sp_alloc_info alloc_local_info[ALLOC_LOOP];
+ struct sp_make_share_info k2u_task_info[ALLOC_LOOP];
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = spg_id,
+ };
+
+ char *sync_name = "/dvpp_pass_through";
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0600, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+	sem_t *childsync = sem_open(childsync_name, O_CREAT, 0600, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open child failed");
+ return -1;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+ struct vmalloc_info ka_info = {
+ .size = 40960,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+		return -1;
+ }
+
+	/* Step 1: userspace processes A/B join different groups */
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ag_info.pid = pid;
+ ag_info.spg_id = 996;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group child, ret: %d, errno: %d", ret, errno);
+ sem_post(sync);
+ goto out;
+ }
+
+	/* Step 2: make 10 K2TASK shares */
+	dvpp_k2u_group(SPG_ID_DEFAULT, 40960, ka_info.addr, k2u_task_info, ALLOC_LOOP);
+
+	/* Step 3: allocate DVPP shared memory 10 times */
+	dvpp_alloc_group(spg_id, 40960, alloc_group_info, ALLOC_LOOP);
+
+	/* Step 4: allocate DVPP pass-through memory 10 times */
+	dvpp_alloc_group(SPG_ID_DEFAULT, 40960, alloc_local_info, ALLOC_LOOP);
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ pr_info("Alloc DVPP memory\n");
+ sharepool_print();
+
+ for (i = 0; i < ALLOC_LOOP; i++) {
+ ret = ioctl_free(dev_fd, &alloc_group_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_local_info[i]);
+ if (ret < 0) {
+ pr_info("free local failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_task_info[i]);
+ if (ret < 0) {
+ pr_info("unshare k2task failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+ pr_info("Free DVPP memory\n");
+ sharepool_print();
+ sem_post(sync);
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase_dvpp_multi_16G_01, "1. Userspace process A joins a group 2. Allocate DVPP shared and pass-through memory 10 times 3. Allocate K2TASK memory 10 times")
+	TESTCASE_CHILD(testcase_dvpp_multi_16G_02, "1. Allocate DVPP k2task memory 10 times 2. Userspace process A joins a group 3. Allocate pass-through and shared memory 10 times")
+	TESTCASE_CHILD(testcase_dvpp_multi_16G_03, "1. Userspace processes A/B join the same group and each performs the following 3 actions; 2. Allocate k2task memory 10 times 3. Allocate DVPP shared memory 10 times 4. Allocate DVPP pass-through memory 10 times")
+	TESTCASE_CHILD(testcase_dvpp_multi_16G_04, "1. Userspace processes A/B join different groups and each performs the following 3 actions; 2. Allocate k2task memory 10 times 3. Allocate DVPP shared memory 10 times 4. Allocate DVPP pass-through memory 10 times")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+#include "default_main.c"
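The testcases above all share one synchronization idiom: open a named POSIX semaphore, immediately `sem_unlink()` it (so the name disappears but the handle survives `fork()`), then ping-pong between parent and child with EINTR-safe `sem_wait()` loops. A minimal standalone sketch of that idiom, outside the sharepool harness (`open_private_sem` and `run_handshake` are illustrative names, not part of this patch):

```c
#include <errno.h>
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Open a named semaphore and immediately unlink it: the name leaves the
 * namespace, but the sem_t stays usable in this process and in children
 * created by a later fork(). */
static sem_t *open_private_sem(const char *name)
{
	sem_t *s = sem_open(name, O_CREAT, 0600, 0);

	if (s == SEM_FAILED)
		return NULL;
	sem_unlink(name);
	return s;
}

/* Parent/child ping-pong: the parent posts "go", the child posts "done". */
static int run_handshake(void)
{
	sem_t *go = open_private_sem("/demo_go");
	sem_t *done = open_private_sem("/demo_done");
	pid_t pid;
	int status = 0;

	if (!go || !done)
		return -1;

	pid = fork();
	if (pid < 0)
		return -1;
	if (pid == 0) {			/* child: wait for the parent's signal */
		while (sem_wait(go) < 0 && errno == EINTR)
			;
		sem_post(done);
		_exit(0);
	}

	sem_post(go);			/* parent: release the child ... */
	while (sem_wait(done) < 0 && errno == EINTR)
		;			/* ... and wait for its reply */

	waitpid(pid, &status, 0);
	return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}
```

The early `sem_unlink()` keeps a crashed test run from leaking names into `/dev/shm`.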
+
diff --git a/tools/testing/sharepool/testcase/function_test/test_dvpp_pass_through.c b/tools/testing/sharepool/testcase/function_test/test_dvpp_pass_through.c
new file mode 100644
index 000000000000..010bf0e5bdf6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_dvpp_pass_through.c
@@ -0,0 +1,191 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h> // for wait
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * Test flow:
+ * 1. Userspace process A allocates memory N directly, without joining a group.
+ * 2. Process B queries process A's group id and tries to join the group.
+ * 3. Process A calls u2k to share memory N with the kernel; the kernel module reads N.
+ * 4. Process A frees memory N.
+ * 5. The kernel reads the memory again.
+ * Expected results:
+ * 1. The DVPP pass-through path is entered.
+ * 2. The query fails; joining the group succeeds.
+ * 3. Sharing succeeds and the kernel read succeeds.
+ * 4. The free succeeds.
+ * 5. The kernel read succeeds.
+ */
+static int child(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (!(ret < 0 && errno == ENODEV)) {
+ pr_info("unexpected parent group id. ret %d, errno %d", ret, errno);
+ ret = -1;
+ goto out;
+ } else {
+		pr_info("find first group failed as expected, errno: %d", errno);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getppid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 996,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ ret = -1;
+ goto out;
+ } else {
+ ret = 0;
+ }
+
+out:
+ sem_post(childsync);
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret, status;
+
+ char *sync_name = "/dvpp_pass_through";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ ret = -1;
+ goto out;
+ }
+ sem_unlink(sync_name);
+
+ char *childsync_name = "/dvpp_pass_through_child";
+ sem_t *childsync = sem_open(childsync_name, O_CREAT, O_RDWR, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ ret = -1;
+ goto out;
+ }
+ sem_unlink(childsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(sync, childsync));
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_DVPP,
+ .spg_id = SPG_ID_DEFAULT,
+ .size = 10000,
+ };
+
+ alloc_info.flag = SP_HUGEPAGE;
+ alloc_info.size = PMD_SIZE;
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ sem_post(sync);
+ goto out;
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed, errno: %d", errno);
+ goto out;
+ }
+
+ memset((void *)alloc_info.addr, 'k', alloc_info.size);
+ struct karea_access_info kaccess_info = {
+ .mod = KAREA_CHECK,
+ .value = 'k',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &kaccess_info);
+ if (ret < 0) {
+ pr_info("karea read failed, errno: %d", errno);
+ goto out;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("unshare u2k failed, errno: %d", errno);
+ goto out;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ goto out;
+ }
+/*
+ ret = ioctl_karea_access(dev_fd, &kaccess_info);
+ if (ret < 0) {
+ pr_info("karea read failed, errno: %d", errno);
+ goto out;
+ }
+ */
+
+out:
+ status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "1. Userspace process A allocates memory N directly without joining a group. 2. Process B queries process A's group id and tries to join. 3. Process A calls u2k to share memory N with the kernel; the kernel module reads N. 4. Process A frees memory N. 5. The kernel reads the memory again. Expected results: 1. The DVPP pass-through path is entered. 2. The query fails; joining the group succeeds. 3. Sharing succeeds and the kernel read succeeds. 4. The free succeeds. 5. The kernel read succeeds.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
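testcase1 above folds the child's exit status into the parent's result with `waitpid()` plus the `WIFEXITED`/`WEXITSTATUS` macros: the case passes only if both the parent's own steps and the child succeeded. A standalone sketch of that convention (`merge_child_result` and `demo` are illustrative names):

```c
#include <sys/wait.h>
#include <unistd.h>

/* Collect a child's result the way the testcases do: the run counts as a
 * pass only if the child exited normally with status 0. A child killed by
 * a signal, or exiting non-zero, turns the combined result into a failure
 * even when the parent's own steps succeeded. */
static int merge_child_result(pid_t pid, int parent_ret)
{
	int status = 0;

	if (waitpid(pid, &status, 0) < 0)
		return -1;
	if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
		return -1;
	return parent_ret;
}

/* Demo: fork a child that exits with the given code, then merge. */
static int demo(int child_code)
{
	pid_t pid = fork();

	if (pid < 0)
		return -1;
	if (pid == 0)
		_exit(child_code);
	return merge_child_result(pid, 0);
}
```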
diff --git a/tools/testing/sharepool/testcase/function_test/test_dvpp_readonly.c b/tools/testing/sharepool/testcase/function_test/test_dvpp_readonly.c
new file mode 100644
index 000000000000..efc51a9411b2
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_dvpp_readonly.c
@@ -0,0 +1,147 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+static unsigned long kva_size = 0x200000;
+static unsigned long kva_normal;
+static unsigned long kva_huge;
+
+/*
+ * Test points:
+ * sp_alloc and k2u; in-group and pass-through; huge pages, normal pages, DVPP, non-DVPP
+ *
+ * Test steps:
+ * 1. Join a group (or stay out of any group for pass-through)
+ * 2. Allocate user memory (sp_alloc or k2u) with the read-only attribute
+ * 3. mprotect() is expected to fail
+ * 4. memset is expected to kill the process
+ */
+static int test_route(bool k2u, bool auto_group, unsigned long sp_flags)
+{
+ int spg_id = 0, ret;
+ unsigned long uva;
+ unsigned long size = 0x1000;
+
+ if (auto_group) {
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0) {
+ pr_info("add group failed, %d, para: %s, %s, %lx",
+ errno, k2u ? "k2u" : "sp_alloc",
+ auto_group ? "add_group" : "passthrough", sp_flags);
+ return -1;
+ }
+ }
+
+ if (k2u) {
+ unsigned long kva = (sp_flags & SP_HUGEPAGE_ONLY) ? kva_huge : kva_normal;
+ uva = wrap_k2u(kva, size, spg_id, sp_flags | SP_PROT_RO);
+ } else
+ uva = (unsigned long)wrap_sp_alloc(spg_id, size, sp_flags | SP_PROT_RO);
+
+ if (!uva) {
+ pr_info("alloc user memory failed, %d, para: %s, %s, %lx",
+ errno, k2u ? "k2u" : "sp_alloc",
+ auto_group ? "add_group" : "passthrough", sp_flags);
+ return -1;
+ }
+
+	ret = mprotect((void *)uva, size, PROT_WRITE);
+	if (!(ret && errno == EACCES)) {
+		pr_info("mprotect failed, ret:%d, err:%d, para: %s, %s, %lx",
+			ret, errno, k2u ? "k2u" : "sp_alloc",
+			auto_group ? "add_group" : "passthrough", sp_flags);
+		return -1;
+	}
+ memset((void *)uva, 0, size);
+
+ // should never access this line
+ return -1;
+}
+
+static void pre_hook()
+{
+ kva_normal = wrap_vmalloc(kva_size, false);
+ if (!kva_normal)
+ exit(1);
+ kva_huge = wrap_vmalloc(kva_size, true);
+ if (!kva_huge) {
+ wrap_vfree(kva_normal);
+ exit(1);
+ }
+}
+#define pre_hook pre_hook
+
+static void post_hook()
+{
+ wrap_vfree(kva_huge);
+ wrap_vfree(kva_normal);
+}
+#define post_hook post_hook
+
+
+// sp_alloc, pass-through
+static int testcase1() { return test_route(false, false, 0); }
+static int testcase2() { return test_route(false, false, SP_DVPP); }
+static int testcase3() { return test_route(false, false, SP_HUGEPAGE_ONLY); }
+static int testcase4() { return test_route(false, false, SP_DVPP | SP_HUGEPAGE_ONLY); }
+// sp_alloc, in-group
+static int testcase5() { return test_route(false, true, 0); }
+static int testcase6() { return test_route(false, true, SP_DVPP); }
+static int testcase7() { return test_route(false, true, SP_HUGEPAGE_ONLY); }
+static int testcase8() { return test_route(false, true, SP_DVPP | SP_HUGEPAGE_ONLY); }
+// k2task
+static int testcase9() { return test_route(true, false, 0); }
+static int testcase10() { return test_route(true, false, SP_DVPP); }
+static int testcase11() { return test_route(true, false, SP_HUGEPAGE_ONLY); }
+static int testcase12() { return test_route(true, false, SP_DVPP | SP_HUGEPAGE_ONLY); }
+// k2spg
+static int testcase13() { return test_route(true, true, 0); }
+static int testcase14() { return test_route(true, true, SP_DVPP); }
+static int testcase15() { return test_route(true, true, SP_HUGEPAGE_ONLY); }
+static int testcase16() { return test_route(true, true, SP_DVPP | SP_HUGEPAGE_ONLY); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD_SIGNAL(testcase1, SIGSEGV, "sp_alloc pass-through, normal pages")
+	TESTCASE_CHILD_SIGNAL(testcase2, SIGSEGV, "sp_alloc pass-through, dvpp")
+	TESTCASE_CHILD_SIGNAL(testcase3, SIGSEGV, "sp_alloc pass-through, huge pages")
+	TESTCASE_CHILD_SIGNAL(testcase4, SIGSEGV, "sp_alloc pass-through, dvpp huge pages")
+	TESTCASE_CHILD_SIGNAL(testcase5, SIGSEGV, "sp_alloc in-group, normal pages")
+	TESTCASE_CHILD_SIGNAL(testcase6, SIGSEGV, "sp_alloc in-group, dvpp")
+	TESTCASE_CHILD_SIGNAL(testcase7, SIGSEGV, "sp_alloc in-group, huge pages")
+	TESTCASE_CHILD_SIGNAL(testcase8, SIGSEGV, "sp_alloc in-group, dvpp huge pages")
+	TESTCASE_CHILD_SIGNAL(testcase9, SIGSEGV, "k2task")
+	TESTCASE_CHILD_SIGNAL(testcase10, SIGSEGV, "k2task dvpp")
+	TESTCASE_CHILD_SIGNAL(testcase11, SIGSEGV, "k2task huge pages")
+	TESTCASE_CHILD_SIGNAL(testcase12, SIGSEGV, "k2task dvpp huge pages")
+	TESTCASE_CHILD_SIGNAL(testcase13, SIGSEGV, "k2spg")
+	TESTCASE_CHILD_SIGNAL(testcase14, SIGSEGV, "k2spg dvpp")
+	TESTCASE_CHILD_SIGNAL(testcase15, SIGSEGV, "k2spg huge pages")
+	TESTCASE_CHILD_SIGNAL(testcase16, SIGSEGV, "k2spg dvpp huge pages")
+};
+
+static struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_hugetlb_alloc_hugepage.c b/tools/testing/sharepool/testcase/function_test/test_hugetlb_alloc_hugepage.c
new file mode 100644
index 000000000000..12fc8ab52cae
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_hugetlb_alloc_hugepage.c
@@ -0,0 +1,113 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2022. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Jan 04 07:33:23 2022
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * The point of this test is doing u2k on userspace huge pages. This scenario
+ * exists in the DVPP flow: the userspace huge-page memory is allocated through
+ * the low-level software interface, and the corresponding vma is not marked as
+ * huge, which needs special handling.
+ *
+ * Allocate huge-page memory, access it for read and write, do u2k, have the
+ * kernel check and write it, then read it back from userspace.
+ * Expected: the userspace read/write succeeds, u2k succeeds, the kernel write
+ * succeeds, and the userspace check succeeds.
+ */
+static int testcase_route(int flags, unsigned long len)
+{
+ int i, ret;
+ char *addr;
+
+ addr = ioctl_alloc_huge_memory(0, flags, 0, len);
+ if (!addr) {
+ pr_info("alloc huge memory failed, %d", errno);
+ return -1;
+ } else
+ pr_info("alloc huge memory success, %p, size:%#lx, flags: %d", addr, len, flags);
+
+ memset(addr, 'b', len);
+
+ for (i = 0; i < len; i++) {
+ if (addr[i] != 'b') {
+			pr_info("memset result check failed, i:%d, %c", i, addr[i]);
+ return -1;
+ }
+ }
+	pr_info("check memset success");
+
+ unsigned long kaddr = wrap_u2k(addr, len);
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'b',
+ .addr = kaddr,
+ .size = len,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return -1;
+ }
+
+ karea_info.mod = KAREA_SET;
+ karea_info.value = 'c';
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ return -1;
+ }
+
+ for (i = 0; i < len; i++) {
+ if (addr[i] != 'c') {
+			pr_info("memset result check failed, i:%d, %c", i, addr[i]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+#define TEST_FUNC(num, flags, len) \
+static int testcase##num() \
+{ \
+ return testcase_route(flags, len); \
+}
+
+TEST_FUNC(1, 0, 0x100)
+TEST_FUNC(2, 0, 0x200000)
+TEST_FUNC(3, 0, 0x2000000)
+TEST_FUNC(4, 1, 0x100)
+TEST_FUNC(5, 1, 0x200000)
+TEST_FUNC(6, 1, 0x2000000)
+TEST_FUNC(7, 2, 0x100)
+TEST_FUNC(8, 2, 0x200000)
+TEST_FUNC(9, 2, 0x2000000)
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Do u2k on userspace huge pages. This scenario exists in the DVPP flow: the userspace huge-page memory is allocated through the low-level software interface and the corresponding vma is not marked as huge, which needs special handling")
+	TESTCASE_CHILD(testcase2, "Do u2k on userspace huge pages. This scenario exists in the DVPP flow: the userspace huge-page memory is allocated through the low-level software interface and the corresponding vma is not marked as huge, which needs special handling")
+	TESTCASE_CHILD(testcase3, "Do u2k on userspace huge pages. This scenario exists in the DVPP flow: the userspace huge-page memory is allocated through the low-level software interface and the corresponding vma is not marked as huge, which needs special handling")
+	TESTCASE_CHILD(testcase4, "Do u2k on userspace huge pages. This scenario exists in the DVPP flow: the userspace huge-page memory is allocated through the low-level software interface and the corresponding vma is not marked as huge, which needs special handling")
+	TESTCASE_CHILD(testcase5, "Do u2k on userspace huge pages. This scenario exists in the DVPP flow: the userspace huge-page memory is allocated through the low-level software interface and the corresponding vma is not marked as huge, which needs special handling")
+	TESTCASE_CHILD(testcase6, "Do u2k on userspace huge pages. This scenario exists in the DVPP flow: the userspace huge-page memory is allocated through the low-level software interface and the corresponding vma is not marked as huge, which needs special handling")
+	TESTCASE_CHILD(testcase7, "Do u2k on userspace huge pages. This scenario exists in the DVPP flow: the userspace huge-page memory is allocated through the low-level software interface and the corresponding vma is not marked as huge, which needs special handling")
+	TESTCASE_CHILD(testcase8, "Do u2k on userspace huge pages. This scenario exists in the DVPP flow: the userspace huge-page memory is allocated through the low-level software interface and the corresponding vma is not marked as huge, which needs special handling")
+	TESTCASE_CHILD(testcase9, "Do u2k on userspace huge pages. This scenario exists in the DVPP flow: the userspace huge-page memory is allocated through the low-level software interface and the corresponding vma is not marked as huge, which needs special handling")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
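testcase_route above verifies memory contents byte by byte after every memset/KAREA_SET step. The same check factored into a small standalone helper (`check_fill` is an illustrative name; the testcases inline the loop instead):

```c
#include <stddef.h>

/* Byte-wise verification: returns the index of the first byte that does
 * not hold the expected value, or -1 when the whole buffer matches. This
 * is the loop the testcases run after each memset/KAREA_SET round trip. */
static long check_fill(const char *buf, size_t len, char expect)
{
	for (size_t i = 0; i < len; i++)
		if (buf[i] != expect)
			return (long)i;
	return -1;
}
```

Returning the mismatch index (rather than a bare boolean) keeps the failure message as informative as the inlined loops above.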
diff --git a/tools/testing/sharepool/testcase/function_test/test_k2u.c b/tools/testing/sharepool/testcase/function_test/test_k2u.c
new file mode 100644
index 000000000000..ebae2395ac5d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_k2u.c
@@ -0,0 +1,804 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 23 02:17:32 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define tc1_MSG_KEY 20
+#define tc1_MSG_TYPE 100
+struct msgbuf_alloc_info {
+ long type;
+ union {
+ struct sp_alloc_info alloc_info;
+ struct sp_make_share_info share_info;
+ };
+};
+
+/*
+ * The kernel module allocates and writes memory N and shares it via k2u to a
+ * process that has not joined a group; the process reads memory N
+ * successfully, then the kernel module frees memory N.
+ */
+static int testcase1(void)
+{
+ int ret;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out_free;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_free;
+ }
+
+ char *buf = (char *)k2u_info.addr;
+ for (int i = 0; i < k2u_info.size; i++) {
+ if (buf[i] != 'b') {
+ pr_info("check k2u context failed");
+ ret = -1;
+ break;
+ }
+ }
+
+ if (ioctl_unshare(dev_fd, &k2u_info)) {
+ pr_info("unshare memory failed, errno: %d", errno);
+ ret = -1;
+ }
+
+out_free:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret < 0 ? -1 : 0;
+}
+
+/*
+ * The kernel module allocates and writes memory N and shares it via k2u to a
+ * process that has joined a group. The process reads the memory successfully.
+ * A new process B joins the group and reads memory N successfully. The kernel
+ * module frees memory N. After the kernel performs unshare, userspace
+ * processes can no longer access the memory.
+ */
+static jmp_buf testcase2_env;
+static int testcase2_sigsegv_result = -1;
+static void testcase2_sigsegv_handler(int num)
+{
+ pr_info("segment fault occurs");
+ testcase2_sigsegv_result = 0;
+ longjmp(testcase2_env, 1);
+}
+
+static int testcase2(void)
+{
+ int ret, status = 0, group_id = 10;
+ pid_t pid;
+
+ char *sync_name = "/testcase2_k2u";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *sync_name2 = "/testcase2_k2u2";
+	sem_t *sync2 = sem_open(sync_name2, O_CREAT, O_RDWR, 0);
+ if (sync2 == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name2);
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = group_id,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out;
+ }
+
+ char *buf = (char *)k2u_info.addr;
+ for (int i = 0; i < k2u_info.size; i++) {
+ if (buf[i] != 'b') {
+ pr_info("check k2u context failed");
+ ret = -1;
+ goto out;
+ }
+ }
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ ret = -1;
+ goto out;
+ } else if (pid == 0) {
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != group_id) {
+ pr_info("child: unexpected group_id: %d", ret);
+ sem_post(sync2);
+ exit(-1);
+ }
+
+ char *buf = (char *)k2u_info.addr;
+ for (int i = 0; i < k2u_info.size; i++) {
+ if (buf[i] != 'b') {
+				pr_info("child: check k2u context failed, buf:%lx, i:%d, buf[i]:%d",
+					(unsigned long)buf, i, (int)buf[i]);
+ sem_post(sync2);
+ exit(-1);
+ }
+ }
+ sem_post(sync2);
+ exit(0);
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ sem_post(sync);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out_wait;
+ }
+
+ do {
+ ret = sem_wait(sync2);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ goto out_wait;
+ }
+
+ testcase2_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase2_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase2_env);
+ if (!ret) {
+ *(char *)k2u_info.addr = 'a';
+ pr_info("setjmp success, set char as 'a' success.");
+ }
+
+ if (testcase2_sigsegv_result) {
+ pr_info("ioctl unshare has no effect");
+ ret = -1;
+ goto out_wait;
+ } else
+ ret = 0;
+
+out_wait:
+ waitpid(pid, &status, 0);
+ if (ret || !WIFEXITED(status) || WEXITSTATUS(status))
+ ret = -1;
+ if (WIFSIGNALED(status))
+ pr_info("child killed by signal: %d", WTERMSIG(status));
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+/*
+ * The kernel module allocates and writes memory N and shares it via k2u to a
+ * process that has joined a group; every process in the group reads memory N
+ * successfully. The kernel module frees memory N. After the kernel performs
+ * unshare, userspace processes can no longer access the memory.
+ */
+static int childprocess3(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ sem_post(childsync);
+ return -1;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_make_share_info *k2u_info = &msgbuf.share_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*k2u_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ sem_post(childsync);
+ return -1;
+ }
+
+ char *buf = (char *)k2u_info->addr;
+ for (int i = 0; i < k2u_info->size; i++) {
+ if (buf[i] != 'p') {
+			pr_info("child: check k2u context failed, buf:%lx, i:%d, buf[i]:%d",
+				(unsigned long)buf, i, (int)buf[i]);
+ sem_post(childsync);
+ return -1;
+ }
+ }
+ sem_post(childsync);
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret, status = 0, group_id = 18;
+ pid_t pid;
+
+ char *sync_name = "/testcase2_k2u";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *sync_name2 = "/testcase2_k2u2";
+	sem_t *childsync = sem_open(sync_name2, O_CREAT, O_RDWR, 0);
+ if (childsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name2);
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'p',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out;
+ }
+
+ pid = fork();
+	if (pid < 0) {
+		pr_info("fork failed");
+		ret = -1;
+		goto out;
+ } else if (pid == 0) {
+ exit(childprocess3(sync, childsync));
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out_fork;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = group_id,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+		goto out_fork;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto out_fork;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.share_info, &k2u_info, sizeof(k2u_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(k2u_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ goto out_fork;
+ }
+
+ sem_post(sync);
+
+ char *buf = (char *)k2u_info.addr;
+ for (int i = 0; i < k2u_info.size; i++) {
+ if (buf[i] != 'p') {
+ pr_info("check k2u context failed");
+ ret = -1;
+ goto out_fork;
+ }
+ }
+
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ goto out_fork;
+ }
+
+ testcase2_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase2_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase2_env);
+ if (!ret)
+ *(char *)k2u_info.addr = 'a';
+ if (testcase2_sigsegv_result) {
+ pr_info("ioctl unshare has no effect");
+ ret = -1;
+ goto out_fork;
+ }
+
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("childprocess3 exits unexpected");
+ ret = -1;
+ } else
+ ret = 0;
+ goto out;
+
+out_fork:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+/*
+ * The kernel module allocates and writes memory N and shares it via k2u to a
+ * group (the process has joined the group); every process in the group reads
+ * N successfully. Process B joins the group and reads N successfully. The
+ * kernel module stops sharing, and every process in the group now fails to
+ * read N. The kernel module shares it to the group via k2u again (this time
+ * using B's pid), and every process in the group reads N successfully. The
+ * kernel module frees memory N.
+ */
+
+static pid_t fork_and_add_group(int idx, int group_id, int (*child)(int, sem_t *, sem_t *),
+ sem_t *sync, sem_t *childsync)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx, sync, childsync));
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+ } else
+ return pid;
+}
+
+static int per_test_init(int i, sem_t **sync, sem_t **childsync)
+{
+ char buf[100];
+ sprintf(buf, "/test_k2u%d", i);
+ *sync = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (*sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+
+ sprintf(buf, "/test_k2u_child%d", i);
+ *childsync = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (*childsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+
+ return 0;
+}
+
+#define TEST4_SHM_KEY 1348
+#define TEST4_PROC_NUM 5
+
+struct shm_data {
+ struct sp_make_share_info k2u_info;
+ int results[TEST4_PROC_NUM];
+};
+
+static int childprocess4(int idx, sem_t *sync, sem_t *childsync)
+{
+ int ret;
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int shmid = shmget(TEST4_SHM_KEY, sizeof(struct shm_data), IPC_CREAT | 0666);
+ if (shmid < 0) {
+ pr_info("shmget failed, errno: %d", errno);
+ goto error;
+ }
+
+ struct shm_data *shmd = shmat(shmid, NULL, 0);
+ if (shmd == (void *)-1) {
+ pr_info("shmat failed, errno: %d", errno);
+ goto error;
+ }
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret < 0) {
+ pr_info("get group id failed");
+ goto error;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, shmd->k2u_info.addr)) {
+ pr_info("unexpected k2u addr: 0x%lx", shmd->k2u_info.addr);
+ goto error;
+ }
+
+ char *buf = (char *)shmd->k2u_info.addr;
+ for (int i = 0; i < shmd->k2u_info.size; i++) {
+ if (buf[i] != 'x') {
+ pr_info("memory check failed");
+ goto error;
+ }
+ }
+ shmd->results[idx] = 0;
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ testcase2_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase2_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase2_env);
+ if (!ret)
+ *(char *)shmd->k2u_info.addr = 'a';
+ if (testcase2_sigsegv_result) {
+ pr_info("ioctl unshare has no effect");
+ goto error;
+ }
+ shmd->results[idx] = 0;
+
+ sem_post(childsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ if (!ioctl_judge_addr(dev_fd, shmd->k2u_info.addr)) {
+ pr_info("unexpected k2u addr: 0x%lx", shmd->k2u_info.addr);
+ goto error;
+ }
+
+ buf = (char *)shmd->k2u_info.addr;
+ for (int i = 0; i < shmd->k2u_info.size; i++) {
+ if (buf[i] != 'l') {
+ pr_info("memory check failed");
+ goto error;
+ }
+ }
+ shmd->results[idx] = 0;
+ sem_post(childsync);
+
+ return 0;
+
+error:
+ sem_post(childsync);
+ return -1;
+}
+
+static int testcase4(void)
+{
+ int child_num, i, ret;
+ int group_id = 15;
+
+ sem_t *syncs[TEST4_PROC_NUM];
+ sem_t *syncchilds[TEST4_PROC_NUM];
+ pid_t childs[TEST4_PROC_NUM] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ int shmid = shmget(TEST4_SHM_KEY, sizeof(struct shm_data), IPC_CREAT | 0666);
+ if (shmid < 0) {
+ pr_info("shmget failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct shm_data *shmd = shmat(shmid, NULL, 0);
+ if (shmd == (void *)-1) {
+ pr_info("shmat failed, errno: %d", errno);
+ return -1;
+ }
+ memset(shmd, 0, sizeof(*shmd));
+ for (i = 0; i < TEST4_PROC_NUM; i++)
+ shmd->results[i] = -1;
+ for (i = 0; i < TEST4_PROC_NUM - 1; i++) {
+ if (per_test_init(i, syncs + i, syncchilds + i)) {
+ child_num = i;
+ goto unfork;
+ }
+ pid_t pid = fork_and_add_group(i, group_id, childprocess4, syncs[i], syncchilds[i]);
+ if (pid < 0) {
+ child_num = i;
+ goto unfork;
+ }
+ childs[i] = pid;
+ }
+ child_num = i;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ goto unfork;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'x',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto vfree;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = group_id,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto vfree;
+ }
+
+ memcpy(&shmd->k2u_info, &k2u_info, sizeof(k2u_info));
+
+ for (i = 0; i < child_num; i++)
+ sem_post(syncs[i]);
+ for (i = 0; i < child_num; i++) {
+ do {
+ ret = sem_wait(syncchilds[i]);
+ } while (ret < 0 && errno == EINTR);
+ if (shmd->results[i]) {
+			pr_info("test4 child%d read k2u memory failed", i);
+ goto unshare;
+ }
+ shmd->results[i] = -1;
+ }
+
+// TODO: known issue: a process that joins the group later cannot share
+// k2spg memory; drop this guard and retest once it is fixed
+#if 1
+	// Create a new process B, add it to the group, and check that B can
+	// read the shared memory
+ if (per_test_init(child_num, syncs + child_num, syncchilds + child_num))
+ goto unshare;
+ childs[child_num] = fork_and_add_group(child_num, group_id, childprocess4,
+ syncs[child_num], syncchilds[child_num]);
+ if (childs[child_num] < 0)
+ goto unshare;
+ child_num++;
+
+ sem_post(syncs[child_num - 1]);
+ do {
+ ret = sem_wait(syncchilds[child_num - 1]);
+ } while (ret < 0 && errno == EINTR);
+ if (shmd->results[child_num - 1]) {
+		pr_info("test4 child%d read k2u memory failed", child_num - 1);
+ goto unshare;
+ }
+ shmd->results[child_num - 1] = -1;
+#endif
+
+	// After unshare, reads from the other processes must fail
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl unshare failed, errno: %d", errno);
+ goto unshare;
+ }
+
+ for (i = 0; i < child_num; i++)
+ sem_post(syncs[i]);
+ for (i = 0; i < child_num; i++) {
+ do {
+ ret = sem_wait(syncchilds[i]);
+ } while (ret < 0 && errno == EINTR);
+ if (shmd->results[i]) {
+			pr_info("test4 child%d unshare check failed", i);
+ goto unshare;
+ }
+ shmd->results[i] = -1;
+ }
+
+	// Call k2u again; the other processes read the memory successfully
+ karea_info.value = 'l';
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto vfree;
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto vfree;
+ }
+ memcpy(&shmd->k2u_info, &k2u_info, sizeof(k2u_info));
+
+ for (i = 0; i < child_num; i++)
+ sem_post(syncs[i]);
+ for (i = 0; i < child_num; i++) {
+ do {
+ ret = sem_wait(syncchilds[i]);
+ } while (ret < 0 && errno == EINTR);
+ if (shmd->results[i]) {
+			pr_info("test4 child%d read k2u memory failed", i);
+ goto unshare;
+ }
+ shmd->results[i] = -1;
+ }
+
+ for (i = 0; i < child_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ }
+
+ ret = ioctl_vfree(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vfree failed.");
+ return -1;
+ }
+
+ return 0;
+
+unshare:
+ ioctl_unshare(dev_fd, &k2u_info);
+vfree:
+ ioctl_vfree(dev_fd, &ka_info);
+unfork:
+ for (i = 0; i < child_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ }
+ return -1;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "The kernel module allocates and writes memory N and shares it via k2u to a process that has not joined a group; the process reads memory N successfully, then the kernel module frees memory N.")
+	TESTCASE_CHILD(testcase2, "The kernel module allocates and writes memory N and shares it via k2u to a process that has joined a group. The process reads the memory successfully. A new process B joins the group and reads memory N successfully. The kernel module frees memory N. After the kernel performs unshare, userspace processes can no longer access the memory.")
+	TESTCASE_CHILD(testcase3, "The kernel module allocates and writes memory N and shares it via k2u to a process that has joined a group; every process in the group reads memory N successfully. The kernel module frees memory N. After the kernel performs unshare, userspace processes can no longer access the memory.")
+	TESTCASE_CHILD(testcase4, "The kernel module allocates and writes memory N and shares it via k2u to a group; every process in the group reads N successfully. Process B joins the group and reads N successfully. The kernel module stops sharing, and every process in the group fails to read N. The kernel module shares via k2u again (using B's pid), and every process reads N successfully. The kernel module frees memory N.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_mm_mapped_to_multi_groups.c b/tools/testing/sharepool/testcase/function_test/test_mm_mapped_to_multi_groups.c
new file mode 100644
index 000000000000..1144655a97b9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_mm_mapped_to_multi_groups.c
@@ -0,0 +1,435 @@
+#include <stdlib.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include "sharepool_lib.h"
+
+#define PROCESS_NUM 20
+#define THREAD_NUM 20
+#define GROUP_NUM 50
+#define ALLOC_TYPES 4
+
+static pthread_mutex_t mutex;
+static int group_ids[GROUP_NUM];
+static int add_success, add_fail;
+
+int query_func(int *group_num, int *ids)
+{
+ int ret = 0;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ if (!ids)
+ ids = spg_ids;
+ *group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = group_num,
+ .spg_ids = ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else {
+ return 0;
+ }
+}
+
+int work_func(int group_id)
+{
+ int ret = 0, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TYPES] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TYPES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+	// hugepage
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// hugepage DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal page DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal page
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TYPES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+			pr_info("ioctl alloc failed at alloc #%d.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ //pr_info("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		//pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		//pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TYPES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ //close_device(dev_fd);
+ return 0;
+
+error:
+ //close_device(dev_fd);
+ return -1;
+}
+
+void *thread_query_and_work(void *arg)
+{
+ int ret = -1;
+ int group_num = 0;
+ int ids[GROUP_NUM];
+
+ for (int i = 0; i < 10 && ret; i++)
+ ret = query_func(&group_num, ids);
+
+ if (ret) {
+ pr_info("query_func failed: %d", ret);
+ pthread_exit((void *)0);
+ }
+
+	for (int i = 0; i < group_num; i++) {
+		ret = work_func(ids[i]);
+		if (ret != 0) {
+			pr_info("\nthread %lu finished with an error, spg_id: %d\n", pthread_self(), ids[i]);
+			pthread_exit((void *)1);
+		}
+	}
+
+ pthread_exit((void *)0);
+}
+
+void *thread_add_group(void *arg)
+{
+ int ret = 0;
+	for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (pthread_mutex_lock(&mutex) != 0) {
+ pr_info("get pthread mutex failed.");
+ }
+ if (ret < 0)
+ add_fail++;
+ else
+ add_success++;
+ pthread_mutex_unlock(&mutex);
+ }
+ pthread_exit((void *)0);
+}
+
+static int process_routine(void)
+{
+ int ret = 0;
+
+ // threads for alloc and u2k k2u
+ pthread_t tid1[THREAD_NUM];
+ for (int i = 0; i < THREAD_NUM; i++) {
+ ret = pthread_create(tid1 + i, NULL, thread_query_and_work, NULL);
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+
+	// N threads each try to join M groups: N*M attempts, of which only M should succeed
+ pthread_t tid2[THREAD_NUM];
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid2 + j, NULL, thread_add_group, NULL);
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+
+ // wait for add_group threads to return
+ for (int j = 0; j < THREAD_NUM; j++) {
+ void *tret;
+ ret = pthread_join(tid2[j], &tret);
+ if (ret < 0) {
+ pr_info("thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("thread %d failed.", j);
+ ret = -1;
+ } else {
+ pr_info("add group thread %d return success!!", j);
+ }
+ }
+
+ // wait for work threads to return
+ for (int i = 0; i < THREAD_NUM; i++) {
+ void *tret;
+ ret = pthread_join(tid1[i], &tret);
+ if (ret < 0) {
+ pr_info("thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("thread %d failed.", i);
+ ret = -1;
+ } else {
+ pr_info("work thread %d return success!!", i);
+ }
+ }
+
+ return ret;
+}
+
+/* testcase1: 10 threads run the query + alloc routine over all joined groups, while another 10 threads keep joining new groups */
+static int testcase1(void)
+{
+ int ret = 0;
+
+ ret = process_routine();
+
+ int group_query_final;
+ query_func(&group_query_final, NULL);
+ pr_info("group query final is %d", group_query_final);
+ if (group_query_final != GROUP_NUM)
+ ret = -1;
+
+ pr_info("add_success: %d, add_fail: %d", add_success, add_fail);
+ if (add_success != GROUP_NUM || (add_fail + add_success) != THREAD_NUM * GROUP_NUM)
+ ret = -1;
+
+ add_success = add_fail = 0;
+ return ret;
+}
+
+/* testcase2: fork new processes that call alloc/k2u/u2k etc. while spawning new threads to join new groups */
+static int testcase2(void)
+{
+ int ret = 0;
+
+ ret = process_routine();
+
+ // fork child processes, they should not copy parent's group
+ pid_t childs[PROCESS_NUM];
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ exit(process_routine());
+ }
+ childs[k] = pid;
+ }
+
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ int status;
+ if (waitpid(childs[k], &status, 0) < 0) {
+ pr_info("waitpid failed");
+ ret = -1;
+ }
+ if (status != 0) {
+			pr_info("child process %d pid %d exited unexpectedly, status = %d", k, childs[k], status);
+ ret = -1;
+ } else {
+ pr_info("process %d exit success", k);
+ }
+ childs[k] = 0;
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "10 threads run the query + alloc routine over all joined groups, while another 10 threads keep joining new groups")
+	TESTCASE_CHILD(testcase2, "processes call alloc/k2u/u2k etc. while spawning new threads to join new groups")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
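testcase1 above checks an invariant: N threads each attempt to join the same M groups, and since a process can be a member of a given group only once, exactly M of the N*M attempts should succeed (`add_success == GROUP_NUM`, `add_success + add_fail == THREAD_NUM * GROUP_NUM`). The invariant can be modelled without the driver; the CAS-based `joiner` below is a hypothetical stand-in for `wrap_add_group()`, not its real semantics:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

#define N_THREADS 20
#define N_GROUPS 50

static atomic_int joined[N_GROUPS];	/* 0 = group not joined yet */
static atomic_int ok, fail;

/* Model of the join: the first attempt on each group wins, every later
 * attempt by the same process fails, regardless of which thread races. */
static void *joiner(void *arg)
{
	(void)arg;
	for (int g = 0; g < N_GROUPS; g++) {
		int expected = 0;

		if (atomic_compare_exchange_strong(&joined[g], &expected, 1))
			atomic_fetch_add(&ok, 1);
		else
			atomic_fetch_add(&fail, 1);
	}
	return NULL;
}

static int run_model(void)
{
	pthread_t tids[N_THREADS];

	for (int i = 0; i < N_THREADS; i++)
		if (pthread_create(&tids[i], NULL, joiner, NULL))
			return -1;
	for (int i = 0; i < N_THREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}
```

Whatever interleaving the scheduler picks, the success counter lands on exactly N_GROUPS, which is the property testcase1 asserts against the real ioctl path.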
diff --git a/tools/testing/sharepool/testcase/function_test/test_non_dvpp_group.c b/tools/testing/sharepool/testcase/function_test/test_non_dvpp_group.c
new file mode 100644
index 000000000000..008b0f803cb9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_non_dvpp_group.c
@@ -0,0 +1,167 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2022. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Apr 26 09:20:13 2022
+ */
+
+/*
+ * Tests for non_dvpp groups:
+ * 1. Process joins a group with the non_dvpp flag, allocates normal memory: succeeds
+ * 2. Process joins a group with the non_dvpp flag, allocates DVPP memory: fails
+ * 3. Process joins a group with the non_dvpp flag, k2u of normal memory: succeeds
+ * 4. Process joins a group with the non_dvpp flag, k2u of DVPP memory: fails
+ * 5. Process joins two groups, one normal and one non_dvpp; allocate normal and DVPP memory from each, varying the allocation order and the join order
+ * 6. Process joins two groups, one normal and one non_dvpp; allocate normal and DVPP memory from each
+ * Repeat all of the above with hugepages
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+// allocation through a joined group
+static int case1(unsigned long flag)
+{
+ int ret = wrap_add_group_non_dvpp(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (ret < 0) {
+ pr_info("add group failed: %d", ret);
+ abort();
+ }
+
+ char *buf = wrap_sp_alloc(ret, 1024, flag);
+ if (buf == (void *)-1)
+ return -1;
+
+ *buf = 'a';
+
+ return 0;
+}
+
+static int testcase1(void) { return case1(0); } // small pages, normal memory, succeeds
+static int testcase2(void) { return case1(SP_HUGEPAGE_ONLY); } // hugepages, normal memory, succeeds
+static int testcase3(void) { return !case1(SP_DVPP); } // small pages, DVPP memory, fails
+static int testcase4(void) { return !case1(SP_DVPP|SP_HUGEPAGE_ONLY); } // hugepages, DVPP memory, fails
+
+// direct allocation (SPG_ID_DEFAULT)
+static int case2(unsigned long flag)
+{
+ int ret = wrap_add_group_non_dvpp(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (ret < 0) {
+ pr_info("add group failed: %d", ret);
+ abort();
+ }
+
+ char *buf = wrap_sp_alloc(SPG_ID_DEFAULT, 1024, flag);
+ if (buf == (void *)-1)
+ return -1;
+
+ *buf = 'a';
+
+ return 0;
+}
+
+static int testcase5(void) { return case2(0); } // small pages, normal memory, succeeds
+static int testcase6(void) { return case2(SP_HUGEPAGE_ONLY); } // hugepages, normal memory, succeeds
+static int testcase7(void) { return case2(SP_DVPP); } // small pages, DVPP memory, succeeds
+static int testcase8(void) { return case2(SP_DVPP|SP_HUGEPAGE_ONLY); } // hugepages, DVPP memory, succeeds
+
+// k2group
+static int case3(unsigned long flag)
+{
+ int ret = wrap_add_group_non_dvpp(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (ret < 0) {
+ pr_info("add group failed: %d", ret);
+ abort();
+ }
+
+ unsigned long kva = wrap_vmalloc(1024, flag & SP_HUGEPAGE_ONLY);
+ if (!kva) {
+ pr_info("alloc kva failed: %#lx", flag);
+ abort();
+ }
+
+ char *buf = (char *)wrap_k2u(kva, 1024, ret, flag & ~SP_HUGEPAGE_ONLY);
+ if (!buf)
+ return -1;
+
+ *buf = 'a';
+
+ return 0;
+}
+
+static int testcase9(void) { return case3(0); } // small pages, normal memory, succeeds
+static int testcase10(void) { return case3(SP_HUGEPAGE_ONLY); } // hugepages, normal memory, succeeds
+static int testcase11(void) { return !case3(SP_DVPP); } // small pages, DVPP memory, fails
+static int testcase12(void) { return !case3(SP_DVPP|SP_HUGEPAGE_ONLY); } // hugepages, DVPP memory, fails
+
+// k2task
+static int case4(unsigned long flag)
+{
+ int ret = wrap_add_group_non_dvpp(getpid(), PROT_READ|PROT_WRITE, SPG_ID_AUTO);
+ if (ret < 0) {
+ pr_info("add group failed: %d", ret);
+ abort();
+ }
+
+ unsigned long kva = wrap_vmalloc(1024, flag & SP_HUGEPAGE_ONLY);
+ if (!kva) {
+ pr_info("alloc kva failed: %#lx", flag);
+ abort();
+ }
+
+ char *buf = (char *)wrap_k2u(kva, 1024, SPG_ID_DEFAULT, flag & ~SP_HUGEPAGE_ONLY);
+	if (!buf)
+ return -1;
+
+ *buf = 'a';
+
+ return 0;
+}
+static int testcase13(void) { return case4(0); } // small pages, normal memory, succeeds
+static int testcase14(void) { return case4(SP_HUGEPAGE_ONLY); } // hugepages, normal memory, succeeds
+static int testcase15(void) { return case4(SP_DVPP); } // small pages, DVPP memory, succeeds
+static int testcase16(void) { return case4(SP_DVPP|SP_HUGEPAGE_ONLY); } // hugepages, DVPP memory, succeeds
+
+static struct testcase_s testcases[] = {
+// TESTCASE_CHILD(testcase1, true)
+// TESTCASE_CHILD(testcase2, true)
+// TESTCASE_CHILD(testcase3, false)
+// TESTCASE_CHILD(testcase4, false)
+// TESTCASE_CHILD(testcase5, true)
+// TESTCASE_CHILD(testcase6, true)
+// TESTCASE_CHILD(testcase7, true)
+// TESTCASE_CHILD(testcase8, true)
+// TESTCASE_CHILD(testcase9, true)
+// TESTCASE_CHILD(testcase10, true)
+// TESTCASE_CHILD(testcase11, false)
+// TESTCASE_CHILD(testcase12, false)
+// TESTCASE_CHILD(testcase13, true)
+// TESTCASE_CHILD(testcase14, true)
+// TESTCASE_CHILD(testcase15, true)
+// TESTCASE_CHILD(testcase16, true)
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+#include "default_main.c"
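The sixteen testcases above reduce to one rule: an allocation or k2u bound to a non_dvpp group must reject `SP_DVPP`, while the direct `SPG_ID_DEFAULT` path (case2/case4) accepts any flag combination. A sketch of that expectation table follows; the flag bit values and `SPG_ID_DEFAULT` value are made up for illustration and do not match the real share-pool headers:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical values, for illustration only. */
#define SP_DVPP			(1UL << 2)
#define SP_HUGEPAGE_ONLY	(1UL << 1)
#define SPG_ID_DEFAULT		0

/* Expectation table encoded by testcase1..testcase16: memory bound to a
 * non_dvpp group (explicit spg_id, as in case1/case3) may not carry
 * SP_DVPP; the direct SPG_ID_DEFAULT path has no such restriction. */
static bool op_should_succeed(bool non_dvpp_group, int spg_id, unsigned long flags)
{
	if (non_dvpp_group && spg_id != SPG_ID_DEFAULT && (flags & SP_DVPP))
		return false;
	return true;
}
```

Reading the testcases as rows of this predicate explains why testcase3/4 and testcase11/12 invert their result with `!case...()` while the SPG_ID_DEFAULT variants do not.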
diff --git a/tools/testing/sharepool/testcase/function_test/test_sp_ro.c b/tools/testing/sharepool/testcase/function_test/test_sp_ro.c
new file mode 100644
index 000000000000..769feb7afa94
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_sp_ro.c
@@ -0,0 +1,719 @@
+#include "sharepool_lib.h"
+#include "sem_use.h"
+#include <stdlib.h>
+#include <errno.h>
+#include <assert.h>
+#include <pthread.h>
+#include <sys/types.h>
+
+#define PROC_NUM 8
+#define THREAD_NUM 5
+#define GROUP_NUM 16
+#define ALLOC_TYPE 4
+#define REPEAT_TIMES 2
+#define ALLOC_SIZE PAGE_SIZE
+#define PROT (PROT_READ | PROT_WRITE)
+
+static int group_ids[GROUP_NUM];
+static int default_id = 1;
+static int semid;
+
+static int add_multi_group();
+static int check_multi_group();
+static int delete_multi_group();
+static int process();
+void *thread_and_process_helper(int group_id);
+void *del_group_thread(void *arg);
+void *del_proc_from_group(void *arg);
+
+// SP_PROT_FOCUS memory allocation test
+static int testcase1(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc memory with SP_PROT_FOCUS | SP_PROT_RO
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = -1;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = 0;
+ }
+
+ // mprotect() should fail
+ ret = mprotect(pret, page_size, PROT_WRITE);
+ if (!(ret && errno == EACCES)) {
+ pr_info("mprotect should fail, %d, %d\n", ret, errno);
+ return ret;
+ }
+
+ // memset should fail and generate a SIGSEGV
+ memset((void *)pret, 0, page_size);
+
+ return -1;
+}
+
+// SP_PROT_FOCUS standalone-use test
+static int testcase2(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc memory with SP_PROT_FOCUS, should fail
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_PROT_FOCUS | SP_HUGEPAGE invalid-use test
+static int testcase3(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc memory with SP_PROT_FOCUS | SP_HUGEPAGE, should fail
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_HUGEPAGE);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_PROT_FOCUS | SP_DVPP invalid-use test
+static int testcase4(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc memory with SP_PROT_FOCUS | SP_DVPP, should fail
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_DVPP);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_PROT_FOCUS | SP_PROT_RO | SP_HUGEPAGE combined-use test, expected to succeed
+static int testcase5(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+	// alloc memory with SP_PROT_FOCUS | SP_PROT_RO | SP_HUGEPAGE, should succeed
+	pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO | SP_HUGEPAGE);
+	pr_info("pret is %p", pret);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = -1;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = 0;
+ }
+
+ return ret;
+}
+
+// SP_PROT_FOCUS | SP_PROT_RO | SP_DVPP invalid-use test
+static int testcase6(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc memory with SP_PROT_FOCUS | SP_PROT_RO | SP_DVPP, should fail
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO | SP_DVPP);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_RO area memory upper-limit test
+static int testcase7(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+	unsigned long sp_ro_1GB = 1UL << 30; /* 1 GiB */
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ // alloc 64GB memory in SP_RO area
+	for (int i = 0; i < 64; i++) {
+		pret = wrap_sp_alloc(default_id, sp_ro_1GB, SP_PROT_FOCUS | SP_PROT_RO);
+		if (pret == (void *)-1) {
+			pr_info("process %d alloc 1GB attempt %d failed.", getpid(), i + 1);
+			ret = -1;
+			return ret;
+		} else {
+			pr_info("process %d alloc 1GB attempt %d succeeded.", getpid(), i + 1);
+			ret = 0;
+		}
+ }
+
+ // alloc another 4k memory in SP_RO area
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc another 4k failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc another 4k success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
+
+// SP_RO area minimum allocation granularity test
+static int testcase8(void)
+{
+ int ret = 0;
+ void *pret;
+ unsigned long page_size = PAGE_SIZE;
+ int times = 32768;
+
+ // add process to group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+	// the minimum block in the SP_RO area is 2M, so 32768 allocations can exhaust the whole 64GB area
+	for (int i = 0; i < times; i++) {
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+ unsigned long addr_to_print = (unsigned long)pret;
+		pr_info("memory address is %lx", addr_to_print);
+ if (pret == (void *)-1) {
+			pr_info("process %d alloc memory attempt %d failed.", getpid(), i + 1);
+ ret = -1;
+ return ret;
+ } else {
+			pr_info("process %d alloc memory attempt %d succeeded.", getpid(), i + 1);
+ ret = 0;
+ }
+ }
+
+ // alloc another 4k memory in SP_RO area
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc another 4k failed.", getpid());
+ ret = 0;
+ } else {
+ pr_info("process %d alloc another 4k success.", getpid());
+ ret = -1;
+ }
+
+ return ret;
+}
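testcase7 and testcase8 probe the same limit from two directions: an assumed 64GB SP_RO area, consumed either as 64 allocations of 1GB, or, with a 2M minimum block, as 32768 page-sized allocations (each 4k request still burns a whole 2M block). A sketch of the arithmetic both cases rely on, under those assumptions:

```c
#include <assert.h>

#define SP_RO_TOTAL	(64UL << 30)	/* 64 GiB SP_RO area (test assumption) */
#define SP_RO_MIN_BLOCK	(2UL << 20)	/* 2 MiB minimum allocation granularity */

/* Number of allocations that exhaust the SP_RO area when every request,
 * however small, consumes one minimum-sized block. */
static unsigned long sp_ro_max_allocs(unsigned long total, unsigned long min_block)
{
	return total / min_block;
}
```

This is why testcase8's `times` is 32768 and why the follow-up 4k allocation in both cases is expected to fail.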
+
+// multi-process SP_RO alloc/free concurrency stress test
+static int testcase9(void)
+{
+ int proc_num = 1000;
+ int prints_num = 3;
+ int ret = 0;
+ unsigned long page_size = PAGE_SIZE;
+ int childs[proc_num];
+ int prints[prints_num];
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("parent %d add into group failed. errno: %d", getpid(), ret);
+ return -1;
+ }
+
+ // create process alloc and free SP_RO memory
+ for (int i = 0; i < proc_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+
+ while (1) {
+ void * pret;
+ pret = wrap_sp_alloc(default_id, page_size, SP_PROT_FOCUS | SP_PROT_RO);
+ if (pret == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ ret = wrap_sp_free(pret);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+ }
+		} else {
+ childs[i] = pid;
+ }
+ }
+
+ // print sharepool maintenance interface
+	for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+		} else {
+ prints[i] = pid;
+ }
+ }
+
+ sleep(1);
+
+	// clean up the child processes
+	for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ int status;
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ int status;
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD_SIGNAL(testcase1, SIGSEGV, "SP_PROT_FOCUS memory allocation test")
+	TESTCASE_CHILD(testcase2, "SP_PROT_FOCUS standalone-use test")
+	TESTCASE_CHILD(testcase3, "SP_PROT_FOCUS | SP_HUGEPAGE invalid-use test")
+	TESTCASE_CHILD(testcase4, "SP_PROT_FOCUS | SP_DVPP invalid-use test")
+	TESTCASE_CHILD(testcase5, "SP_PROT_FOCUS | SP_PROT_RO | SP_HUGEPAGE combined-use test, expected to succeed")
+	TESTCASE_CHILD(testcase6, "SP_PROT_FOCUS | SP_PROT_RO | SP_DVPP invalid-use test")
+	TESTCASE_CHILD(testcase7, "SP_RO area memory upper-limit test")
+	TESTCASE_CHILD(testcase8, "SP_RO area minimum allocation granularity test")
+	//TESTCASE_CHILD(testcase9, "multi-process SP_RO alloc/free concurrency stress test")
+};
+
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else
+ for (int i = 0; i < GROUP_NUM; i++)
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ return ret;
+}
+
+static int delete_multi_group()
+{
+ int ret = 0;
+ int fail = 0, suc = 0;
+ // delete from all groups
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_del_from_group(getpid(), group_ids[i]);
+ if (ret < 0) {
+ //pr_info("process %d delete from group %d failed, errno: %d", getpid(), group_ids[i], errno);
+ fail++;
+ }
+ else {
+ pr_info("process %d delete from group %d success", getpid(), group_ids[i]);
+ suc++;
+ }
+ }
+
+ return fail;
+}
+
+static int process()
+{
+ int ret = 0;
+ for (int j = 0; j < REPEAT_TIMES; j++) {
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = thread_and_process_helper(group_ids[i]);
+ if (ret < 0) {
+ pr_info("thread_and_process_helper failed");
+ return -1;
+ }
+ }
+ }
+
+ return ret;
+}
+
+static int try_del_from_group(int group_id)
+{
+	int ret = wrap_del_from_group(getpid(), group_id);
+
+	return ret < 0 ? -errno : 0;
+}
+
+void *thread_and_process_helper(int group_id)
+{
+ int ret = 0, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TYPE] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TYPE] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+	// hugepage
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// hugepage DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal page DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal page
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+			pr_info("ioctl alloc failed at alloc #%d.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+	if (ret < 0) {
+		pr_info("ioctl k2u hugepage error: %d\n", ret);
+		goto error;
+	}
+	if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+		pr_info("k2u hugepage return err is %ld.\n",
+			k2u_huge_info.addr);
+		goto error;
+	}
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+	if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+	    addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+		pr_info("check vmalloc memory failed\n");
+		goto error;
+	}
+
+ addr = (char *)k2u_huge_info.addr;
+	if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+	    addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+		pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+			addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+		goto error;
+	}
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ return 0;
+
+error:
+ return -1;
+}
+
+void *del_group_thread(void *arg)
+{
+	int ret = 0;
+	int i = (int)(long)arg;
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ pr_info("thread %d now tries to exit from group %d", getpid() + i + 1, default_id);
+ ret = wrap_del_from_group(getpid() + i + 1, default_id);
+ if (ret < 0)
+ pthread_exit((void *)-1);
+ pthread_exit((void *)0);
+}
+
+void *del_proc_from_group(void *arg)
+{
+ sem_dec_by_one(semid);
+	pthread_exit((void *)(long)wrap_del_from_group((int)(long)arg, default_id));
+}
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_two_user_process.c b/tools/testing/sharepool/testcase/function_test/test_two_user_process.c
new file mode 100644
index 000000000000..62d44eaf154f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_two_user_process.c
@@ -0,0 +1,626 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 18 06:46:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h> // exit
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+
+static jmp_buf testcase1_env;
+static int testcase1_sigsegv_result = -1;
+static void testcase1_sigsegv_handler(int num)
+{
+ pr_info("segment fault occurs");
+ testcase1_sigsegv_result = 0;
+ longjmp(testcase1_env, 1);
+}
+
+/*
+ * Test point 1: user-space process A joins a group and allocates memory; A then
+ * forks B and adds B to the group. After joining, B writes the memory and A
+ * reads it back successfully; once the memory is freed, a further write from B
+ * faults. (allocator reads; free; another process writes)
+ */
+static int testcase1_grandchild1(sem_t *sync, sem_t *grandsync, int group_id,
+ unsigned long addr, unsigned long size)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != group_id) {
+ pr_info("unexpected group_id: %d", ret);
+ sem_post(grandsync);
+ return -1;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, addr)) {
+ pr_info("invalid address");
+ sem_post(grandsync);
+ return -1;
+ }
+
+ memset((void *)addr, 'm', size);
+
+ sem_post(grandsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ testcase1_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase1_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase1_env);
+ if (!ret)
+ *(char *)addr = 'a';
+ if (testcase1_sigsegv_result) {
+ pr_info("sp_free has no effect");
+ ret = -1;
+ } else
+ ret = 0;
+
+ return ret;
+}
+
+static int testcase1_child1(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ char *sync_name = "/testcase1_sync1";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *grandsync_name = "/testcase1_grandsync1";
+ sem_t *grandsync = sem_open(grandsync_name, O_CREAT, O_RDWR, 0);
+	if (grandsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(grandsync_name);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_grandchild1(sync, grandsync, group_id, alloc_info->addr, alloc_info->size));
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ sem_post(sync);
+ ret = sem_wait(grandsync);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+ for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'm') {
+ pr_info("data check failed");
+ goto error_out;
+ }
+ }
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(sync);
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ return -1;
+
+	ret = ioctl_find_first_group(dev_fd, pid);
+	if (ret >= 0) {
+		pr_info("group lookup for exited child unexpectedly succeeded, ret: %d, errno: %d", ret, errno);
+		return -1;
+	}
+	return 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+
+ return ret;
+}
+
+/*
+ * Test point 2: user-space process A joins a group, forks B, and adds B to the
+ * group. A allocates memory N and writes to it; B reads N successfully and then
+ * frees it, after which a write from A faults.
+ * (allocator writes; another process reads and frees)
+ */
+#define tc1_MSG_KEY 20
+#define tc1_MSG_TYPE 100
+struct msgbuf_alloc_info {
+ long type;
+ struct sp_alloc_info alloc_info;
+};
+
+static int testcase1_grandchild2(sem_t *sync, sem_t *grandsync, int group_id)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != group_id) {
+ pr_info("unexpected group_id: %d", ret);
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+ for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'a') {
+ pr_info("data check failed");
+ goto error_out;
+ }
+ }
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(grandsync);
+
+ return 0;
+
+error_out:
+ sem_post(grandsync);
+ return -1;
+}
+
+static int testcase1_child2(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = ++(alloc_info->spg_id);
+
+ char *sync_name = "/testcase1_sync2";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *grandsync_name = "/testcase1_grandsync2";
+ sem_t *grandsync = sem_open(grandsync_name, O_CREAT, O_RDWR, 0);
+	if (grandsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(grandsync_name);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_grandchild2(sync, grandsync, group_id));
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, alloc_info, sizeof(*alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(*alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ memset((void *)alloc_info->addr, 'a', alloc_info->size);
+
+ sem_post(sync);
+ ret = sem_wait(grandsync);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ testcase1_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase1_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase1_env);
+ if (!ret)
+ *(char *)alloc_info->addr = 'a';
+ if (testcase1_sigsegv_result) {
+ pr_info("sp_free has no effect");
+ ret = -1;
+ goto error_out;
+ }
+
+ waitpid(pid, NULL, 0);
+	ret = ioctl_find_first_group(dev_fd, pid);
+	if (ret >= 0) {
+		pr_info("group lookup for exited child unexpectedly succeeded, ret: %d, errno: %d", ret, errno);
+		return -1;
+	}
+	return 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+
+ return ret;
+}
+
+/*
+ * Test point 3: user-space process A forks B and adds both B and A to a group.
+ * B allocates memory and writes to it; A reads it successfully, then the memory
+ * is freed and a further write from B faults.
+ * (another process allocates and writes; the owner reads; free)
+ */
+static int testcase1_grandchild3(sem_t *sync, sem_t *grandsync, struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != group_id) {
+ pr_info("unexpected group_id: %d", ret);
+ goto error_out;
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, alloc_info, sizeof(*alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(*alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ memset((void *)alloc_info->addr, 'x', alloc_info->size);
+
+ sem_post(grandsync);
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(grandsync);
+
+ return 0;
+
+error_out:
+ sem_post(grandsync);
+ return -1;
+}
+
+static int testcase1_child3(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ char *sync_name = "/testcase1_sync3";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *grandsync_name = "/testcase1_grandsync3";
+ sem_t *grandsync = sem_open(grandsync_name, O_CREAT, O_RDWR, 0);
+	if (grandsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(grandsync_name);
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_grandchild3(sync, grandsync, alloc_info));
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ sem_post(sync);
+ ret = sem_wait(grandsync);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+ struct msgbuf_alloc_info msgbuf = {0};
+ alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+ for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'x') {
+ pr_info("data check failed");
+ goto error_out;
+ }
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(grandsync);
+ } while (ret < 0 && errno == EINTR);
+
+ testcase1_sigsegv_result = -1;
+ struct sigaction sa = {0};
+ sa.sa_handler = testcase1_sigsegv_handler;
+ sigaction(SIGSEGV, &sa, NULL);
+ ret = setjmp(testcase1_env);
+ if (!ret)
+ *(char *)alloc_info->addr = 'a';
+ if (testcase1_sigsegv_result) {
+ pr_info("sp_free has no effect");
+ ret = -1;
+ goto error_out;
+ }
+
+ waitpid(pid, NULL, 0);
+	ret = ioctl_find_first_group(dev_fd, pid);
+	if (ret >= 0) {
+		pr_info("group lookup for exited child unexpectedly succeeded, ret: %d, errno: %d", ret, errno);
+		return -1;
+	}
+
+ return 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+
+ return ret;
+}
+
+static int testcase1(void)
+{
+ struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .spg_id = 10,
+ .size = 100 * PAGE_SIZE,
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .spg_id = 12,
+ .size = 10 * PMD_SIZE,
+ },
+ {
+ .flag = SP_DVPP,
+ .spg_id = 19,
+ .size = 100000,
+ },
+ {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = 19,
+ .size = 10000000,
+ },
+ };
+
+ int (*child_funcs[])(struct sp_alloc_info *) = {
+ testcase1_child1,
+ testcase1_child2,
+ testcase1_child3,
+ };
+
+ for (int j = 0; j < sizeof(child_funcs) / sizeof(child_funcs[0]); j++) {
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(child_funcs[j](alloc_infos + i));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase1 failed!!, i: %d, j: %d", i, j);
+ return -1;
+ }
+ }
+ }
+
+ pr_info("testcase1 success!!");
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "Test point 1: user-space process A joins a group and allocates memory; A forks B and adds B to the group. After joining, B writes the memory and A reads it back successfully; once the memory is freed, a further write from B faults.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/function_test/test_u2k.c b/tools/testing/sharepool/testcase/function_test/test_u2k.c
new file mode 100644
index 000000000000..c8d525db8929
--- /dev/null
+++ b/tools/testing/sharepool/testcase/function_test/test_u2k.c
@@ -0,0 +1,490 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Nov 21 02:21:35 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+
+#define tc1_MSG_KEY 20
+#define tc1_MSG_TYPE 100
+struct msgbuf_alloc_info {
+ long type;
+ union {
+ struct sp_alloc_info alloc_info;
+ struct sp_make_share_info share_info;
+ };
+};
+
+/*
+ * User-space process A joins a group, allocates memory N, writes it, and shares
+ * it to the kernel via u2k; the kernel module reads N successfully, and process
+ * B reads N successfully after joining the group. After A stops sharing N, the
+ * kernel module can no longer read N while B still can. Finally A frees N.
+ */
+static int grandchild1(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != alloc_info->spg_id) {
+ pr_info("unexpected group_id: %d", ret);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+	for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'z') {
+ pr_info("memory check failed");
+ goto error_out;
+ }
+ }
+
+ sem_post(childsync);
+ return 0;
+
+error_out:
+ sem_post(childsync);
+ return -1;
+}
+
+static int child1(pid_t pid, sem_t *sync, sem_t *childsync)
+{
+ int group_id = 10, ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = group_id,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ goto error_out;
+ }
+ memset((void *)alloc_info.addr, 'z', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'z',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("unshare u2k");
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ goto error_out;
+ }
+
+	/*
+	 * Touching the memory again after unshare crashes the kernel, so this
+	 * block is only enabled during manual testing.
+	 */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("recheck karea");
+ karea_info.mod = KAREA_SET;
+ karea_info.value = 'a';
+ karea_info.size = 1;
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+ pr_info("after karea set, the value is '%c%c'",
+ ((char *)alloc_info.addr)[0], ((char *)alloc_info.addr)[3]);
+#endif
+
+ int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, &alloc_info, sizeof(alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+
+ return !WIFEXITED(status) || WEXITSTATUS(status) ? -1 : 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+}
+
+static int grandchild2(sem_t *sync, sem_t *childsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(tc1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf1 = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf1.alloc_info;
+ ret = msgrcv(msgid, &msgbuf1, sizeof(*alloc_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf2 = {0};
+ struct sp_make_share_info *u2k_info = &msgbuf2.share_info;
+ ret = msgrcv(msgid, &msgbuf2, sizeof(*u2k_info), tc1_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ if (ret != alloc_info->spg_id) {
+ pr_info("unexpected group_id: %d", ret);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ char *buf = (char *)alloc_info->addr;
+	for (unsigned long i = 0; i < alloc_info->size; i++) {
+ if (buf[i] != 'k') {
+ pr_info("memory check failed");
+ goto error_out;
+ }
+ }
+
+ pr_info("unshare u2k");
+ ret = ioctl_unshare(dev_fd, u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ goto error_out;
+ }
+
+	/*
+	 * Touching the memory again after unshare crashes the kernel, so this
+	 * block is only enabled during manual testing.
+	 */
+#if 0
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'k',
+ .addr = u2k_info->addr,
+ .size = u2k_info->size,
+ };
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ pr_info("recheck karea");
+ karea_info.mod = KAREA_SET;
+ karea_info.value = 'a';
+ karea_info.size = 1;
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+ pr_info("after karea set, the value is '%c%c'",
+ ((char *)alloc_info->addr)[0], ((char *)alloc_info->addr)[3]);
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(childsync);
+ return 0;
+
+error_out:
+ sem_post(childsync);
+ return -1;
+}
+
+static int child2(pid_t pid, sem_t *sync, sem_t *childsync)
+{
+ int group_id = 10, ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = group_id,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ goto error_out;
+ }
+ memset((void *)alloc_info.addr, 'k', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'k',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ goto error_out;
+ }
+
+ int msgid = msgget(tc1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = tc1_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, &alloc_info, sizeof(alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ memcpy(&msgbuf.share_info, &u2k_info, sizeof(u2k_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(u2k_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ sem_post(sync);
+ do {
+ ret = sem_wait(childsync);
+ } while (ret < 0 && errno == EINTR);
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+
+ return !WIFEXITED(status) || WEXITSTATUS(status) ? -1 : 0;
+
+error_out:
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+}
+
+static int per_test_init(int i, sem_t **sync, sem_t **childsync)
+{
+ char buf[100];
+ sprintf(buf, "/test_u2k%d", i);
+ *sync = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (*sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+
+ sprintf(buf, "/test_u2k_child%d", i);
+ *childsync = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (*childsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+
+ return 0;
+}
+
+static int testcase(int i, int (*child)(pid_t, sem_t *, sem_t *), int (*grandchild)(sem_t *, sem_t *))
+{
+ sem_t *sync, *childsync;
+ if (per_test_init(i, &sync, &childsync))
+ return -1;
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed i=%d", i);
+ return -1;
+ } else if (pid == 0) {
+ exit(grandchild(sync, childsync));
+ }
+
+ return child(pid, sync, childsync);
+}
+
+static struct {
+ int (*child)(pid_t, sem_t *, sem_t *);
+ int (*grandchild)(sem_t *, sem_t *);
+} functions[] = {
+ {
+ .child = child1,
+ .grandchild = grandchild1,
+ },
+ {
+ .child = child2,
+ .grandchild = grandchild2,
+ },
+};
+
+static int testcase1(void) { return testcase(0, functions[0].child, functions[0].grandchild); }
+static int testcase2(void) { return testcase(1, functions[1].child, functions[1].grandchild); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "User-space process A joins a group, allocates and writes memory N, and shares it to the kernel via u2k; the kernel module reads N successfully, and process B reads N successfully after joining the group. After A stops sharing N, the kernel module can no longer read N while B still can. Finally N is freed.")
+	TESTCASE_CHILD(testcase2, "User-space process A joins a group, allocates and writes memory N, and shares it to the kernel via u2k; the kernel module reads N successfully, and process B reads N successfully after joining the group. After A stops sharing N, the kernel module can no longer read N while B still can. Finally N is freed.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/generate_list.sh b/tools/testing/sharepool/testcase/generate_list.sh
new file mode 100755
index 000000000000..caba42837a54
--- /dev/null
+++ b/tools/testing/sharepool/testcase/generate_list.sh
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+# From the top-level directory:
+#   collect the list files from each subdirectory,
+#   collect the TESTCASE comments from the .c files in each directory,
+#   and merge everything into a single list file.
+name=tc_list
+collect_comments()
+{
+ curdir=$1
+ echo $curdir
+
+ cd $curdir
+ rm -rf $name
+
+ subdirs=`ls -d */`
+
+# echo "" >> $name
+ echo "===============================" >> $name
+ echo $curdir >> $name
+ echo "===============================" >> $name
+
+ for dir in $subdirs
+ do
+ dir=${dir%*/}
+ local tmp_dir=$dir
+ collect_comments $dir
+ cat $tmp_dir/$name >> $name
+ echo "" >> $name
+ done
+
+ cfiles=`ls | grep '\.'c`
+ echo $cfiles
+
+ for cfile in $cfiles
+ do
+ echo $cfile >> $name
+ grep "TESTCASE" $cfile -r >> $name
+ echo "" >> $name
+ done
+
+ cd ..
+ echo "back to `pwd`"
+}
+
+collect_comments `pwd`
diff --git a/tools/testing/sharepool/testcase/performance_test/Makefile b/tools/testing/sharepool/testcase/performance_test/Makefile
new file mode 100644
index 000000000000..258bd6582414
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/Makefile
@@ -0,0 +1,17 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+testcases:=test_perf_sp_alloc \
+ test_perf_sp_k2u \
+ test_perf_sp_add_group \
+ test_perf_process_kill
+
+default: $(testcases)
+
+install: $(testcases) performance_test.sh
+ mkdir -p $(TOOL_BIN_DIR)/performance_test
+ cp $(testcases) $(TOOL_BIN_DIR)/performance_test
+ cp performance_test.sh $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/performance_test/performance_test.sh b/tools/testing/sharepool/testcase/performance_test/performance_test.sh
new file mode 100755
index 000000000000..d3944d7431e7
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/performance_test.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+
+./performance_test/test_perf_sp_alloc | grep result | awk -F ']' '{print $2}'
+./performance_test/test_perf_sp_k2u | grep result | awk -F ']' '{print $2}'
+./performance_test/test_perf_sp_add_group | grep result | awk -F ']' '{print $2}'
diff --git a/tools/testing/sharepool/testcase/performance_test/test_perf_process_kill.c b/tools/testing/sharepool/testcase/performance_test/test_perf_process_kill.c
new file mode 100644
index 000000000000..e6abbf8c2d1d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/test_perf_process_kill.c
@@ -0,0 +1,174 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 18 08:58:29 2021
+ */
+
+#include <stdlib.h>
+#include <time.h>
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define ALLOC_SIZE (6UL * 1024UL * 1024UL * 1024UL)
+#define CHILD_PROCESS_NUM 12
+
+static int semid;
+
+struct sp_alloc_memory_type {
+ unsigned long normal;
+ unsigned long huge;
+};
+
+struct sp_alloc_memory_type test_memory_types[] = {
+ {
+ .normal = 0,
+ .huge = ALLOC_SIZE,
+ },
+ {
+ .normal = ALLOC_SIZE * (1.0 / 6),
+ .huge = ALLOC_SIZE * (5.0 / 6),
+ },
+ {
+ .normal = ALLOC_SIZE * (3.0 / 6),
+ .huge = ALLOC_SIZE * (3.0 / 6),
+ },
+ {
+ .normal = ALLOC_SIZE * (5.0 / 6),
+ .huge = ALLOC_SIZE * (1.0 / 6),
+ },
+ {
+ .normal = ALLOC_SIZE,
+ .huge = 0,
+ },
+};
+
+static int testcase1_child(int test_index)
+{
+ int ret = 0;
+ int spg_id = test_index + 1;
+ unsigned long normal = test_memory_types[test_index].normal;
+ unsigned long huge = test_memory_types[test_index].huge;
+ time_t kill_start, now;
+
+ sem_set_value(semid, 0);
+
+	// create the group; 6G of memory is allocated in total, varying the small-page share
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add group failed.");
+ return ret;
+ }
+
+	// allocate the memory mix for this test-memory-type
+	pr_info("test-memory-type %d test started, allocating memory...", test_index);
+ unsigned long addr = 0;
+ if (normal != 0) {
+ addr = wrap_sp_alloc(spg_id, normal, 0);
+ if (addr == 0) {
+ pr_info("alloc normal memory failed.");
+ return -1;
+ }
+ }
+ if (huge != 0) {
+ addr = wrap_sp_alloc(spg_id, huge, SP_HUGEPAGE_ONLY);
+ if (addr == 0) {
+ pr_info("alloc huge memory failed.");
+ return -1;
+ }
+ }
+ pr_info("child %d alloc memory %lx normal, %lx huge success.", test_index, normal, huge);
+
+ // Add the 12 child processes to the group
+ pid_t child[CHILD_PROCESS_NUM];
+ sem_check_zero(semid);
+ for (int i = 0; i < CHILD_PROCESS_NUM; i++) {
+ int pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ sem_inc_by_one(semid);
+ while (1) {};
+ } else {
+ child[i] = pid;
+ }
+ }
+ pr_info("fork all child processes success.");
+
+ // ----- DO -----
+
+ // ----- END -----
+
+ // Kill all child processes concurrently
+ sem_dec_by_val(semid, CHILD_PROCESS_NUM);
+ pr_info("all child processes add group success.");
+ time(&now);
+ pr_info("time before kill signal sent is: %ld", (long)now);
+ for (int i = 0; i < CHILD_PROCESS_NUM; i++)
+ kill(child[i], SIGKILL);
+ time(&kill_start);
+ pr_info("time after kill signal sent is: %ld", (long)kill_start);
+
+ // Record how long it takes for each waitpid to complete
+ for (int i = 0; i < CHILD_PROCESS_NUM; i++) {
+ int status;
+ ret = waitpid(child[i], &status, 0);
+
+ time(&now);
+ pr_info("time when child %d exit is %ld, time taken is %ld seconds.", i, (long)now, (long)(now - kill_start));
+ if (ret < 0) {
+ pr_info("waitpid failed.");
+ }
+
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+ sem_check_zero(semid);
+ time(&now);
+ pr_info("time when all child processes exit is %ld, time taken is %ld seconds.", (long)now, (long)(now - kill_start));
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ semid = sem_create(1234, "process_sync");
+
+ for (int i = 0; i < sizeof(test_memory_types) / sizeof(test_memory_types[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork testcase1 child failed.");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_child(i));
+ }
+ int status;
+ waitpid(pid, &status, 0);
+ pr_info("test-memory-type %d test finished.", i);
+ }
+
+ sem_close(semid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, true)
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/performance_test/test_perf_sp_add_group.c b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_add_group.c
new file mode 100644
index 000000000000..c2882be7b0e1
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_add_group.c
@@ -0,0 +1,375 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Description:
+ * Author: Huawei OS Kernel Lab
+ * Create: Sat Mar 13 15:17:32 2021
+ */
+#define _GNU_SOURCE
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <signal.h> /* kill, SIGINT */
+#include <sys/sem.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <sys/sysinfo.h>
+#include <sched.h> /* sched_setaffinity */
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+#define NSEC2SEC 1000000000
+
+
+static int cpu_num;
+static volatile int thread_sync; /* volatile: spun on by the worker threads */
+
+struct test_perf {
+ int (*perf_start)(void *);
+ int (*perf_point)(void *);
+ int (*perf_end)(void *);
+
+ int count;
+ void *arg;
+ char *name;
+};
+
+struct test_params {
+ int spg_id;
+ unsigned int process_num;
+ /* Tiny allocations are meaningless for production; live deployments start at GB scale */
+ unsigned long mem_size_normal; /* unit: MB */
+ unsigned long mem_size_huge; /* unit: MB */
+};
+
+static int test_perf_child(struct test_perf *test_perf)
+{
+ int ret = 0;
+ long dur;
+ struct timespec ts_start, ts_end;
+
+ if (!test_perf->perf_point) {
+ pr_info("you must supply a perf_point routine");
+ return -1;
+ }
+
+ if (test_perf->perf_start) {
+ if (test_perf->perf_start(test_perf->arg)) {
+ pr_info("testcase init failed");
+ ret = -1;
+ goto end;
+ }
+ }
+
+ pr_info(">> testcase %s start <<", test_perf->name);
+ clock_gettime(CLOCK_MONOTONIC, &ts_start);
+ if (test_perf->perf_point(test_perf->arg)) {
+ pr_info("testcase point failed");
+ ret = -1;
+ goto end;
+ }
+ clock_gettime(CLOCK_MONOTONIC, &ts_end);
+
+ dur = (ts_end.tv_sec - ts_start.tv_sec) * NSEC2SEC + (ts_end.tv_nsec - ts_start.tv_nsec);
+
+end:
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+
+ if (!ret) {
+ pr_info("%50s result: %10ld", test_perf->name, dur);
+ return 0;
+ } else {
+ return ret;
+ }
+}
+
+static int test_perf_routing(struct test_perf *test_perf)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(test_perf_child(test_perf));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase %s failed", test_perf->name);
+ return -1;
+ }
+
+ return 0;
+}
+
+#define MAX_CHILD_NR 100
+static pid_t childs[MAX_CHILD_NR];
+
+static int sp_add_group_start(void *arg)
+{
+ cpu_set_t mask;
+ struct test_params *params = arg;
+ struct sp_alloc_info alloc_info;
+ unsigned long i, times;
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("parent process sched_setaffinity failed, errno: %d", errno);
+ return -1;
+ }
+ cpu_num = get_nprocs();
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = params->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ /* allocation begins */
+ alloc_info.spg_id = params->spg_id;
+ alloc_info.flag = 0;
+ alloc_info.size = 16 * PAGE_SIZE;
+ times = params->mem_size_normal * 16; /* MB to number of 16-page (64KB) allocations */
+ for (i = 0; i < times; i++) {
+ if (ioctl_alloc(dev_fd, &alloc_info)) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ alloc_info.flag = SP_HUGEPAGE;
+ alloc_info.size = 2 * PMD_SIZE;
+ times = params->mem_size_huge / 4; /* MB to number of 2-hugepage (4MB) allocations */
+ for (i = 0; i < times; i++) {
+ if (ioctl_alloc(dev_fd, &alloc_info)) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+ /* end of allocation */
+
+ int semid = semget(0xabcd9116, 1, IPC_CREAT | 0644);
+ if (semid < 0) {
+ pr_info("open System V semaphore failed: %s", strerror(errno));
+ return -1;
+ }
+ int ret = semctl(semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto sem_remove;
+ }
+
+ for (i = 0; i < params->process_num; i++) {
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = 1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+ CPU_ZERO(&mask);
+ CPU_SET(i % cpu_num, &mask);
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("child process %d sched_setaffinity failed, errno: %d", i, errno);
+ exit(-1);
+ }
+
+ sleep(3600);
+ exit(0); /* never fall back into the parent's fork loop */
+ }
+
+ childs[i] = pid;
+ }
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -params->process_num,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+sem_remove:
+ if (semctl(semid, IPC_RMID, NULL) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return 0;
+}
+
+static int sp_add_group_point(void *arg)
+{
+ struct test_params *params = arg;
+ struct sp_add_group_info ag_info = {
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = params->spg_id,
+ };
+
+ for (int i = 0; i < params->process_num; i++) {
+ ag_info.pid = childs[i];
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+void *thread_add_group(void *arg)
+{
+ struct sp_add_group_info *ag_info = arg;
+ ag_info->prot = PROT_READ | PROT_WRITE;
+
+ __sync_fetch_and_add(&thread_sync, 1);
+ __sync_synchronize();
+ while (1) {
+ if (thread_sync == 0) {
+ break;
+ }
+ }
+
+ if (ioctl_add_group(dev_fd, ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ pthread_exit((void *)1);
+ }
+ pthread_exit((void *)0);
+}
+
+static int sp_add_group_concurrent_point(void *arg)
+{
+ struct test_params *params = arg;
+ struct sp_add_group_info ag_info[MAX_CHILD_NR];
+ pthread_t tid[MAX_CHILD_NR];
+ cpu_set_t mask;
+ int i, ret;
+ void *tret;
+
+ thread_sync = -params->process_num;
+
+ for (i = 0; i < params->process_num; i++) {
+ ag_info[i].spg_id = params->spg_id;
+ ag_info[i].pid = childs[i];
+ ret = pthread_create(tid + i, NULL, thread_add_group, ag_info + i);
+ if (ret != 0) {
+ pr_info("create thread %d error\n", i);
+ return -1;
+ }
+ CPU_ZERO(&mask);
+ CPU_SET(i % cpu_num, &mask);
+ if (pthread_setaffinity_np(tid[i], sizeof(cpu_set_t), &mask) != 0) {
+ pr_info("set thread %d affinity failed, errno: %d", i, errno);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < params->process_num; i++) {
+ ret = pthread_join(tid[i], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", i);
+ return -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", i);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int sp_add_group_end(void *arg)
+{
+ for (int i = 0; i < MAX_CHILD_NR && childs[i]; i++) {
+ kill(childs[i], SIGINT);
+ }
+ return 0;
+}
+
+static struct test_params params[] = {
+ {
+ .spg_id = 1,
+ .process_num = 30,
+ .mem_size_normal = 32,
+ .mem_size_huge = 32,
+ },
+ {
+ .spg_id = 1,
+ .process_num = 30,
+ .mem_size_normal = 1024 * 3.5,
+ .mem_size_huge = 1024 * 5,
+ },
+};
+
+static struct test_perf testcases[] = {
+ {
+ .perf_start = sp_add_group_start,
+ .perf_point = sp_add_group_point,
+ .perf_end = sp_add_group_end,
+ .name = "sp_add_group_P30_N0G_H0G",
+ .arg = &params[0],
+ },
+ {
+ .perf_start = sp_add_group_start,
+ .perf_point = sp_add_group_concurrent_point,
+ .perf_end = sp_add_group_end,
+ .name = "sp_add_group_C_P4_N1G_H1G",
+ .arg = &params[0],
+ },
+ {
+ .perf_start = sp_add_group_start,
+ .perf_point = sp_add_group_point,
+ .perf_end = sp_add_group_end,
+ .name = "sp_add_group_P30_N3.5G_H5G",
+ .arg = &params[1],
+ },
+ {
+ .perf_start = sp_add_group_start,
+ .perf_point = sp_add_group_concurrent_point,
+ .perf_end = sp_add_group_end,
+ .name = "sp_add_group_C_P30_N3.5G_H5G",
+ .arg = &params[1],
+ },
+};
+
+#define STRLENGTH 500
+static char filename[STRLENGTH];
+
+int main()
+{
+ int ret = 0;
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ int passed = 0, failed = 0;
+
+ for (int i = 0; i < sizeof(testcases) / sizeof(testcases[0]); i++) {
+ ret = test_perf_routing(&testcases[i]);
+ ret == 0 ? passed++ : failed++;
+ }
+
+ close_device(dev_fd);
+
+ pr_info("----------------------------");
+ printf("%s All %d testcases finished, passing: %d, failing: %d\n", extract_filename(filename, __FILE__), passed + failed, passed, failed);
+ printf("-------------------------\n");
+
+ return failed == 0 ? 0 : -1;
+}
diff --git a/tools/testing/sharepool/testcase/performance_test/test_perf_sp_alloc.c b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_alloc.c
new file mode 100644
index 000000000000..1f61dd47719b
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_alloc.c
@@ -0,0 +1,618 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 18 08:58:29 2021
+ */
+
+#define _GNU_SOURCE
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <signal.h> /* kill, SIGKILL */
+#include <sys/sem.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <sys/sysinfo.h>
+#include <sched.h> /* sched_setaffinity */
+
+#include "sharepool_lib.h"
+
+#define NSEC2SEC 1000000000
+
+
+static int nr_child_process = 8;
+
+struct test_perf {
+ int (*perf_start)(void *);
+ int (*perf_point)(void *);
+ int (*perf_end)(void *);
+
+ int count;
+ void *arg;
+ char *name;
+};
+
+static int test_perf_child(struct test_perf *test_perf)
+{
+ long sum, max, min, dur;
+ struct timespec ts_start, ts_end;
+
+ sum = max = 0;
+ min = ((unsigned long)-1) >> 1;
+
+ if (!test_perf->perf_point) {
+ pr_info("you must supply a perf_point routine");
+ return -1;
+ }
+
+ if (test_perf->perf_start) {
+ if (test_perf->perf_start(test_perf->arg)) {
+ pr_info("testcase init failed");
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+ return -1;
+ }
+ }
+
+ pr_info(">> testcase %s start <<", test_perf->name);
+ for (int i = 0; i < test_perf->count; i++) {
+ clock_gettime(CLOCK_MONOTONIC, &ts_start);
+ if (test_perf->perf_point(test_perf->arg)) {
+ pr_info("testcase point failed, i: %d", i);
+ return -1;
+ }
+ clock_gettime(CLOCK_MONOTONIC, &ts_end);
+
+ dur = (ts_end.tv_sec - ts_start.tv_sec) * NSEC2SEC + (ts_end.tv_nsec - ts_start.tv_nsec);
+ sum += dur;
+ max = max > dur ? max : dur;
+ min = min < dur ? min : dur;
+ }
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+
+ pr_info("%50s result: avg: %10ld, max: %10ld, min: %10ld", test_perf->name, sum / test_perf->count, max, min);
+
+ return 0;
+}
+
+static int test_perf_routing(struct test_perf *test_perf)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(test_perf_child(test_perf));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase %s failed", test_perf->name);
+ return -1;
+ } else {
+ pr_info("testcase %s success", test_perf->name);
+ }
+
+ return 0;
+}
+
+static int sp_alloc_start(void *arg)
+{
+ struct sp_alloc_info *alloc_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = alloc_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int sp_alloc_point(void *arg)
+{
+ struct sp_alloc_info *alloc_info = arg;
+
+ if (ioctl_alloc(dev_fd, alloc_info)) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int sp_alloc_and_free_point(void *arg)
+{
+ struct sp_alloc_info *alloc_info = arg;
+
+ if (ioctl_alloc(dev_fd, alloc_info)) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ if (ioctl_free(dev_fd, alloc_info)) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+#define MAX_CHILD_NR 100
+static pid_t childs[MAX_CHILD_NR];
+
+/*
+ * Fork N processes and add them to the group; afterwards only the parent
+ * allocates memory. Each child is expected to be slower because of the
+ * cost of building its own page tables.
+ */
+static int sp_alloc_mult_process_start(void *arg)
+{
+ cpu_set_t mask;
+ struct sp_alloc_info *alloc_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = alloc_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("parent process sched_setaffinity failed, errno: %d", errno);
+ return -1;
+ }
+
+ for (int i = 0; i < nr_child_process; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ CPU_ZERO(&mask);
+ int cpu_count = get_nprocs();
+ CPU_SET((i + 1) % cpu_count, &mask);
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("child process %d sched_setaffinity failed, errno: %d", i, errno);
+ exit(-1);
+ }
+
+ while (1) {};
+ }
+
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+/* fork 7 child processes */
+static int sp_alloc_mult_process_start_7(void *arg)
+{
+ nr_child_process = 7;
+ return sp_alloc_mult_process_start(arg);
+}
+
+/* fork 15 child processes */
+static int sp_alloc_mult_process_start_15(void *arg)
+{
+ nr_child_process = 15;
+ return sp_alloc_mult_process_start(arg);
+}
+
+static int sp_alloc_end(void *arg)
+{
+ for (int i = 0; i < MAX_CHILD_NR && childs[i]; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ childs[i] = 0;
+ }
+
+ return 0;
+}
+
+// Fork N processes and add them to the group; the children allocate memory too
+static int sp_alloc_mult_alloc_start(void *arg)
+{
+ cpu_set_t mask;
+ struct sp_alloc_info *alloc_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = alloc_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("parent process sched_setaffinity failed, errno: %d\n", errno);
+ return -1;
+ }
+
+ int semid = semget(0xabcd996, 1, IPC_CREAT | 0644);
+ if (semid < 0) {
+ pr_info("open System V semaphore failed: %s", strerror(errno));
+ return -1;
+ }
+ int ret = semctl(semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto sem_remove;
+ }
+
+ for (int i = 0; i < nr_child_process; i++) {
+ struct timespec delay;
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+ CPU_ZERO(&mask);
+ int cpu_count = get_nprocs();
+ CPU_SET((i + 1) % cpu_count, &mask);
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("child process %d sched_setaffinity failed, errno: %d", i, errno);
+ exit(-1);
+ }
+
+ delay.tv_sec = 0;
+ delay.tv_nsec = 3000000; /* 3ms */
+
+ while (1) {
+ sp_alloc_and_free_point(alloc_info);
+ nanosleep(&delay, NULL);
+ }
+ }
+
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = nr_child_process,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+sem_remove:
+ if (semctl(semid, IPC_RMID, NULL) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return 0;
+}
+
+/* fork 7 child processes */
+static int sp_alloc_mult_alloc_start_7(void *arg)
+{
+ nr_child_process = 7;
+ return sp_alloc_mult_alloc_start(arg);
+}
+
+/* fork 15 child processes */
+static int sp_alloc_mult_alloc_start_15(void *arg)
+{
+ nr_child_process = 15;
+ return sp_alloc_mult_alloc_start(arg);
+}
+
+static struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .size = 2 * PAGE_SIZE, // 8K
+ .spg_id = 1,
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .size = 2 * PMD_SIZE, // 4M
+ .spg_id = 1,
+ },
+ {
+ .flag = 0,
+ .size = 1024 * PAGE_SIZE, // 4M
+ .spg_id = 1,
+ },
+ {
+ .flag = 0,
+ .size = 512 * PAGE_SIZE, // 2M
+ .spg_id = 1,
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .size = PMD_SIZE, // 2M
+ .spg_id = 1,
+ },
+};
+
+static struct test_perf testcases[] = {
+ /* single process */
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ /*
+ * If we only sp_alloc without sp_free, hugepage allocation gets
+ * slower and slower as memory consumption grows
+ */
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+ .count = 1000, // 4G of hugepages in total, expected to be slow
+ .name = "sp_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_1024_pages",
+ .arg = &alloc_infos[2],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_1024_pages",
+ .arg = &alloc_infos[2],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_512_pages",
+ .arg = &alloc_infos[3],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_512_pages",
+ .arg = &alloc_infos[3],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_point,
+ .perf_end = NULL,
+ .count = 1000, // 2G of hugepages in total, expected to be slow
+ .name = "sp_alloc_1_hugepage",
+ .arg = &alloc_infos[4],
+ },
+ {
+ .perf_start = sp_alloc_start,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = NULL,
+ .count = 1000,
+ .name = "sp_alloc_and_free_1_hugepage",
+ .arg = &alloc_infos[4],
+ },
+ /* parent allocates; children only populate their page tables */
+ /* 8 processes in total */
+ {
+ .perf_start = sp_alloc_mult_process_start_7,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_7_proc_populate_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_7,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_7_proc_populate_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_7,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_7_proc_populate_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_7,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_7_proc_populate_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ /* 16 processes */
+ {
+ .perf_start = sp_alloc_mult_process_start_15,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_15_proc_populate_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_15,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_15_proc_populate_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_15,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_15_proc_populate_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_process_start_15,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_15_proc_populate_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ /* parent and children all join the group and allocate memory */
+ /* 8 processes */
+ {
+ .perf_start = sp_alloc_mult_alloc_start_7,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_7_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_7,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_7_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_7,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_7_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_7,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_7_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ /* 16 processes */
+ {
+ .perf_start = sp_alloc_mult_alloc_start_15,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_15_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_15,
+ .perf_point = sp_alloc_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_mult_15_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_15,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_15_alloc_2_pages",
+ .arg = &alloc_infos[0],
+ },
+ {
+ .perf_start = sp_alloc_mult_alloc_start_15,
+ .perf_point = sp_alloc_and_free_point,
+ .perf_end = sp_alloc_end,
+ .count = 1000,
+ .name = "sp_alloc_and_free_mult_15_alloc_2_hugepage",
+ .arg = &alloc_infos[1],
+ },
+};
+
+
+#define STRLENGTH 500
+static char filename[STRLENGTH];
+
+int main()
+{
+ int ret = 0;
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ int passed = 0, failed = 0;
+
+ for (int i = 0; i < sizeof(testcases) / sizeof(testcases[0]); i++) {
+ ret = test_perf_routing(&testcases[i]);
+ ret == 0 ? passed++ : failed++;
+ }
+
+ close_device(dev_fd);
+
+ pr_info("----------------------------");
+ printf("%s All %d testcases finished, passing: %d, failing: %d\n", extract_filename(filename, __FILE__), passed + failed, passed, failed);
+ printf("-------------------------\n");
+
+ return failed == 0 ? 0 : -1;
+}
diff --git a/tools/testing/sharepool/testcase/performance_test/test_perf_sp_k2u.c b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_k2u.c
new file mode 100644
index 000000000000..21b7ae4d97e0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/performance_test/test_perf_sp_k2u.c
@@ -0,0 +1,860 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Jan 18 08:58:29 2021
+ */
+
+#define _GNU_SOURCE
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <signal.h> /* kill, SIGKILL */
+#include <sys/sem.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <sched.h> /* sched_setaffinity */
+#include <sys/sysinfo.h>
+
+#include "sharepool_lib.h"
+
+#define NSEC2SEC 1000000000
+
+
+static int nr_child_process = 8;
+static unsigned long vm_addr;
+
+struct test_perf {
+ int (*perf_start)(void *);
+ int (*perf_point)(void *);
+ int (*perf_end)(void *);
+
+ int count;
+ void *arg;
+ char *name;
+};
+
+static int test_perf_child(struct test_perf *test_perf)
+{
+ long sum, max, min, dur;
+ struct timespec ts_start, ts_end;
+
+ sum = max = 0;
+ min = ((unsigned long)-1) >> 1;
+
+ if (!test_perf->perf_point) {
+ pr_info("you must supply a perf_point routine");
+ return -1;
+ }
+
+ if (test_perf->perf_start) {
+ if (test_perf->perf_start(test_perf->arg)) {
+ pr_info("testcase init failed");
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+ return -1;
+ }
+ }
+
+ //pr_info(">> testcase %s start <<", test_perf->name);
+ for (int i = 0; i < test_perf->count; i++) {
+ pr_info(">> testcase %s %dth time begins, %d times left. <<",
+ test_perf->name, i + 1, test_perf->count - (i + 1));
+ clock_gettime(CLOCK_MONOTONIC, &ts_start);
+ if (test_perf->perf_point(test_perf->arg)) {
+ pr_info("testcase %s %dth point failed.", test_perf->name, i + 1);
+ return -1;
+ }
+ clock_gettime(CLOCK_MONOTONIC, &ts_end);
+
+ dur = (ts_end.tv_sec - ts_start.tv_sec) * NSEC2SEC + (ts_end.tv_nsec - ts_start.tv_nsec);
+ sum += dur;
+ max = max > dur ? max : dur;
+ min = min < dur ? min : dur;
+ }
+
+ if (test_perf->perf_end) {
+ if (test_perf->perf_end(test_perf->arg)) {
+ pr_info("testcase exit failed");
+ return -1;
+ }
+ }
+
+ pr_info("%50s result: avg: %10ld, max: %10ld, min: %10ld", test_perf->name, sum / test_perf->count, max, min);
+
+ return 0;
+}
+
+static int test_perf_routing(struct test_perf *test_perf)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(test_perf_child(test_perf));
+ }
+
+ int status = 0;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ return -1;
+ }
+
+ return 0;
+}
+
+static int vmalloc_and_store(unsigned long size)
+{
+ struct vmalloc_info vmalloc_info = {
+ .size = size,
+ };
+
+ if (ioctl_vmalloc(dev_fd, &vmalloc_info)) {
+ pr_info("vmalloc small page failed, errno: %d", errno);
+ return -1;
+ } else {
+ pr_info("vmalloc success: %lx", vmalloc_info.addr);
+ pr_info("vm_addr before: %lx", vm_addr);
+ vm_addr = vmalloc_info.addr;
+ pr_info("vm_addr after: %lx", vm_addr);
+ }
+
+ return 0;
+}
+
+static int vmalloc_hugepage_and_store(unsigned long size)
+{
+ struct vmalloc_info vmalloc_info = {
+ .size = size,
+ };
+
+ if (ioctl_vmalloc_hugepage(dev_fd, &vmalloc_info)) {
+ pr_info("vmalloc hugepage failed, errno: %d", errno);
+ return -1;
+ } else {
+ pr_info("vmalloc success: %lx", vmalloc_info.addr);
+ pr_info("vm_addr before: %lx", vm_addr);
+ vm_addr = vmalloc_info.addr;
+ pr_info("vm_addr after: %lx", vm_addr);
+ }
+
+ return 0;
+}
+
+static int vfree(unsigned long size)
+{
+ struct vmalloc_info vmalloc_info = {
+ .addr = vm_addr,
+ .size = size,
+ };
+ pr_info("gonna vfree address: %lx", vm_addr);
+ if (ioctl_vfree(dev_fd, &vmalloc_info)) {
+ pr_info("vfree failed, errno: %d", errno);
+ return -1;
+ } else {
+ pr_info("vfree success.");
+ return 0;
+ }
+}
+
+static int sp_k2u_start(void *arg)
+{
+ pid_t pid = getpid();
+
+ struct sp_make_share_info *k2u_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = k2u_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ if (vmalloc_and_store(k2u_info->size))
+ return -1;
+
+ k2u_info->pid = pid;
+ k2u_info->kva = vm_addr;
+
+ return 0;
+}
+
+static int sp_k2u_huge_start(void *arg)
+{
+ pid_t pid = getpid();
+
+ struct sp_make_share_info *k2u_info = arg;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = k2u_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ if (vmalloc_hugepage_and_store(k2u_info->size))
+ return -1;
+
+ k2u_info->pid = pid;
+ k2u_info->kva = vm_addr;
+ return 0;
+}
+
+/*
+ * This cannot fully reclaim the uva shared by k2u, so the later vfree
+ * reports an error. Hence this function is not called and, for now, the
+ * testcase is assumed to leak memory.
+ */
+static int sp_k2u_end(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ if (vfree(k2u_info->size) < 0)
+ return -1;
+
+ return 0;
+}
+
+static int sp_k2u_point(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ if (ioctl_k2u(dev_fd, k2u_info)) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int sp_k2u_and_unshare_point(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ if (ioctl_k2u(dev_fd, k2u_info)) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ return -1;
+ }
+
+ if (ioctl_unshare(dev_fd, k2u_info)) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return -1;
+ }
+
+ return 0;
+}
+
+#define MAX_CHILD_NR 100
+static pid_t childs[MAX_CHILD_NR];
+
+static int sp_k2u_mult_end(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+ for (int i = 0; i < MAX_CHILD_NR && childs[i]; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ childs[i] = 0;
+ }
+
+ if (vfree(k2u_info->size) < 0)
+ return -1;
+
+ return 0;
+}
+
+/* Fork N processes and add them to the group; the children spin in a busy loop */
+static int sp_k2u_1_vs_mult_start(void *arg)
+{
+ cpu_set_t mask;
+ pid_t pid = getpid();
+ struct sp_make_share_info *k2u_info = arg;
+ k2u_info->pid = pid;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = k2u_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ } else {
+ pr_info("test parent process %d add group %d success.", getpid(), ag_info.spg_id);
+ }
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("parent process sched_setaffinity failed, errno: %d", errno);
+ return -1;
+ }
+
+ int cpu_count = get_nprocs();
+ pr_info("cpu count is %d", cpu_count);
+ int i;
+ for (i = 0; i < nr_child_process; i++) {
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ CPU_ZERO(&mask);
+ CPU_SET((i + 1) % cpu_count, &mask);
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("set %dth child process %d sched_setaffinity failed, errno: %d", i, getpid(), errno);
+ exit(-1);
+ } else {
+ pr_info("set %dth child process %d sched_setaffinity success", i, getpid());
+ }
+ while (1) {};
+ exit(0);
+ }
+
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("add %dth child process %d to group failed, errno: %d", i, pid, errno);
+ return -1;
+		} else {
+ pr_info("add %dth child process %d to group success", i, pid);
+ }
+ }
+
+ return 0;
+/*
+error:
+	//reap the child processes; if they are not reaped here, the end callback must reap them
+ while (--i >= 0) {
+ kill(childs[i], SIGKILL);
+ int status;
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child process %d ended unexpected.", childs[i]);
+ }
+ childs[i] = 0;
+ }
+ return ret;
+*/
+}
+
+/* Create 7 sleeping processes concurrently, small pages */
+static int sp_k2u_1_vs_mult_start_7(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 7;
+ return sp_k2u_1_vs_mult_start(arg);
+}
+
+/* Create 7 sleeping processes concurrently, huge pages */
+static int sp_k2u_huge_1_vs_mult_start_7(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_hugepage_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 7;
+ return sp_k2u_1_vs_mult_start(arg);
+}
+
+/* Create 15 sleeping processes concurrently, small pages */
+static int sp_k2u_1_vs_mult_start_15(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 15;
+ return sp_k2u_1_vs_mult_start(arg);
+}
+
+/* Create 15 sleeping processes concurrently, huge pages */
+static int sp_k2u_huge_1_vs_mult_start_15(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_hugepage_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 15;
+ return sp_k2u_1_vs_mult_start(arg);
+}
+/*
+ * Create N processes concurrently and add them to the group; afterwards only the
+ * parent and the children concurrently do sp_k2u and unshare (if applicable).
+ */
+static int sp_k2u_mult_start(void *arg)
+{
+ cpu_set_t mask;
+ pid_t pid = getpid();
+ struct sp_make_share_info *k2u_info = arg;
+ k2u_info->pid = pid;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = k2u_info->spg_id,
+ };
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ CPU_ZERO(&mask);
+ CPU_SET(0, &mask); /* parent process runs on CPU0 */
+ int cpu_count = get_nprocs();
+
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("parent process sched_setaffinity failed, errno: %d", errno);
+ return -1;
+ }
+
+ int semid = semget(0xabcd996, 1, IPC_CREAT | 0644);
+ if (semid < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ return -1;
+ }
+ int ret = semctl(semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto sem_remove;
+ }
+
+ for (int i = 0; i < nr_child_process; i++) {
+ struct timespec delay;
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+ CPU_ZERO(&mask);
+ CPU_SET((i + 1) % cpu_count, &mask);
+ if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) == -1) {
+ pr_info("child process %d sched_setaffinity failed, errno: %d", i, errno);
+				exit(-1);
+ }
+
+ delay.tv_sec = 0;
+ delay.tv_nsec = 300000; /* 300us */
+
+ while (1) {
+ sp_k2u_and_unshare_point(k2u_info);
+ nanosleep(&delay, NULL);
+ }
+ }
+
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ if (ioctl_add_group(dev_fd, &ag_info)) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = nr_child_process,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+sem_remove:
+	if (semctl(semid, 0, IPC_RMID) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return 0;
+}
+
+/* Create 7 processes concurrently, small pages */
+static int sp_k2u_mult_start_7(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 7;
+ return sp_k2u_mult_start(arg);
+}
+
+/* Create 7 processes concurrently, huge pages */
+static int sp_k2u_huge_mult_start_7(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_hugepage_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 7;
+ return sp_k2u_mult_start(arg);
+}
+
+/* Create 15 processes concurrently, small pages */
+static int sp_k2u_mult_start_15(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 15;
+ return sp_k2u_mult_start(arg);
+}
+
+/* Create 15 processes concurrently, huge pages */
+static int sp_k2u_huge_mult_start_15(void *arg)
+{
+ struct sp_make_share_info *k2u_info = arg;
+
+ vmalloc_hugepage_and_store(k2u_info->size);
+ k2u_info->kva = vm_addr;
+
+ nr_child_process = 15;
+ return sp_k2u_mult_start(arg);
+}
+
+static struct sp_make_share_info k2u_infos[] = {
+	/* One array element per test case, so that cases modifying kva and pid cannot interfere with each other */
+	/* single process */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE, /* =vmalloc size */
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+	/* single-process k2u, the other processes in the group sleep */
+	/* 8 processes */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+	/* 16 processes */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+	/* single-process k2u, the other processes in the group also do k2u */
+	/* 8 processes */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+	/* 16 processes */
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PAGE_SIZE,
+ .spg_id = 1,
+ },
+ {
+ .sp_flags = 0,
+ .size = 2 * PMD_SIZE,
+ .spg_id = 1,
+ },
+};
+
+static struct test_perf testcases[] = {
+	/* single process */
+ {
+ .perf_start = sp_k2u_start,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_end,
+ .count = 1000,
+ .name = "sp_k2u_2_pages",
+ .arg = &k2u_infos[0],
+ },
+ {
+ .perf_start = sp_k2u_huge_start,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_end,
+ .count = 1000,
+ .name = "sp_k2u_2_hugepage",
+ .arg = &k2u_infos[1],
+ },
+ {
+ .perf_start = sp_k2u_start,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_2_pages",
+ .arg = &k2u_infos[2],
+ },
+ {
+ .perf_start = sp_k2u_huge_start,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_2_hugepage",
+ .arg = &k2u_infos[3],
+ },
+	/* the parent allocates, the children only build page tables */
+	/* 8 processes in total */
+ {
+ .perf_start = sp_k2u_1_vs_mult_start_7,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_1_vs_mult7_proc2_pages",
+ .arg = &k2u_infos[4],
+ },
+ {
+ .perf_start = sp_k2u_huge_1_vs_mult_start_7,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_1_vs_mult7_proc2_hugepages",
+ .arg = &k2u_infos[5],
+ },
+ {
+ .perf_start = sp_k2u_1_vs_mult_start_7,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_1_vs_mult7_proc2_pages",
+ .arg = &k2u_infos[6],
+ },
+ {
+ .perf_start = sp_k2u_huge_1_vs_mult_start_7,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_1_vs_mult7_proc2_hugepages",
+ .arg = &k2u_infos[7],
+ },
+	/* 16 processes in total */
+ {
+ .perf_start = sp_k2u_1_vs_mult_start_15,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_1_vs_mult15_proc2_pages",
+ .arg = &k2u_infos[8],
+ },
+ {
+ .perf_start = sp_k2u_huge_1_vs_mult_start_15,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_1_vs_mult15_proc2_hugepages",
+ .arg = &k2u_infos[9],
+ },
+ {
+ .perf_start = sp_k2u_1_vs_mult_start_15,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_1_vs_mult15_proc2_pages", // failed.
+ .arg = &k2u_infos[10],
+ },
+ {
+ .perf_start = sp_k2u_huge_1_vs_mult_start_15,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_1_vs_mult15_proc2_hugepages",
+ .arg = &k2u_infos[11],
+ },
+	/* parent and children all join the group and allocate memory */
+	/* 8 processes */
+ {
+ .perf_start = sp_k2u_mult_start_7,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_mult7_2_pages",
+ .arg = &k2u_infos[12],
+ },
+ {
+ .perf_start = sp_k2u_huge_mult_start_7,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_mult7_2_hugepage",
+ .arg = &k2u_infos[13],
+ },
+ {
+ .perf_start = sp_k2u_mult_start_7,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_mult7_2_pages",
+ .arg = &k2u_infos[14],
+ },
+ {
+ .perf_start = sp_k2u_huge_mult_start_7,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_mult7_2_hugepage",
+ .arg = &k2u_infos[15],
+ },
+	/* 16 processes */
+ {
+ .perf_start = sp_k2u_mult_start_15,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_mult15_2_pages",
+ .arg = &k2u_infos[16],
+ },
+ {
+ .perf_start = sp_k2u_huge_mult_start_15,
+ .perf_point = sp_k2u_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_mult15_2_hugepage",
+ .arg = &k2u_infos[17],
+ },
+ {
+ .perf_start = sp_k2u_mult_start_15,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_mult15_2_pages",
+ .arg = &k2u_infos[18],
+ },
+ {
+ .perf_start = sp_k2u_huge_mult_start_15,
+ .perf_point = sp_k2u_and_unshare_point,
+ .perf_end = sp_k2u_mult_end,
+ .count = 1000,
+ .name = "sp_k2u_and_unshare_mult15_2_hugepage",
+ .arg = &k2u_infos[19],
+ },
+};
+
+#define STRLENGTH 500
+static char filename[STRLENGTH];
+
+int main(int argc, char *argv[])
+{
+ int ret = 0;
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ int passed = 0, failed = 0;
+
+ if (argc == 1) {
+ for (int i = 0; i < sizeof(testcases) / sizeof(testcases[0]); i++) {
+			printf(">>>> start testcase%d: %s\n", i + 1, testcases[i].name);
+			ret = test_perf_routing(&testcases[i]);
+			ret == 0 ? passed++ : failed++;
+			printf("<<<< end testcase%d: %s, result: %s\n", i + 1, testcases[i].name, ret != 0 ? "failed" : "passed");
+ }
+ pr_info("----------------------------");
+ printf("%s All %d testcases finished, passing: %d, failing: %d", extract_filename(filename, __FILE__), passed + failed, passed, failed);
+ printf("-------------------------\n");
+ } else {
+ int testnum = atoi(argv[1]);
+		printf(">>>> start testcase%d: %s\n", testnum, testcases[testnum - 1].name);
+ ret = test_perf_routing(&testcases[testnum - 1]);
+ printf("<<<< end testcase%d: %s, result: %s\n", testnum, testcases[testnum - 1].name, ret != 0 ? "failed" : "passed");
+ pr_info("----------------------------");
+ printf("%s testcase%d finished, %s", extract_filename(filename, __FILE__), testnum, ret == 0 ? "passed." : "failed.");
+ printf("-------------------------\n");
+ }
+
+	close_device(dev_fd);
+
+	return failed == 0 ? 0 : -1;
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/Makefile b/tools/testing/sharepool/testcase/reliability_test/Makefile
new file mode 100644
index 000000000000..aaee60d8872b
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/Makefile
@@ -0,0 +1,11 @@
+MODULEDIR:=coredump fragment k2u_u2k sp_add_group sp_unshare kthread others
+
+all:tooldir
+
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n; done
+install:
+ mkdir -p $(TOOL_BIN_DIR)/reliability_test && cp reliability_test.sh $(TOOL_BIN_DIR)
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
diff --git a/tools/testing/sharepool/testcase/reliability_test/coredump/Makefile b/tools/testing/sharepool/testcase/reliability_test/coredump/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/coredump/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump.c b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump.c
new file mode 100644
index 000000000000..bc488692a08a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump.c
@@ -0,0 +1,581 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 06:59:45 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdlib.h> /* rand() and srand() */
+#include <time.h> /* time() */
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define ALLOC_TEST_TYPES 4
+#define GROUP_ID 1
+#define THREAD_NUM 3
+#define KILL_TIME 1000
+static int semid;
+
+void *alloc_thread(void *arg)
+{
+ int ret;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TEST_TYPES] = {
+ {
+		// huge pages
+ .flag = SP_HUGEPAGE,
+ .spg_id = GROUP_ID,
+ .size = 2 * PMD_SIZE,
+ },
+ {
+		// huge pages, DVPP
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = GROUP_ID,
+ .size = 2 * PMD_SIZE,
+ },
+ {
+		// normal pages, DVPP
+ .flag = SP_DVPP,
+ .spg_id = GROUP_ID,
+ .size = 4 * PAGE_SIZE,
+ },
+ {
+		// normal pages
+ .flag = 0,
+ .spg_id = GROUP_ID,
+ .size = 4 * PAGE_SIZE,
+ },
+	};
+ while (1) {
+ pr_info("%s run time %d", __FUNCTION__, sem_get_value(semid));
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+				return (void *)-1;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+					return (void *)-1;
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+				return (void *)-1;
+ }
+ }
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+				return (void *)-1;
+ }
+ }
+ sem_inc_by_one(semid);
+ if (ret)
+ break;
+ }
+	return (void *)(long)ret;
+}
+
+struct vmalloc_info vmalloc_infos[THREAD_NUM] = {0}, vmalloc_huge_infos[THREAD_NUM] = {0};
+sig_atomic_t thread_index = 0;
+void *k2u_thread(void *arg)
+{
+ int ret;
+
+ struct sp_make_share_info k2u_info = {0}, k2u_huge_info = {0};
+ int vm_index = thread_index++;
+ while (1) {
+ pr_info("k2u_thread run time %d", sem_get_value(semid));
+ pr_info("atomic index is %d, thread index is %d", vm_index, thread_index);
+
+		int group_id = (int)(long)arg;
+ int pid = getpid();
+ k2u_info.kva = vmalloc_infos[vm_index].addr;
+ k2u_info.size = vmalloc_infos[vm_index].size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_infos[vm_index].addr;
+ k2u_huge_info.size = vmalloc_huge_infos[vm_index].size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ }
+ }
+
+ /* check k2u memory content */
+ char *addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ sem_inc_by_one(semid);
+ }
+
+	return (void *)(long)ret;
+error:
+	return (void *)-1;
+}
+
+void *k2task_thread(void *arg)
+{
+	return k2u_thread((void *)(long)SPG_ID_DEFAULT);
+}
+
+void *addgroup_thread(void *arg)
+{
+ int ret = 0;
+ while (1) {
+ pr_info("add_group_thread run time %d", sem_get_value(semid));
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ int num = KILL_TIME;
+ int spg_id[KILL_TIME];
+ ret = wrap_sp_group_id_by_pid(getpid(), spg_id, &num);
+ pr_info("add to %d groups", num);
+ sem_inc_by_one(semid);
+ }
+	return (void *)(long)ret;
+}
+
+void *u2k_thread(void *arg)
+{
+ int ret = 0;
+ bool judge_ret = true;
+ char *addr;
+	int group_id = (int)(long)arg;
+ int pid = getpid();
+ struct sp_alloc_info alloc_info[ALLOC_TEST_TYPES] = {
+ {
+		// huge pages
+ .flag = SP_HUGEPAGE,
+ .spg_id = GROUP_ID,
+ .size = 2 * PMD_SIZE,
+ },
+ {
+		// huge pages, DVPP
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = GROUP_ID,
+ .size = 2 * PMD_SIZE,
+ },
+ {
+		// normal pages, DVPP
+ .flag = SP_DVPP,
+ .spg_id = GROUP_ID,
+ .size = 4 * PAGE_SIZE,
+ },
+ {
+		// normal pages
+ .flag = 0,
+ .spg_id = GROUP_ID,
+ .size = 4 * PAGE_SIZE,
+ },
+	};
+ struct sp_make_share_info u2k_info[ALLOC_TEST_TYPES] = {0};
+ while (1) {
+ pr_info("u2k_thread run time %d", sem_get_value(semid));
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ }
+ }
+ }
+
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ sem_inc_by_one(semid);
+ }
+error:
+ //close_device(dev_fd);
+	return (void *)-1;
+}
+
+void *walkpagerange_thread(void *arg)
+{
+ int ret = 0;
+ // alloc
+ struct sp_alloc_info alloc_info[ALLOC_TEST_TYPES] = {
+ {
+		// huge pages
+ .flag = SP_HUGEPAGE,
+ .spg_id = GROUP_ID,
+ .size = 2 * PMD_SIZE,
+ },
+ {
+		// huge pages, DVPP
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = GROUP_ID,
+ .size = 2 * PMD_SIZE,
+ },
+ {
+		// normal pages, DVPP
+ .flag = SP_DVPP,
+ .spg_id = GROUP_ID,
+ .size = 4 * PAGE_SIZE,
+ },
+ {
+		// normal pages
+ .flag = 0,
+ .spg_id = GROUP_ID,
+ .size = 4 * PAGE_SIZE,
+ },
+	};
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+			return (void *)-1;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+				return (void *)-1;
+ }
+ }
+ }
+
+ struct sp_walk_page_range_info wpr_info[ALLOC_TEST_TYPES] = {0};
+ while (1) {
+ pr_info("%s run time %d", __FUNCTION__, sem_get_value(semid));
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ wpr_info[i].uva = alloc_info[i].addr;
+ wpr_info[i].size = alloc_info[i].size;
+ ret = ioctl_walk_page_range(dev_fd, wpr_info + i);
+ if (ret < 0) {
+ pr_info("ioctl_walk_page_range failed, errno: %d", errno);
+				return (void *)-1;
+ }
+ }
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ ret = ioctl_walk_page_free(dev_fd, wpr_info + i);
+ if (ret < 0) {
+ pr_info("ioctl_walk_page_range_free failed, errno: %d", errno);
+				return (void *)-1;
+ }
+ }
+ sem_inc_by_one(semid);
+ }
+
+ // free
+ for (int i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+			return (void *)-1;
+ }
+ }
+
+ return 0;
+}
+
+int vmallocAll()
+{
+ int ret;
+ for (int i = 0; i < THREAD_NUM; i++) {
+ vmalloc_infos[i].size = 3 * PAGE_SIZE;
+ ret = ioctl_vmalloc(dev_fd, vmalloc_infos + i);
+		if (ret < 0) {
+ pr_info("vmalloc failed");
+ } else {
+ pr_info("vmalloc success");
+ }
+ vmalloc_huge_infos[i].size = 3 * PMD_SIZE;
+ ret = ioctl_vmalloc_hugepage(dev_fd, vmalloc_huge_infos + i);
+		if (ret < 0) {
+ pr_info("vmalloc hugepage failed");
+ } else {
+ pr_info("vmalloc hugepage success");
+ }
+ }
+ return ret;
+}
+
+int vfreeAll()
+{
+ pr_info("now inside %s, thread index is %d", __FUNCTION__, thread_index);
+
+ int ret;
+
+ for (int i = 0; i < THREAD_NUM; i++) {
+ ret = ioctl_vfree(dev_fd, vmalloc_infos + i);
+ if (ret != 0) {
+ pr_info("vfree failed, errno is %d", errno);
+ } else {
+ pr_info("vfree success");
+ }
+ ret = ioctl_vfree(dev_fd, vmalloc_huge_infos + i);
+ if (ret != 0) {
+ pr_info("vfree failed, errno is %d", errno);
+ } else {
+ pr_info("vfree hugepage success");
+ }
+ }
+ thread_index = 0;
+ return 0;
+}
+
+int startThreads(void *(thread)(void *))
+{
+ int ret = 0;
+ semid = sem_create(1234, "core_dump_after_xxx_time_count");
+ setCore();
+
+ // add group
+ int group_id = GROUP_ID;
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0) {
+ printf("add task(pid%d) to group(%d) failed, err: %s\n", getpid(), group_id, strerror(errno));
+ return -1;
+ } else
+ printf("add task(pid%d) to group(%d) success\n", getpid(), group_id);
+
+	// repeat the workload in THREAD_NUM threads
+ pthread_t threads[THREAD_NUM];
+ thread_index = 0;
+ for (int i = 0; i < THREAD_NUM; i++) {
+		pthread_create(threads + i, NULL, thread, (void *)(long)group_id);
+ }
+
+ sem_dec_by_val(semid, KILL_TIME);
+
+ sem_close(semid);
+ if (thread_index > THREAD_NUM) {
+ pr_info("failure, thread index: %d not correct!!", thread_index);
+ return -1;
+ }
+ ret = generateCoredump();
+
+ sleep(3);
+
+ return ret;
+}
+
+/* testcase1: coredump after alloc */
+static int testcase1(void)
+{
+ int status;
+ int pid = fork();
+ if (pid == 0) {
+ exit(startThreads(alloc_thread));
+	} else if (pid > 0) {
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ // expected status is 139 = 128 + 11 (SIGSEGV)
+ pr_info("coredump as expected, return value is %d", status);
+ }
+ }
+ return 0;
+}
+
+/* testcase2: coredump during k2spg */
+static int testcase2(void)
+{
+ int pid;
+ vmallocAll();
+ FORK_CHILD_ARGS(pid, startThreads(k2u_thread));
+	int status;
+	waitpid(pid, &status, 0);
+	vfreeAll();
+	return 0;
+}
+
+/* testcase3: coredump during u2k */
+static int testcase3(void)
+{
+ int status;
+ int pid = fork();
+ if (pid == 0) {
+ exit(startThreads(u2k_thread));
+	} else if (pid > 0) {
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ // expected status is 139 = 128 + 11 (SIGSEGV)
+ pr_info("coredump as expected, return value is %d", status);
+ }
+ }
+ return 0;
+}
+
+/* testcase4: coredump during add group and query */
+static int testcase4(void)
+{
+ int status;
+ int pid = fork();
+ if (pid == 0) {
+ exit(startThreads(addgroup_thread));
+	} else if (pid > 0) {
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ // expected status is 139 = 128 + 11 (SIGSEGV)
+ pr_info("coredump as expected, return value is %d", status);
+ }
+ }
+ return 0;
+}
+
+/* testcase5: coredump during k2task */
+static int testcase5(void)
+{
+ int pid;
+ vmallocAll();
+ FORK_CHILD_ARGS(pid, startThreads(k2task_thread));
+	int status;
+	waitpid(pid, &status, 0);
+	vfreeAll();
+	return 0;
+}
+
+/* testcase6: coredump during walkpagerange - leaks kernel memory */
+static int testcase6(void)
+{
+ int status;
+ int pid = fork();
+ if (pid == 0) {
+ exit(startThreads(walkpagerange_thread));
+	} else if (pid > 0) {
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ // expected status is 139 = 128 + 11 (SIGSEGV)
+ pr_info("coredump as expected, return value is %d", status);
+ }
+ }
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "coredump after alloc")
+	TESTCASE_CHILD(testcase2, "coredump during k2spg")
+	TESTCASE_CHILD(testcase3, "coredump during u2k")
+	TESTCASE_CHILD(testcase4, "coredump during add group and query")
+	TESTCASE_CHILD(testcase5, "coredump during k2task")
+	TESTCASE_CHILD(testcase6, "coredump during walkpagerange - leaks kernel memory")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump2.c b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump2.c
new file mode 100644
index 000000000000..9ff36b1e68e4
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump2.c
@@ -0,0 +1,202 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri May 21 07:23:31 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+#include <time.h> /* time() */
+#include <sys/resource.h>
+
+#include "sharepool_lib.h"
+
+
+static int init_env(void)
+{
+ struct rlimit core_lim;
+
+ if (getrlimit(RLIMIT_CORE, &core_lim)) {
+ printf("getrlimit failed, err: %s\n", strerror(errno));
+ return -1;
+ } else
+ printf("current rlimit for RLIMIT_CORE is: %lx, %lx\n", core_lim.rlim_cur, core_lim.rlim_max);
+
+ core_lim.rlim_cur = RLIM_INFINITY;
+ if (setrlimit(RLIMIT_CORE, &core_lim)) {
+ printf("setrlimit failed, err: %s\n", strerror(errno));
+ return -1;
+ } else
+ printf("setrlimit for RLIMIT_CORE to unlimited\n");
+
+ return 0;
+}
+
+/* do nothing */
+static int child_do_nothing(sem_t *sync)
+{
+ int ret;
+
+ SEM_WAIT(sync);
+
+ pr_info("child pid: %d", getpid());
+
+ while (1);
+
+ return 0;
+}
+
+static int child_sp_alloc(sem_t *sync)
+{
+ int ret;
+
+ SEM_WAIT(sync);
+
+ pr_info("child pid: %d wake up.", getpid());
+out:
+ while (1) {
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = SPG_ID_DEFAULT,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ TEST_CHECK(ioctl_free(dev_fd, &alloc_info), out);
+ }
+
+ return 0;
+}
+
+static int child_sp_alloc_and_free(sem_t *sync)
+{
+ int ret;
+
+ SEM_WAIT(sync);
+
+ pr_info("child pid: %d", getpid());
+out:
+ while (1) {
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 1024,
+ .spg_id = SPG_ID_DEFAULT,
+ };
+ TEST_CHECK(ioctl_alloc(dev_fd, &alloc_info), out);
+ TEST_CHECK(ioctl_free(dev_fd, &alloc_info), out);
+ }
+
+ return 0;
+}
+
+#define test_child_num 20
+#define test_group_num 10
+
+static int fork_or_coredump(int (*child)(sem_t *))
+{
+ int ret, i, j;
+ pid_t pid[test_child_num];
+ int groups[test_group_num];
+ sem_t *sync[test_child_num];
+ int repeat = 100;
+
+ for (i = 0; i < test_child_num; i++)
+ SEM_INIT(sync[i], i);
+
+ for (i = 0; i < test_child_num; i++)
+ FORK_CHILD_ARGS(pid[i], child(sync[i]));
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid[0],
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = SPG_ID_AUTO,
+ };
+
+ for (i = 0; i < test_group_num; i++) {
+ ag_info.spg_id = SPG_ID_AUTO;
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ groups[i] = ag_info.spg_id;
+ }
+
+ for (i = 1; i < test_child_num; i++) {
+ ag_info.pid = pid[i];
+ for (j = 0; j < test_group_num; j++) {
+ ag_info.spg_id = groups[j];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ }
+ }
+
+ for (i = 0; i < test_child_num; i++)
+ sem_post(sync[i]);
+
+ int alive_process = test_child_num;
+ srand((unsigned)time(NULL));
+
+ for (i = 0; i < repeat; i++) {
+ pr_info("kill time %dth, %d times left.", i + 1, repeat - (i + 1));
+ int idx = rand() % test_child_num;
+		/* Do not kill every process: if they all exit, the group is being destroyed and subsequent add-group calls would fail */
+ if (pid[idx] && alive_process > 1) {
+ kill(pid[idx], SIGSEGV);
+			waitpid(pid[idx], NULL, 0);
+ pid[idx] = 0;
+ alive_process--;
+ } else {
+ FORK_CHILD_ARGS(pid[idx], child(sync[idx]));
+ ag_info.pid = pid[idx];
+ for (j = 0; j < test_group_num; j++) {
+ ag_info.spg_id = groups[j];
+ TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out_kill);
+ }
+ sem_post(sync[idx]);
+ alive_process++;
+ }
+ }
+
+ return 0;
+
+out_kill:
+ for (i = 0; i < test_child_num; i++)
+ kill(pid[i], SIGKILL);
+out:
+ return ret;
+}
+
+static int testcase1(void)
+{
+ setCore();
+ return fork_or_coredump(child_do_nothing);
+}
+
+static int testcase2(void)
+{
+ setCore();
+ return fork_or_coredump(child_sp_alloc);
+}
+
+static int testcase3(void)
+{
+ init_env();
+ return fork_or_coredump(child_sp_alloc_and_free);
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "N processes join groups then do nothing, coredump")
+	TESTCASE(testcase2, "N processes join groups then alloc, then coredump")
+	TESTCASE(testcase3, "N processes join groups then alloc-free, then coredump")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump_k2u_alloc.c b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump_k2u_alloc.c
new file mode 100644
index 000000000000..782615493a85
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/coredump/test_coredump_k2u_alloc.c
@@ -0,0 +1,562 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 06:59:45 2020
+ */
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdlib.h> /* rand() and srand() */
+#include <time.h> /* time() */
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define PROC_NUM 64
+#define GROUP_ID 1
+#define K2U_UNSHARE_TIME 2
+#define ALLOC_FREE_TIME 2
+#define VMALLOC_SIZE 4096
+#define PROT (PROT_READ | PROT_WRITE)
+#define GROUP_NUM 4
+#define K2U_CONTINUOUS_TIME 200
+#define min(a,b) ((a)<(b)?(a):(b))
+
+/* testcase1:
+ * Each group starts one process responsible for k2u; the other N processes
+ * join multiple groups and then coredump in turn. Every k2u call in every
+ * group should succeed.
+ * Diagnostic info is printed; k2u keeps working.
+ * After all processes have coredumped the test exits; diagnostics should show
+ * zero groups and zero spa, i.e. no leaks.
+ */
+
+static int semid[PROC_NUM];
+static int sem_task;
+static int group_ids[GROUP_NUM];
+
+struct k2u_args {
+ int with_print;
+ int k2u_whole_times; // repeat times
+ int (*k2u_tsk)(struct k2u_args);
+};
+
+struct task_param {
+ bool with_print;
+};
+
+struct test_setting {
+ int (*task)(struct task_param*);
+ struct task_param *task_param;
+};
+
+static int init_sem();
+static int close_sem();
+static int k2u_unshare_task(struct task_param *task_param);
+static int k2u_continuous_task(struct task_param *task_param);
+static int child_process(int index);
+static int alloc_free_task(struct task_param *task_param);
+static int alloc_continuous_task(struct task_param *task_param);
+static int testcase_combine(int (*task1)(struct task_param*),
+ int (*task2)(struct task_param*), struct task_param *param);
+
+static int testcase_base(struct test_setting test_setting)
+{
+ int status;
+ int pid;
+ int child[PROC_NUM];
+ int ret;
+ int pid_k2u;
+
+ setCore();
+	// initialize semaphores
+ ret = init_sem();
+ if (ret < 0) {
+ pr_info("init sem failed");
+ return -1;
+ }
+
+	// create the groups
+ //ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ ret = create_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ return -1;
+ }
+
+	// start the worker process responsible for k2u or alloc
+ pid_k2u = fork();
+ if (pid_k2u < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_k2u == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+			pr_info("worker process added to groups successfully.");
+ }
+ exit(test_setting.task(test_setting.task_param));
+ }
+
+	// start the child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, deleting procs...");
+ goto delete_procs;
+ } else if (pid == 0) {
+			// child process: hang until told to coredump
+ exit(child_process(i));
+ } else {
+ child[i] = pid;
+ //ret = wrap_add_group(pid, PROT, GROUP_ID);
+ ret = add_multi_groups(pid, GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", pid);
+ goto delete_procs;
+ }
+ }
+ }
+
+	// make each child coredump in turn
+ for (int i = 0; i < PROC_NUM; i++) {
+ pr_info("coredump process %d", child[i]);
+ sem_inc_by_one(semid[i]);
+ waitpid(child[i], &status, 0);
+ usleep(200000);
+ }
+
+	// let the worker process exit
+ sem_inc_by_one(sem_task);
+ waitpid(pid_k2u, &status, 0);
+
+ close_sem();
+ return 0;
+
+delete_procs:
+ return -1;
+}
+
+static int testcase_combine(int (*task1)(struct task_param*),
+ int (*task2)(struct task_param*), struct task_param *param)
+{
+ int status;
+ int pid;
+ int child[PROC_NUM];
+ int ret;
+ int pid_k2u, pid_alloc;
+
+ setCore();
+	// initialize semaphores
+ ret = init_sem();
+ if (ret < 0) {
+ pr_info("init sem failed");
+ return -1;
+ }
+
+	// create the groups
+ //ret = wrap_add_group(getpid(), PROT, GROUP_ID);
+ ret = create_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ return -1;
+ }
+
+	// start the worker processes responsible for k2u or alloc
+ pid_k2u = fork();
+ if (pid_k2u < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_k2u == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+			pr_info("worker process added to groups successfully.");
+ }
+ exit(task1(param));
+ }
+
+ pid_alloc = fork();
+ if (pid_alloc < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid_alloc == 0) {
+ ret = add_multi_groups(getpid(), GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", getpid());
+ exit(-1);
+ } else {
+			pr_info("worker process added to groups successfully.");
+ }
+ exit(task2(param));
+ }
+
+	// start the child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, deleting procs...");
+ goto delete_procs;
+ } else if (pid == 0) {
+			// child process: hang until told to coredump
+ exit(child_process(i));
+ } else {
+ child[i] = pid;
+ //ret = wrap_add_group(pid, PROT, GROUP_ID);
+ ret = add_multi_groups(pid, GROUP_NUM, group_ids);
+ if (ret < 0) {
+ pr_info("add group failed %d", pid);
+ goto delete_procs;
+ }
+ }
+ }
+
+	// make each child coredump in turn
+ for (int i = 0; i < PROC_NUM; i++) {
+ pr_info("coredump process %d", child[i]);
+ sem_inc_by_one(semid[i]);
+ waitpid(child[i], &status, 0);
+ usleep(200000);
+ }
+
+	// let the worker processes exit
+ sem_inc_by_val(sem_task, 2);
+ waitpid(pid_k2u, &status, 0);
+ waitpid(pid_alloc, &status, 0);
+
+ close_sem();
+ return 0;
+
+delete_procs:
+ return -1;
+}
+
+static struct task_param task_param_table[] = {
+ {
+		.with_print = false, // no diagnostic printing
+ },
+ {
+		.with_print = true, // with diagnostic printing
+ },
+};
+
+static struct test_setting test_setting_table[] = {
+ {
+ .task_param = &task_param_table[0],
+		.task = k2u_unshare_task, // k2u->unshare, repeated N times
+ },
+ {
+ .task_param = &task_param_table[1],
+		.task = k2u_unshare_task, // k2u->unshare, repeated N times
+ },
+ {
+ .task_param = &task_param_table[0],
+		.task = k2u_continuous_task, // k2u N times, then unshare N times
+ },
+ {
+ .task_param = &task_param_table[0],
+		.task = alloc_free_task, // alloc->free, repeated N times
+ },
+ {
+ .task_param = &task_param_table[0],
+		.task = alloc_continuous_task, // alloc N blocks, then free them all, repeated M times
+ },
+};
+
+static int testcase1(void)
+{
+ return testcase_base(test_setting_table[0]);
+}
+
+static int testcase2(void)
+{
+ return testcase_base(test_setting_table[1]);
+}
+
+static int testcase3(void)
+{
+ return testcase_base(test_setting_table[2]);
+}
+
+static int testcase4(void)
+{
+ return testcase_base(test_setting_table[3]);
+}
+
+static int testcase5(void)
+{
+ return testcase_base(test_setting_table[4]);
+}
+
+static int testcase6(void)
+{
+ return testcase_combine(k2u_continuous_task, alloc_continuous_task, &task_param_table[0]);
+}
+
+/* testcase4: the k2u worker process coredumps */
+static int close_sem()
+{
+ int ret;
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ ret = sem_close(semid[i]);
+ if (ret < 0) {
+ pr_info("sem close failed");
+ return ret;
+ }
+ }
+ sem_close(sem_task);
+ pr_info("all sems deleted.");
+ return 0;
+}
+
+static int init_sem()
+{
+ int i = 0;
+
+ sem_task = sem_create(PROC_NUM, "sem_task");
+
+ for (i = 0; i < PROC_NUM; i++) {
+ key_t key = i;
+ semid[i] = sem_create(key, "sem_child");
+ if (semid[i] < 0) {
+ pr_info("semid %d init failed. errno: %d", i, errno);
+ goto delete_sems;
+ }
+ }
+ pr_info("all sems initialized.");
+ return 0;
+
+delete_sems:
+ for (int j = 0; j < i; j++) {
+ sem_close(semid[j]);
+ }
+ return -1;
+}
+
+static int child_process(int index)
+{
+ pr_info("child process %d created", getpid());
+	// coredump once the go signal is received
+ sem_dec_by_one(semid[index]);
+ pr_info("child process %d coredump", getpid());
+ generateCoredump();
+ return 0;
+}
+
+static int k2u_unshare_task(struct task_param *task_param)
+{
+ int ret;
+ int i;
+ struct vmalloc_info vmalloc_info;
+ struct sp_make_share_info k2u_info;
+ unsigned long uva[K2U_UNSHARE_TIME];
+
+ vmalloc_info.size = VMALLOC_SIZE;
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed.");
+ return -1;
+ } else {
+ pr_info("vmalloc %ld success.", vmalloc_info.size);
+ }
+
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = getpid();
+ k2u_info.spg_id = GROUP_ID;
+
+repeat:
+ memset(uva, 0, sizeof(unsigned long) * K2U_UNSHARE_TIME);
+ for (i = 0; i < K2U_UNSHARE_TIME; i++) {
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed at %d time.", i);
+ goto unshare;
+ } else {
+ pr_info("k2u success %d time, addr = %lx", i, k2u_info.addr);
+ uva[i] = k2u_info.addr;
+ }
+ }
+
+ if (task_param->with_print)
+ sharepool_print();
+
+unshare:
+ for (int j = 0; j < i; j++) {
+		pr_info("uva[%d] is %lx", j, uva[j]);
+ k2u_info.addr = uva[j];
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed at %d", j);
+ return -1;
+ }
+ }
+
+ if (sem_get_value(sem_task) == 0)
+ goto repeat;
+
+ ioctl_vfree(dev_fd, &vmalloc_info);
+
+ return 0;
+}
+
+static int k2u_continuous_task(struct task_param *task_param)
+{
+ int ret;
+ int i, h;
+ struct vmalloc_info vmalloc_info;
+ struct sp_make_share_info k2u_info;
+ unsigned long *uva;
+
+ vmalloc_info.size = VMALLOC_SIZE;
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed.");
+ return -1;
+ } else {
+ pr_info("vmalloc %ld success.", vmalloc_info.size);
+ }
+
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = getpid();
+ //k2u_info.spg_id = GROUP_ID;
+
+	uva = malloc(sizeof(unsigned long) * K2U_CONTINUOUS_TIME * GROUP_NUM);
+	if (uva == NULL) {
+		pr_info("malloc failed");
+		ioctl_vfree(dev_fd, &vmalloc_info);
+		return -1;
+	}
+
+ memset(uva, 0, sizeof(unsigned long) * K2U_CONTINUOUS_TIME * GROUP_NUM);
+ for (i = 0; i < K2U_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ k2u_info.spg_id = group_ids[h];
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("k2u failed at %d time in group %d.", i, group_ids[h]);
+ goto unshare;
+ } else {
+ pr_info("k2u success %d time, addr = %lx", i, k2u_info.addr);
+ uva[i * GROUP_NUM + h] = k2u_info.addr;
+ }
+ }
+ }
+
+unshare:
+ for (int j = 0; j < min((i * GROUP_NUM + h), K2U_CONTINUOUS_TIME * GROUP_NUM); j++) {
+		pr_info("uva[%d] is %lx", j, uva[j]);
+ k2u_info.addr = uva[j];
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed at %d", j);
+ return -1;
+ }
+ }
+
+	free(uva);
+	ioctl_vfree(dev_fd, &vmalloc_info);
+
+	return 0;
+}
+
+/* already joined the groups; alloc and free continuously */
+#define ALLOC_SIZE 4096
+#define ALLOC_FLAG 0
+static int alloc_free_task(struct task_param *task_param)
+{
+ int ret = 0;
+ int i, h;
+ unsigned long ret_addr = -1;
+ unsigned long addr[ALLOC_FREE_TIME][GROUP_NUM];
+
+repeat:
+ for (i = 0; i < ALLOC_FREE_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret_addr = wrap_sp_alloc(group_ids[h], ALLOC_SIZE, ALLOC_FLAG);
+ if (!ret_addr) {
+				pr_info("alloc failed at round %d, group %d", i, h + 1);
+				return -1;
+ } else {
+ addr[i][h] = ret_addr;
+				pr_info("alloc success, addr %lx", ret_addr);
+ }
+ }
+ }
+
+ if (task_param->with_print)
+ sharepool_print();
+
+	/* free phase */
+ for (i = 0; i < ALLOC_FREE_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret = wrap_sp_free(addr[i][h]);
+ if (ret < 0) {
+ pr_info("free failed %d time group %d", i, h + 1);
+ return ret;
+ }
+ }
+ }
+
+ if (sem_get_value(sem_task) == 0)
+ goto repeat;
+
+ return ret;
+}
+
+#define ALLOC_CONTINUOUS_TIME 200
+static int alloc_continuous_task(struct task_param *task_param)
+{
+ int ret = 0;
+ int i, h;
+ unsigned long ret_addr = -1;
+ unsigned long addr[ALLOC_CONTINUOUS_TIME][GROUP_NUM];
+ for (i = 0; i < ALLOC_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret_addr = wrap_sp_alloc(group_ids[h], ALLOC_SIZE, ALLOC_FLAG);
+ if (!ret_addr) {
+				pr_info("alloc failed at round %d, group %d", i, h + 1);
+				return -1;
+ } else {
+ addr[i][h] = ret_addr;
+				pr_info("alloc success, addr %lx", ret_addr);
+ }
+ }
+ }
+ for (i = 0; i < ALLOC_CONTINUOUS_TIME; i++) {
+ for (h = 0; h < GROUP_NUM; h++) {
+ ret = wrap_sp_free(addr[i][h]);
+ if (ret < 0) {
+ pr_info("free failed %d time group %d", i, h + 1);
+ return ret;
+ }
+ }
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "each group starts one k2u worker; the other N processes join multiple groups and coredump in turn; every k2u should succeed")
+	TESTCASE_CHILD(testcase2, "same as testcase1, but with diagnostic printing")
+	TESTCASE_CHILD(testcase3, "continuous k2u while coredumping")
+	TESTCASE_CHILD(testcase4, "coredump during an alloc-free loop")
+	TESTCASE_CHILD(testcase5, "alloc-coredump, free-coredump")
+	TESTCASE_CHILD(testcase6, "continuous k2u and alloc, no diagnostics")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/fragment/Makefile b/tools/testing/sharepool/testcase/reliability_test/fragment/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/fragment/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation.c b/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation.c
new file mode 100644
index 000000000000..1e1d88705a00
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation.c
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Description: test external fragmentation
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Apr 20 22:23:51 2021
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+
+/*
+ * It is recommended to run this test twice in parallel and then kill one of
+ * the instances so that external fragmentation is created.
+ */
+
+int main(int argc, char *argv[]) {
+ char *p;
+ int i, times;
+ pid_t pid = getpid();
+
+	if (argc < 2) {
+		printf("usage: %s <number-of-4K-pages>\n", argv[0]);
+		return -1;
+	}
+	times = atoi(argv[1]);
+ printf("Fragmentation test pid %d will allocate %d 4K pages\n", pid, times);
+
+ p = sbrk(0);
+
+ for (i = 0; i < times; i++) {
+ sbrk(4096);
+ memset(p + i * 4096, 'a', 4096);
+ }
+
+ printf("Test %d allocation finished. begin sleep.\n", pid);
+ sleep(1200);
+ return 0;
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation_trigger.c b/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation_trigger.c
new file mode 100644
index 000000000000..310c7c240177
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/fragment/test_external_fragmentation_trigger.c
@@ -0,0 +1,58 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Description: trigger
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Apr 20 23:17:22 2021
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <sys/types.h>
+
+#include "sharepool_lib.h"
+
+#define GROUP_ID 1
+
+int main(void) {
+
+ int i, fd, ret;
+ fd = open_device();
+ if (fd < 0) {
+ printf("open fd error\n");
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+ ret = ioctl_add_group(fd, &ag_info);
+ if (ret < 0) {
+ printf("add group failed, ret is %d, error is %d\n", ret, errno);
+ goto error;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE,
+ .spg_id = GROUP_ID,
+ .size = PMD_SIZE,
+ };
+
+	// allocate 400MB to try to trigger memory compaction
+ for (i = 0; i < 200; i++) {
+ ret = ioctl_alloc(fd, &alloc_info);
+ if (ret) {
+ printf("alloc failed, ret is %d, error is %d\n", ret, errno);
+ goto error;
+ }
+ }
+
+ close_device(fd);
+ return 0;
+error:
+ close_device(fd);
+ return -1;
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/Makefile b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_and_kill.c b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_and_kill.c
new file mode 100644
index 000000000000..e0b7287faba0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_and_kill.c
@@ -0,0 +1,276 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Thu Dec 17 03:09:02 2020
+ */
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+#define CHILD_NUM 50
+#define THREAD_PER_PROCESS 20
+#define KILL_NUM 50
+#define VMALLOC_SIZE (3 * PAGE_SIZE)
+
+static int vmalloc_count;
+static int vfree_count;
+static struct vmalloc_info vm_data[CHILD_NUM];
+
+struct msgbuf {
+ long mtype;
+ struct vmalloc_info vmalloc_info;
+};
+
+static void send_msg(int msgid, int msgtype, struct vmalloc_info ka_info)
+{
+ struct msgbuf msg = {
+ .mtype = msgtype,
+ .vmalloc_info = ka_info,
+ };
+
+	if (msgsnd(msgid, (void *) &msg, sizeof(msg.vmalloc_info),
+ IPC_NOWAIT) == -1) {
+ perror("msgsnd error");
+ exit(EXIT_FAILURE);
+ } else {
+		pr_info("child %d message sent success: size: %ld, addr: %lx",
+ msgtype - 1, ka_info.size, ka_info.addr);
+ }
+}
+
+static void get_msg(int msgid, int msgtype)
+{
+ struct msgbuf msg;
+ if (msgrcv(msgid, (void *) &msg, sizeof(msg.vmalloc_info), msgtype,
+ MSG_NOERROR) == -1) {
+ if (errno != ENOMSG) {
+ perror("msgrcv");
+ exit(EXIT_FAILURE);
+ }
+ pr_info("No message available for msgrcv()");
+ } else {
+		pr_info("child %d message received success: size: %ld, addr: %lx",
+ msgtype - 1, msg.vmalloc_info.size, msg.vmalloc_info.addr);
+ vm_data[msgtype - 1] = msg.vmalloc_info;
+ vmalloc_count++;
+ }
+}
+
+static void *child_thread(void *arg)
+{
+ int ret;
+ struct sp_make_share_info k2u_info = *(struct sp_make_share_info *)arg;
+ while (1) {
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+			pr_info("ioctl_k2u failed, errno: %d", errno);
+			return (void *)(long)ret;
+ } else {
+ //pr_info("ioctl_k2u success");
+ }
+
+ memset((void *)k2u_info.addr, 'a', k2u_info.size);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+			return (void *)(long)ret;
+ }
+ }
+
+ return NULL;
+}
+
+static int child_process(int idx, int msgid)
+{
+ int ret;
+
+ struct vmalloc_info ka_info = {
+ .size = VMALLOC_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+
+ if (ret < 0) {
+ pr_info("child%d: ioctl_vmalloc failed", idx);
+ return -1;
+ } else {
+ send_msg(msgid, idx + 1, ka_info);
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .pid = getpid(),
+ };
+
+ pthread_t threads[THREAD_PER_PROCESS] = {0};
+ for (int i = 0; i < THREAD_PER_PROCESS; i++) {
+ ret = pthread_create(threads + i, NULL, child_thread, &k2u_info);
+ if (ret < 0) {
+ pr_info("child%d: pthread_create failed, err:%d", idx, ret);
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < THREAD_PER_PROCESS; i++)
+ if (threads[i])
+ pthread_join(threads[i], NULL);
+
+ return 0;
+}
+
+static pid_t fork_and_add_group(int idx, int group_id, int (*child)(int, int), int msgid, char ch)
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx, msgid));
+ }
+
+ if (group_id == SPG_ID_DEFAULT)
+ return pid;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+ } else {
+ return pid;
+ }
+}
+
+/*
+ * group_id == SPG_ID_DEFAULT: processes join no group
+ * group_id == SPG_ID_DEFAULT: each process joins a different group
+ * otherwise: all processes join the specified group
+ */
+static int testcase_routing(int group_id, char ch)
+{
+ int ret = 0;
+ pid_t child[CHILD_NUM] = {0};
+
+ int msgid;
+ int msgkey = 1234;
+	msgid = msgget(msgkey, IPC_CREAT | 0666);
+	if (msgid < 0) {
+		pr_info("msgget failed, errno: %d", errno);
+		return -1;
+	}
+	pr_info("msg id is %d", msgid);
+
+ for (int i = 0; i < CHILD_NUM; i++) {
+ pid_t pid = fork_and_add_group(i, group_id == SPG_ID_DEFAULT ? i + 1 : group_id, child_process, msgid, ch);
+ if (pid < 0) {
+ ret = -1;
+ goto out;
+ }
+ child[i] = pid;
+ get_msg(msgid, i + 1);
+ }
+
+	unsigned int seed = time(NULL);
+	srand(seed);
+	pr_info("rand seed: %u", seed);
+
+ int count = 0;
+ for (int i = 0; i < KILL_NUM; i++) {
+ int idx = rand() % CHILD_NUM;
+ if (child[idx] > 0) {
+ kill(child[idx], SIGKILL);
+ waitpid(child[idx], NULL, 0);
+ //pr_info("vfree address is %lx", vm_data[idx].addr);
+ //vm_data[idx].size = VMALLOC_SIZE;
+ if (ioctl_vfree(dev_fd, &vm_data[idx]) < 0) {
+ pr_info("vfree %d failed", idx);
+ } else {
+ vfree_count++;
+ pr_info("vfree %d finished.", idx);
+ }
+ pr_info("count: %d, kill child: %d, pid: %d", ++count, idx, child[idx]);
+ child[idx] = 0;
+ } else {
+ pid_t pid = fork_and_add_group(idx, group_id == SPG_ID_DEFAULT ? idx + 1 : group_id,
+ child_process, msgid, ch);
+ if (pid < 0) {
+ ret = -1;
+ goto out;
+ }
+ child[idx] = pid;
+ pr_info("fork child: %d, pid: %d", idx, child[idx]);
+ get_msg(msgid, idx + 1);
+ }
+// sleep(1);
+ }
+
+out:
+ for (int i = 0; i < CHILD_NUM; i++)
+ if (child[i] > 0) {
+ kill(child[i], SIGKILL);
+ //pr_info("vfree2 address is %lx", vm_data[i].addr);
+ //vm_data[i].size = VMALLOC_SIZE;
+ if (ioctl_vfree(dev_fd, &vm_data[i]) < 0) {
+ pr_info("vfree2 %d failed, errno is %d", i, errno);
+ } else {
+ vfree_count++;
+ pr_info("vfree2 %d finished.", i);
+ }
+ }
+
+ pr_info("vmalloc %d times, vfree %d times.", vmalloc_count, vfree_count);
+ vmalloc_count = 0;
+ vfree_count = 0;
+
+ return ret;
+}
+
+// no process joins a group
+static int testcase1(void)
+{
+ return testcase_routing(SPG_ID_DEFAULT, 'a');
+}
+
+// each process joins a different group
+static int testcase2(void)
+{
+ return testcase_routing(SPG_ID_DEFAULT, 'b');
+}
+
+// all processes join the same group
+static int testcase3(void)
+{
+ return testcase_routing(100, 'c');
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "k2u and kill, no process joins a group")
+	TESTCASE_CHILD(testcase2, "k2u and kill, each process joins a different group")
+	TESTCASE_CHILD(testcase3, "k2u and kill, all processes join the same group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_unshare.c b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_unshare.c
new file mode 100644
index 000000000000..85ad9ce5af01
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_k2u_unshare.c
@@ -0,0 +1,188 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 18:27:26 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+#define pr_info(fmt, args...) \
+ printf("[file:%s, func:%s, line:%d] " fmt "\n", __FILE__, __func__, __LINE__, ##args)
+
+static int dev_fd;
+
+/*
+ * After memory has been shared, free it before stopping the sharing.
+ * testcase1: after vmalloc and k2task, vfree without unshare is expected to
+ * fail. Kill the process and vfree again; this is expected to succeed.
+ */
+
+static int testcase1_child(struct vmalloc_info ka_info)
+{
+ int ret = 0;
+
+ while (1);
+
+ return ret;
+}
+
+/* testcase1: send signal 9 (SIGKILL) to the process */
+static int testcase1(void)
+{
+ int ret;
+ int pid;
+ int status;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ pid = fork();
+	if (pid < 0) {
+		pr_info("fork failed");
+		ioctl_vfree(dev_fd, &ka_info);
+		return -1;
+	} else if (pid == 0)
+ exit(testcase1_child(ka_info));
+ else {
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = pid,
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+		// test point 1: vfree without unshare(); expected to fail with a warning printed
+ printf("test point 1: vfree no unshare. ----------\n");
+ ioctl_vfree(dev_fd, &ka_info);
+
+		// test point 2: kill the process with SIGKILL, then vfree; expected to succeed, no warning
+ printf("test point 2: vfree after kill process. ----------\n");
+ kill(pid, SIGKILL);
+ waitpid(pid, &status, 0);
+
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+/* testcase2: send signal 2 (SIGINT) to the process */
+static int testcase2(void)
+{
+ int ret;
+ int pid;
+ int status;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ pid = fork();
+	if (pid < 0) {
+		pr_info("fork failed");
+		ioctl_vfree(dev_fd, &ka_info);
+		return -1;
+	} else if (pid == 0)
+ exit(testcase1_child(ka_info));
+ else {
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = pid,
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+		// test point 1: vfree without unshare(); expected to fail with a warning printed
+ printf("test point 1: vfree no unshare. ----------\n");
+ ioctl_vfree(dev_fd, &ka_info);
+
+		// test point 2: kill the process with SIGINT, then vfree; expected to succeed, no warning
+ printf("test point 2: vfree after kill process. ----------\n");
+ kill(pid, SIGINT);
+ waitpid(pid, &status, 0);
+
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "after k2task: test point 1, vfree without unshare(), expected to fail with a warning; test point 2, kill the process with SIGKILL then vfree, expected to succeed with no warning")
+	TESTCASE_CHILD(testcase2, "after k2task: test point 1, vfree without unshare(), expected to fail with a warning; test point 2, kill the process with SIGINT then vfree, expected to succeed with no warning")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_malloc_u2k.c b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_malloc_u2k.c
new file mode 100644
index 000000000000..168ad1139648
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_malloc_u2k.c
@@ -0,0 +1,187 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Dec 04 17:20:10 2020
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+#include <fcntl.h> /* For O_* constants */
+
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: user space calls malloc, then u2k. Expected to succeed.
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ int psize = getpagesize();
+ char *user_addr = malloc(1000 * psize);
+ if (user_addr == NULL) {
+ pr_info("malloc failed, errno: %d", errno);
+ return -1;
+ }
+ memset((void *)user_addr, 'q', 1000 * psize);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = (unsigned long)user_addr,
+ .size = 1000 * psize,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'q',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ free(user_addr);
+ return ret;
+}
+
+#define TEST2_MEM_SIZE (64 * 1024 * 1024) // 64MB
+char *test2_p;
+
+static void *testcase2_thread(void *arg)
+{
+ int ret = 0;
+ sem_t *sync = (sem_t *)arg;
+ struct sp_make_share_info u2k_info = {
+		// malloc rarely returns a page-aligned address, so the mapping is deliberately one page short
+ .size = 3 * PAGE_SIZE,
+ .pid = getpid(),
+ };
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (unsigned long i = 0; i < TEST2_MEM_SIZE / PAGE_SIZE / 4; i++) {
+ // we expect page migration may happen here
+ test2_p[i * 4 * PAGE_SIZE] = 'b';
+
+ u2k_info.uva = (unsigned long)(&test2_p[i * 4 * PAGE_SIZE]);
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ pthread_exit((void *)1);
+ }
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ pthread_exit((void *)1);
+ }
+ }
+
+ pthread_exit((void *)0);
+}
+
+/*
+ * Try to trigger page migration. CPU pinning must match the NUMA topology;
+ * running in QEMU is recommended.
+ */
+static int testcase2(void)
+{
+ int ret = 0;
+ pthread_t tid, self;
+ char *sync_name = "/testcase2_sync";
+
+	sem_t *sync = sem_open(sync_name, O_CREAT, 0600, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ self = pthread_self();
+ cpu_set_t cpuset;
+ CPU_ZERO(&cpuset);
+	CPU_SET(0, &cpuset); // to check: the NUMA node of CPU 0
+ ret = pthread_setaffinity_np(self, sizeof(cpu_set_t), &cpuset);
+ if (ret < 0) {
+ pr_info("set cpu affinity for main thread failed %d", ret);
+ return -1;
+ }
+
+ ret = pthread_create(&tid, NULL, testcase2_thread, sync);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ return -1;
+ }
+
+ CPU_ZERO(&cpuset);
+	CPU_SET(5, &cpuset); // to check: the NUMA node of CPU 5
+ ret = pthread_setaffinity_np(tid, sizeof(cpu_set_t), &cpuset);
+ if (ret < 0) {
+ pr_info("set cpu affinity for test thread failed %d", ret);
+ return -1;
+ }
+
+	test2_p = malloc(TEST2_MEM_SIZE);
+	if (test2_p == NULL) {
+		pr_info("malloc failed");
+		return -1;
+	}
+	// touch every page once
+ for (unsigned int i = 0; i < TEST2_MEM_SIZE / PAGE_SIZE; i++) {
+ test2_p[i * PAGE_SIZE] = 'a';
+ }
+
+ sem_post(sync);
+
+ void *thread_ret;
+ ret = pthread_join(tid, &thread_ret);
+ if (ret != 0) {
+ pr_info("can't join thread %d", ret);
+ return -1;
+ }
+	if ((long)thread_ret != 0) {
+		pr_info("join thread failed %ld", (long)thread_ret);
+ return -1;
+ }
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "user space calls malloc, then u2k; expected to succeed")
+	TESTCASE_CHILD(testcase2, "try to trigger page migration; CPU pinning must match the NUMA topology; running in QEMU is recommended")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_u2k_and_kill.c b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_u2k_and_kill.c
new file mode 100644
index 000000000000..f6022aa59bd0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/k2u_u2k/test_u2k_and_kill.c
@@ -0,0 +1,155 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 08 01:48:57 2020
+ */
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * Abnormal case guarding the u2k interface. It will trigger OOM, so run
+ * it alone; the test passes as long as the kernel does not crash.
+ *
+ * Multiple processes, each with multiple threads, run u2k concurrently,
+ * then processes are killed at random. Memory leaks are possible.
+ */
+#define CHILD_NUM 5
+#define THREAD_PER_PROCESS 10
+#define KILL_NUM 1000
+
+static void *child_thread(void *arg)
+{
+ int ret;
+ struct sp_make_share_info u2k_info = *(struct sp_make_share_info *)arg;
+ while (1) {
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+			return (void *)(long)ret;
+ } else {
+ //pr_info("ioctl_u2k success");
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+			return (void *)(long)ret;
+ }
+ }
+
+ return NULL;
+}
+
+static int child_process(void)
+{
+ int ret;
+ int psize = getpagesize();
+ char *user_addr = malloc(psize);
+ if (user_addr == NULL) {
+ pr_info("malloc failed, errno: %d", errno);
+ return -1;
+ }
+ memset((void *)user_addr, 'q', psize);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = (unsigned long)user_addr,
+ .size = psize,
+ .pid = getpid(),
+ };
+
+ pthread_t threads[THREAD_PER_PROCESS] = {0};
+ for (int i = 0; i < THREAD_PER_PROCESS; i++) {
+ ret = pthread_create(threads + i, NULL, child_thread, &u2k_info);
+		if (ret != 0) {
+ pr_info("pthread_create failed, err:%d", ret);
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < THREAD_PER_PROCESS; i++)
+ if (threads[i])
+ pthread_join(threads[i], NULL);
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ pid_t child[CHILD_NUM] = {0};
+
+ for (int i = 0; i < CHILD_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ ret = -1;
+ goto out;
+ } else if (pid == 0) {
+ exit(child_process());
+ }
+ child[i] = pid;
+ }
+
+	unsigned int seed = time(NULL);
+	srand(seed);
+	pr_info("rand seed: %u", seed);
+
+ int count = 0;
+ for (int i = 0; i < KILL_NUM; i++) {
+ int idx = rand() % CHILD_NUM;
+ if (child[idx] > 0) {
+ kill(child[idx], SIGKILL);
+ waitpid(child[idx], NULL, 0);
+
+ pr_info("count: %d, kill child: %d, pid: %d", ++count, idx, child[idx]);
+
+ child[idx] = 0;
+ } else {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ ret = -1;
+ goto out;
+ } else if (pid == 0) {
+ exit(child_process());
+ }
+ child[idx] = pid;
+ pr_info("fork child: %d, pid: %d", idx, child[idx]);
+ }
+// sleep(1);
+ }
+
+out:
+ for (int i = 0; i < CHILD_NUM; i++)
+ if (child[i] > 0)
+ kill(child[i], SIGKILL);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Multiple processes, each with multiple threads, run u2k concurrently, then processes are killed at random")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/kthread/Makefile b/tools/testing/sharepool/testcase/reliability_test/kthread/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/kthread/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/kthread/test_add_strange_task.c b/tools/testing/sharepool/testcase/reliability_test/kthread/test_add_strange_task.c
new file mode 100644
index 000000000000..e97ae46a2e81
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/kthread/test_add_strange_task.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 02:28:23 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Add a given process to a given group.
+ */
+
+int main(int argc, char *argv[])
+{
+ if (argc != 3) {
+ printf("Usage:\n"
+		       "\t%s <pid> <group_id>\n", argv[0]);
+ return -1;
+ }
+
+ int dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ pid_t pid = atoi(argv[1]);
+ int group_id = atoi(argv[2]);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ printf("add task(pid%d) to group(%d) failed, err: %s\n", pid, group_id, strerror(errno));
+ return -1;
+ } else {
+ printf("add task(pid%d) to group(%d) success\n", pid, group_id);
+ return 0;
+ }
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/kthread/test_del_kthread.c b/tools/testing/sharepool/testcase/reliability_test/kthread/test_del_kthread.c
new file mode 100644
index 000000000000..649e716ed769
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/kthread/test_del_kthread.c
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 02:28:23 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Add the current process to a group, then try to remove the given
+ * process from that group.
+ */
+
+int main(int argc, char *argv[])
+{
+ int ret = 0;
+ if (argc != 3) {
+ printf("Usage:\n"
+		       "\t%s <pid> <group_id>\n", argv[0]);
+ return -1;
+ }
+
+ int dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ pid_t pid = atoi(argv[1]);
+ int group_id = atoi(argv[2]);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ printf("add task(pid%d) to group(%d) failed, err: %s\n", ag_info.pid, group_id, strerror(errno));
+ goto error_out;
+ }
+
+	// Try to remove the given process from the group
+ struct sp_del_from_group_info del_info = {
+ .pid = pid,
+ .spg_id = group_id,
+ };
+ ret = ioctl_del_from_group(dev_fd, &del_info);
+ if (ret < 0) {
+ printf("try delete task(pid%d) from group(%d) failed, err: %s\n", pid, group_id, strerror(errno));
+ goto error_out;
+ }
+
+ close_device(dev_fd);
+ return 0;
+
+error_out:
+ close_device(dev_fd);
+ return -1;
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/Makefile b/tools/testing/sharepool/testcase/reliability_test/others/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_judge_addr.c b/tools/testing/sharepool/testcase/reliability_test/others/test_judge_addr.c
new file mode 100644
index 000000000000..cdb168a0cddb
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_judge_addr.c
@@ -0,0 +1,104 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 08 20:38:39 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: look up the kva returned by u2k with is_sharepool_addr;
+ * expect false.
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, u2k_info.addr)) {
+ pr_info("invalid address as expected, errno: %d", errno);
+ } else {
+ pr_info("valid address unexpected");
+ return -1;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Look up the kva returned by u2k with is_sharepool_addr; expect false")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_kill_sp_process.c b/tools/testing/sharepool/testcase/reliability_test/others/test_kill_sp_process.c
new file mode 100644
index 000000000000..37093689a59a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_kill_sp_process.c
@@ -0,0 +1,430 @@
+#include "sharepool_lib.h"
+#include "sem_use.h"
+#include <pthread.h>
+#include <stdlib.h>
+
+#define ALLOC_TEST_TYPES 4
+
+#define PROCESS_NUM 5
+#define GROUP_NUM 2
+#define KILL_TIME 1000
+
+static int group_ids[GROUP_NUM];
+static int semid;
+static struct vmalloc_info vmalloc_infos[PROCESS_NUM][GROUP_NUM] = {0};
+static struct vmalloc_info vmalloc_huge_infos[PROCESS_NUM][GROUP_NUM] = {0};
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else {
+ for (int i = 0; i < GROUP_NUM; i++)
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ }
+
+ return ret;
+}
+
+/*
+ * alloc - u2k - vmalloc - k2u - unshare - vfree - unshare - free
+ * Cover huge pages / huge-page DVPP / normal pages / normal-page DVPP.
+ * Multiple groups: exercise every group (query the groups the process is in).
+ */
+static int thread_and_process_helper(int arg)
+{
+ pr_info("thread_and_process_helper start.");
+ int ret, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TEST_TYPES] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TEST_TYPES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ char *addr;
+
+ int process_index = arg / 100;
+ int group_index = arg % 100;
+ int group_id = group_ids[group_index];
+
+ /* check sp group */
+ pid = getpid();
+
+	// Huge pages
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// Huge pages, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// Normal pages, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// Normal pages
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ //pr_info("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_infos[process_index][group_index].size = 3 * PAGE_SIZE;
+ vmalloc_huge_infos[process_index][group_index].size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &(vmalloc_infos[process_index][group_index]));
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &(vmalloc_huge_infos[process_index][group_index]));
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_infos[process_index][group_index].addr;
+ k2u_info.size = vmalloc_infos[process_index][group_index].size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_infos[process_index][group_index].addr;
+ k2u_huge_info.size = vmalloc_huge_infos[process_index][group_index].size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+ //pr_info("check vmalloc memory succeess\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+ //pr_info("check vmalloc_hugepage memory succeess\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &(vmalloc_infos[process_index][group_index]));
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &(vmalloc_huge_infos[process_index][group_index]));
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TEST_TYPES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ //close_device(dev_fd);
+ return 0;
+
+error:
+ //close_device(dev_fd);
+ return -1;
+}
+
+void *thread(void *arg)
+{
+ int ret = 0;
+ int count = 0;
+ while (1) {
+		ret = thread_and_process_helper((int)(long)arg);
+		pr_info("thread finished, count: %d", ++count);
+		if (ret < 0) {
+			pr_info("thread_and_process_helper failed, arg %d.", (int)(long)arg);
+			return (void *)-1L;
+		}
+ }
+ sem_inc_by_one(semid);
+ int sem_val = sem_get_value(semid);
+ pr_info("thread run %d times, %d left.", sem_val, KILL_TIME - sem_val);
+ }
+
+	return (void *)(long)ret;
+}
+
+static int process(int index)
+{
+ int ret = 0;
+
+	// Create one thread per group; each loops running the helper
+ pthread_t threads[GROUP_NUM];
+ for (int i = 0; i < GROUP_NUM; i++) {
+		ret = pthread_create(threads + i, NULL, thread, (void *)(long)(index * 100 + i));
+		if (ret != 0) {
+ pr_info("pthread %d create failed.", i);
+ return -1;
+ }
+ }
+
+ for (int i = 0; i < GROUP_NUM; i++) {
+ void *tret;
+ ret = pthread_join(threads[i], &tret);
+		if (ret != 0) {
+ pr_info("pthread %d join failed.", i);
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", i);
+ ret = -1;
+ }
+ }
+
+ return ret;
+}
+
+static int vfreeAll()
+{
+ int ret = 0;
+ for (int i = 0; i < PROCESS_NUM; i++) {
+ for (int j = 0; j < GROUP_NUM; j++) {
+ ioctl_vfree(dev_fd, &(vmalloc_infos[i][j]));
+ ioctl_vfree(dev_fd, &(vmalloc_huge_infos[i][j]));
+ }
+ }
+ return ret;
+}
+
+/*
+ * testcase1: create N processes, add each to all groups, and spawn M
+ * threads running sharepool tasks (u2k/k2u, ...); then kill all the
+ * processes concurrently and expect a clean finish.
+ */
+static int testcase1(void)
+{
+	int ret = 0;
+
+ semid = sem_create(1234, "kill all after N times api calls");
+ sem_set_value(semid, 0);
+
+	// Create N processes and add each to all groups
+ pid_t childs[PROCESS_NUM];
+ for (int i = 0; i < PROCESS_NUM; i++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+			if (add_multi_group())
+				exit(-1);
+			if (check_multi_group())
+				exit(-1);
+ pr_info("%s add %dth child to all groups success, %d left",
+ __FUNCTION__, i + 1, PROCESS_NUM - i - 1);
+ exit(process(i));
+ } else {
+ childs[i] = pid_child;
+ pr_info("fork child%d, pid: %d", i, pid_child);
+ }
+ }
+
+	// Once the counter reaches KILL_TIME, kill all processes concurrently
+ sem_dec_by_val(semid, KILL_TIME);
+ pr_info("start to kill all process...");
+ for (int i = 0; i < PROCESS_NUM; i++) {
+ kill(childs[i], SIGKILL);
+ }
+
+ // waitpid
+ for (int i = 0; i < PROCESS_NUM; i++) {
+ int status;
+ waitpid(childs[i], &status, 0);
+		if (WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL) {
+			pr_info("child%d killed as expected, %d processes left", i, PROCESS_NUM - i - 1);
+			ret = 0;
+		} else if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_info("child%d test failed, status: %d", i, status);
+			ret = -1;
+		} else {
+			pr_info("child%d test success, status: %d, %d processes left", i, status, PROCESS_NUM - i - 1);
+			ret = 0;
+		}
+ childs[i] = 0;
+ }
+
+ // vfree the vmalloc memories
+ if (vfreeAll() < 0) {
+ pr_info("not all are vfreed.");
+ }
+
+ sem_close(semid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Create N processes, add each to all groups, spawn M threads running sharepool tasks (u2k/k2u, ...); kill all processes concurrently and expect a clean finish.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_kthread.c b/tools/testing/sharepool/testcase/reliability_test/others/test_kthread.c
new file mode 100644
index 000000000000..a7d0b5dc0d73
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_kthread.c
@@ -0,0 +1,195 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 08 20:38:39 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ int ret;
+ struct sp_kthread_info info;
+ ret = ioctl_kthread_start(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread start failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_kthread_end(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread end failed, errno: %d", errno);
+ return ret;
+ }
+
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret;
+ int spg_id = 1;
+ void *addr;
+ unsigned long va;
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("add group failed");
+ return -1;
+ }
+
+ addr = wrap_sp_alloc(spg_id, 4096, 0);
+ if (addr == (void *)-1) {
+ pr_info("alloc failed");
+ return -1;
+ }
+ va = (unsigned long)addr;
+
+ struct sp_kthread_info info = {
+ .type = 1,
+ .addr = va,
+ .size = 4096,
+ .spg_id = spg_id,
+ };
+
+ ret = ioctl_kthread_start(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread start failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_kthread_end(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread end failed, errno: %d", errno);
+ return ret;
+ }
+
+ return ret;
+}
+
+static int prepare(struct vmalloc_info *ka_info, bool ishuge)
+{
+ int ret;
+ if (ishuge) {
+ ret = ioctl_vmalloc_hugepage(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc_hugepage failed, errno: %d", errno);
+ return -1;
+ }
+ } else {
+ ret = ioctl_vmalloc(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = ka_info->addr,
+ .size = ka_info->size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, ka_info);
+ }
+ return ret;
+}
+
+static int testcase3(void)
+{
+ int ret;
+ int spg_id = 1;
+ unsigned long flag = 0;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = spg_id,
+ .sp_flags = flag,
+ .pid = getpid(),
+ };
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0)
+ return -1;
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+		pr_info("testcase ioctl_k2u failed unexpectedly, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return -1;
+ } else {
+		pr_info("testcase3 ioctl_k2u success as expected");
+ }
+
+ struct sp_kthread_info info = {
+ .type = 2,
+ .addr = k2u_info.addr,
+ .size = k2u_info.size,
+ .spg_id = spg_id,
+ };
+
+ ret = ioctl_kthread_start(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread start failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ sleep(2);
+
+ ret = ioctl_kthread_end(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("kthread end failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Kernel thread invocation")
+	TESTCASE_CHILD(testcase2, "Kernel thread with sp_free")
+	TESTCASE_CHILD(testcase3, "Kernel thread with sp_unshare")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_mmap_sp_address.c b/tools/testing/sharepool/testcase/reliability_test/others/test_mmap_sp_address.c
new file mode 100644
index 000000000000..01416b647cbd
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_mmap_sp_address.c
@@ -0,0 +1,223 @@
+#include <stdio.h>
+#include <unistd.h>
+#include <errno.h>
+#include <stdbool.h>
+#include <stdlib.h>
+#include "sharepool_lib.h"
+
+#define start_addr 0xe80000000000UL
+#define end_addr 0xf80000000000UL
+
+static int try_mmap(void *addr, unsigned long size)
+{
+ int *result;
+ int ret = 0;
+ result = mmap(addr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
+	if (result == MAP_FAILED) {
+		printf("mmap failed as expected, errno is %d\n", errno);
+		ret = 0;
+	} else {
+		printf("mmap success unexpected, addr is %lx\n", (unsigned long)result);
+		ret = -1;
+	}
+	return ret;
+}
+
+/* testcase1: try to mmap() an unused sharepool address; expect failure */
+static int testcase1(void)
+{
+ int ret;
+
+	void *addr = (void *)start_addr;
+
+ int *result = mmap(addr, sizeof(int), PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
+ if (result == MAP_FAILED) {
+ printf("mmap failed as expected, errno is %d\n", errno);
+ ret = 0;
+	} else {
+ printf("mmap success unexpected, addr is %lx\n", (unsigned long)result);
+ ret = -1;
+ }
+
+ return ret;
+}
+
+/* testcase2: try to mmap() a sharepool address that is in use; expect failure */
+static int testcase2(void)
+{
+ int *result;
+ int ret;
+ int spg_id = 1;
+
+	// Join the group, allocate memory, then mmap the returned address
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0)
+ return -1;
+
+ unsigned long addr;
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc first address is %lx", addr);
+
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc second address is %lx", addr);
+
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+ pr_info("sp_alloc third address is %lx", addr);
+
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+ pr_info("sp_alloc fourth address is %lx", addr);
+
+	result = mmap((void *)addr, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
+ if (result == MAP_FAILED) {
+ printf("mmap failed as expected, errno is %d\n", errno);
+ ret = 0;
+	} else {
+ printf("mmap success unexpected, addr is %lx\n", (unsigned long)result);
+ ret = -1;
+ }
+
+ return ret;
+}
+
+/* testcase3: join a group, allocate memory, munmap the returned address, then sp_free */
+static int testcase3(void)
+{
+ int ret;
+ int spg_id = 1;
+
+
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0)
+ return -1;
+
+ unsigned long addr1;
+ addr1 = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc small page is %lx", addr1);
+
+	// Try to munmap the small page
+	ret = munmap((void *)addr1, PAGE_SIZE);
+ if (ret < 0)
+ pr_info("munmap failed");
+ else
+ pr_info("munmap success");
+
+
+	// Try to munmap the huge page
+ unsigned long addr2;
+ addr2 = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+ pr_info("sp_alloc hugepage is %lx", addr2);
+
+	ret = munmap((void *)addr2, PMD_SIZE);
+ if (ret < 0) {
+ pr_info("munmap hugepage failed");
+ ret = 0;
+	} else {
+ pr_info("munmap hugepage success.");
+ return -1;
+ }
+
+	// Then sp_free it
+ ret = wrap_sp_free(addr2);
+ if (ret < 0) {
+ pr_info("sp_free hugepage failed.");
+ } else {
+ pr_info("sp_free hugepage success.");
+ }
+
+ return ret;
+}
+
+/* testcase4: join a group, allocate memory, mmap and munmap the returned address, then sp_free. */
+static int testcase4(void)
+{
+ int *result;
+ int ret;
+ int spg_id = 1;
+
+ // alloc
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0)
+ return -1;
+
+ unsigned long addr1;
+ addr1 = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc addr1 is %lx", addr1);
+
+ // mmap & munmap
+	result = mmap((void *)addr1, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
+ if (result == MAP_FAILED) {
+ printf("mmap addr1 failed as expected, errno is %d\n", errno);
+ ret = 0;
+	} else {
+ printf("mmap addr1 success unexpected, addr is %lx\n", (unsigned long)result);
+		ret = munmap((void *)addr1, PAGE_SIZE);
+ if (ret < 0)
+ pr_info("munmap after mmap failed");
+ else
+ pr_info("munmap after mmap success");
+ }
+
+	// Then free it
+ ret = wrap_sp_free(addr1);
+ if (ret < 0) {
+ pr_info("sp_free addr1 failed.");
+ } else {
+ pr_info("sp_free addr1 success.");
+ }
+
+ return ret;
+}
+
+/* testcase5: interleave alloc and mmap */
+static int testcase5(void)
+{
+ int ret;
+ int spg_id = 1;
+
+	// Join the group, allocate memory, then mmap the returned address
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id) < 0)
+ return -1;
+
+ unsigned long addr;
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc first address is %lx", addr);
+	ret = try_mmap((void *)addr, PAGE_SIZE);
+
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PAGE_SIZE, 0);
+ pr_info("sp_alloc second address is %lx", addr);
+	ret = try_mmap((void *)addr, PAGE_SIZE);
+
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+ pr_info("sp_alloc third address is %lx", addr);
+	ret = try_mmap((void *)addr, PMD_SIZE);
+
+ addr = (unsigned long)wrap_sp_alloc(spg_id, PMD_SIZE, 0);
+ pr_info("sp_alloc fourth address is %lx", addr);
+	ret = try_mmap((void *)addr, PMD_SIZE);
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Try to mmap() an unused sharepool address; expect failure")
+	TESTCASE_CHILD(testcase2, "Try to mmap() a sharepool address that is in use; expect failure")
+	TESTCASE_CHILD(testcase3, "Join a group, allocate, munmap the returned address (expect failure), then sp_free (expect success)")
+	TESTCASE_CHILD(testcase4, "Join a group, allocate, mmap and munmap the returned address (expect failure), then sp_free (expect success)")
+	TESTCASE_CHILD(testcase5, "Interleave alloc and mmap")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/others/test_notifier_block.c b/tools/testing/sharepool/testcase/reliability_test/others/test_notifier_block.c
new file mode 100644
index 000000000000..856142a15a85
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/others/test_notifier_block.c
@@ -0,0 +1,101 @@
+#include <stdlib.h>
+#include <errno.h>
+#include <string.h>
+#include "sharepool_lib.h"
+
+#define PROC_NUM 4
+#define GROUP_ID 1
+
+int register_notifier_block(int id)
+{
+ int ret = 0;
+
+ /* see sharepool_dev.c for more details*/
+ struct sp_notifier_block_info notifier_info = {
+ .i = id,
+ };
+ ret = ioctl_register_notifier_block(dev_fd, ¬ifier_info);
+ if (ret != 0)
+ pr_info("proc %d register notifier block %d failed. ret is %d. errno is %s.",
+ getpid(), id, ret, strerror(errno));
+ else
+ pr_info("proc %d register notifier for func %d success!!", getpid(), id);
+
+ return ret;
+}
+
+int unregister_notifier_block(int id)
+{
+ int ret = 0;
+
+ /* see sharepool_dev.c for more details*/
+ struct sp_notifier_block_info notifier_info = {
+ .i = id,
+ };
+ ret = ioctl_unregister_notifier_block(dev_fd, ¬ifier_info);
+ if (ret != 0)
+ pr_info("proc %d unregister notifier block %d failed. ret is %d. errno is %s.",
+ getpid(), id, ret, strerror(errno));
+ else
+ pr_info("proc %d unregister notifier for func %d success!!", getpid(), id);
+
+ return ret;
+}
+
+static int testcase1(void)
+{
+ pid_t childs[PROC_NUM];
+
+ register_notifier_block(1);
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ if (wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1) < 0) {
+ pr_info("add group failed.");
+ return -1;
+ }
+			while (1) {
+				/* spin until killed by the parent */
+			}
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(2);
+
+ for (int i = 0; i < PROC_NUM / 2; i++) {
+ pr_info("group %d is exiting...", i + 1);
+ KILL_CHILD(childs[i]);
+ }
+
+ unregister_notifier_block(1);
+ sleep(2);
+
+ for (int i = PROC_NUM / 2; i < PROC_NUM; i++) {
+ pr_info("group %d is exiting...", i + 1);
+ KILL_CHILD(childs[i]);
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Verify the effect of registering a group-destroy notifier block")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/reliability_test.sh b/tools/testing/sharepool/testcase/reliability_test/reliability_test.sh
new file mode 100755
index 000000000000..7d47a35cfd0d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/reliability_test.sh
@@ -0,0 +1,51 @@
+#!/bin/sh
+
+set -x
+
+for line in test_add_group1 \
+ test_unshare1 \
+ test_unshare2 \
+ test_unshare3 \
+ test_unshare4 \
+ test_unshare5 \
+ test_unshare6 \
+ test_unshare7 \
+ test_malloc_u2k \
+ test_judge_addr
+do
+ ./reliability_test/$line
+	if [ $? -ne 0 ]; then
+ echo "testcase reliability_test/$line failed"
+ exit 1
+ fi
+done
+
+# Abnormal case guarding the u2k interface: it can trigger OOM, so run it alone; pass if the kernel does not crash
+./reliability_test/test_u2k_and_kill
+
+# Abnormal case guarding the k2u interface: it can trigger OOM, so run it alone; pass if the kernel does not crash
+./reliability_test/test_k2u_and_kill
+
+# Add the current shell to group 11; pass if no crash
+./reliability_test/test_add_strange_task $$ 11
+# Add a kthread to group 99999; pass if no crash
+./reliability_test/test_add_strange_task 2 99999
+# Add the init process to group 11; pass if no crash
+#./reliability_test/test_add_strange_task 1 11
+# Add a daemon process to group 11; pass if no crash (in ps output, names wrapped in square brackets are daemon processes)
+#./reliability_test/test_add_strange_task 2 11
+
+# While the coredump program runs, other processes in the same group perform basic share pool operations.
+# Instead of writing new code, quickly reuse existing testcases, but note:
+# 1. check that the shared group ids of the two testcases are equal
+# 2. trigger test_coredump's coredump before the background testcase finishes
+./test_mult_process/test_proc_interface_process 1 &
+./reliability_test/test_coredump
+
+# Construct external fragmentation; this testcase is not run routinely
+# 120000 * 4K ~= 500MB. Please kill one of the process after allocation is done.
+#./reliability_test/test_external_fragmentation 100000 & ./reliability_test/test_external_fragmentation 100000 &
+#echo "now sleep 20 sec, please kill one of the process above"
+#sleep 20
+#./reliability_test/test_external_fragmentation_trigger
+
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_add_group/Makefile b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_exiting_task.c b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_exiting_task.c
new file mode 100644
index 000000000000..a72eceffa38c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_exiting_task.c
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 28 02:21:42 2020
+ */
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ int ret = 0;
+
+ srand((unsigned int)time(NULL));
+ for (int i = 0; i < 10000; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ if (!(i % 100))
+ pr_info("child process %d", i);
+ exit(0);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = rand() % (SPG_ID_AUTO_MIN - 1) + 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d, pid: %d, spg_id: %d",
+ errno, pid, ag_info.spg_id);
+ }
+ waitpid(pid, NULL, 0);
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Add a task that is exiting to a group.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_group1.c b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_group1.c
new file mode 100644
index 000000000000..fb64b0c379c9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_group1.c
@@ -0,0 +1,118 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 15:24:35 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_CHILD 100
+#define CHILD_EXIT 5
+
+static int group_id = SPG_ID_MAX;
+/*
+ * Multiple processes join the specified group concurrently; all of them are expected to succeed.
+ */
+
+static int testcase1(void)
+{
+ pid_t child_pid;
+ int status;
+ int ret;
+	int child_result = 0;
+
+ pr_info("start test: child num = %d", MAX_CHILD);
+ for (int i = 0; i < MAX_CHILD; i++) {
+ child_pid = fork();
+		if (child_pid == -1 || child_pid == 0) { // ensure only the parent forks
+ break;
+ }
+ }
+ if (child_pid == -1) {
+ pr_info("fork failed");
+ } else if (child_pid == 0) {
+ int child_ret = 0;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+
+ child_ret = ioctl_add_group(dev_fd, &ag_info);
+ sleep(10);
+ if (child_ret == 0) {
+ exit(EXIT_SUCCESS);
+ } else if (child_ret < 0 && errno == EPERM) {
+ exit(CHILD_EXIT);
+ } else {
+ pr_info("ioctl_add_group failed unexpected: errno = %d", errno);
+ exit(EXIT_FAILURE);
+ }
+
+ } else {
+ for (int i = 0; i < MAX_CHILD; i++) {
+ ret = waitpid(-1, &status, 0);
+ if (WIFEXITED(status)) {
+				pr_info("child exited, exit status: %d, pid = %d", WEXITSTATUS(status), ret);
+ if (WEXITSTATUS(status) == CHILD_EXIT) {
+ child_result++;
+ }
+ } else if (WIFSIGNALED(status)) {
+				pr_info("child failed: killed by signal %d, pid = %d", WTERMSIG(status), ret);
+ ret = -1;
+ goto error_out;
+ } else if (WIFSTOPPED(status)) {
+				printf("child failed: stopped by signal %d, pid = %d\n", WSTOPSIG(status), ret);
+ ret = -1;
+ goto error_out;
+ } else {
+				printf("child failed: WIFEXITED(status) = %d, pid = %d\n", WIFEXITED(status), ret);
+ ret = -1;
+ goto error_out;
+ }
+ }
+ if (child_result == 0) {
+ pr_info("testcase success!!");
+ ret = 0;
+ } else {
+		pr_info("testcase failed!! %d children failed to join the group", child_result);
+ ret = -1;
+ goto error_out;
+ }
+ }
+
+error_out:
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Multiple processes join the specified group concurrently; all are expected to succeed.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_strange_task.c b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_strange_task.c
new file mode 100644
index 000000000000..e97ae46a2e81
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_add_group/test_add_strange_task.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 16 02:28:23 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Add a given process to a given group.
+ */
+
+int main(int argc, char *argv[])
+{
+ if (argc != 3) {
+ printf("Usage:\n"
+		       "\t%s <pid> <group_id>\n", argv[0]);
+ return -1;
+ }
+
+ int dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ pid_t pid = atoi(argv[1]);
+ int group_id = atoi(argv[2]);
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ printf("add task(pid%d) to group(%d) failed, err: %s\n", pid, group_id, strerror(errno));
+ return -1;
+ } else {
+ printf("add task(pid%d) to group(%d) success\n", pid, group_id);
+ return 0;
+ }
+}
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/Makefile b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/Makefile
new file mode 100644
index 000000000000..5370208dbd64
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/reliability_test
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare1.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare1.c
new file mode 100644
index 000000000000..08d14a6c2d95
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare1.c
@@ -0,0 +1,325 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 18:27:26 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * After sharing memory, free the memory first and then stop the sharing.
+ * testcase1: after vmalloc and k2task, vfree the kva directly (expected to succeed), then unshare the uva (expected to succeed).
+ * testcase2: after vmalloc and k2spg, vfree the kva directly (expected to fail), then unshare the uva (expected to succeed).
+ * testcase3: the parent vmallocs, the child joins a group, the parent does k2u to the child's group, the child exits and the group is destroyed, then the parent vfrees (succeeds, no errors).
+ * testcase4: after sp_alloc and u2k, sp_free the uva directly (expected to succeed), then unshare the kva (expected to succeed).
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ struct vmalloc_info vmalloc_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+	ret = ioctl_vfree(dev_fd, &vmalloc_info); /* expected to fail */
+ if (ret < 0) {
+ pr_info("ioctl vfree failed for the first time, errno: %d", errno);
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ }
+	ret = ioctl_vfree(dev_fd, &vmalloc_info); /* expected to succeed */
+ if (ret < 0) {
+ pr_info("ioctl vfree failed for the second time, errno: %d", errno);
+ }
+ return ret;
+}
+
+#define GROUP_ID 1
+static int testcase2(void)
+{
+ int ret;
+
+	/* k2u prepare */
+ struct vmalloc_info vmalloc_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2spg_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("process add group %d failed, errno: %d", GROUP_ID, errno);
+ } else {
+ pr_info("process %d add group %d success", getpid(), GROUP_ID);
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2spg_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ } else {
+ pr_info("process k2u success");
+ }
+
+	/* the vm_area has SP_FLAG set, so vfree should trigger a warning */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ //ret = sharepool_log("vfree without unshare");
+
+ ret = ioctl_unshare(dev_fd, &k2spg_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ }
+ //ret = sharepool_log("after unshare");
+
+	ioctl_vfree(dev_fd, &vmalloc_info); /* expected to succeed */
+ //ret = sharepool_log("vfree again");
+
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret;
+ sem_t *sem_addgroup, *sem_k2u;
+ sem_addgroup = sem_open("/child_process_add_group_finish", O_CREAT, O_RDWR, 0);
+ sem_k2u = sem_open("/child_process_k2u_finish", O_CREAT, O_RDWR, 0);
+
+	/* k2u prepare */
+ struct vmalloc_info vmalloc_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2spg_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ /* fork child */
+ pid_t child = fork();
+ if (child == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("child process add group %d failed, errno: %d", GROUP_ID, errno);
+ } else {
+ pr_info("child process add group %d success", GROUP_ID);
+ }
+ sem_post(sem_addgroup);
+		/* wait for the parent to share the kernel address with the child */
+ sem_wait(sem_k2u);
+ exit(0);
+ }
+ pr_info("parent process is %d, child process is %d", getpid(), child);
+
+ sem_wait(sem_addgroup);
+ /* k2u to child */
+ k2spg_info.pid = child;
+ ret = ioctl_k2u(dev_fd, &k2spg_info);
+ if (ret < 0) {
+		pr_info("ioctl_k2u failed as expected, errno: %d", errno);
+ ret = 0;
+ } else {
+		pr_info("parent process %d k2u unexpectedly succeeded.", getpid());
+ ret = -1;
+ }
+ sem_post(sem_k2u);
+
+ int status;
+ waitpid(child, &status, 0);
+ pr_info("child process %d exited.", child);
+
+	/* the vm_area has SP_FLAG set, so vfree should trigger a warning */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+
+ sem_unlink("/child_process_add_group_finish");
+ sem_unlink("/child_process_k2u_finish");
+ return ret;
+}
+
+static int testcase4(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ } else {
+ pr_info("process %d add group %d success.", getpid(), ag_info.spg_id);
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'q',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret == 0) {
+		pr_info("ioctl_free succeeded as expected, ret: %d, errno: %d", ret, errno);
+ } else {
+ pr_info("ioctl_free result unexpected, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ ioctl_unshare(dev_fd, &u2k_info);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "After vmalloc and k2task, vfree the kva directly (expected to succeed), then unshare the uva (expected to succeed)")
+	TESTCASE_CHILD(testcase2, "After vmalloc and k2spg, vfree the kva directly (expected to fail), then unshare the uva (expected to succeed)")
+	TESTCASE_CHILD(testcase3, "The parent vmallocs, the child joins a group, the parent does k2u to the child's group, the child exits and the group is destroyed, then the parent vfrees (succeeds, no errors)")
+	TESTCASE_CHILD(testcase4, "After sp_alloc and u2k, sp_free the uva directly (expected to succeed), then unshare the kva (expected to succeed)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare2.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare2.c
new file mode 100644
index 000000000000..fc1eeb623f49
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare2.c
@@ -0,0 +1,202 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 01 09:10:50 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: after process A allocates and shares memory, process B in a different group stops the sharing (B knows A's pid and spg_id).
+ * testcase2: after process A allocates memory, process B in a different group frees it.
+ */
+
+/*
+ * For a kva that process A obtained via u2k, there is no way to tell who a legitimate unshare caller is; if process B guesses the kva, its unshare of that kva will also succeed.
+ * This has always been a gap in the design. We assume that in production, userspace cannot obtain a kva and cannot call unshare on one; only driver code in kernel space can.
+ */
+
+#define TESTCASE2_MSG_KEY 20
+#define TESTCASE2_MSG_TYPE 200
+// pass a struct sp_alloc_info between processes via a message queue
+struct msgbuf_alloc_info {
+ long type;
+ struct sp_alloc_info alloc_info;
+};
+
+static int testcase2_child(sem_t *sync, sem_t *grandsync)
+{
+ int ret;
+
+ do {
+ ret = sem_wait(sync);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(TESTCASE2_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), TESTCASE2_MSG_TYPE, IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0 && errno == EPERM) {
+ pr_info("ioctl_free failed as expected, errno: %d", errno);
+ } else {
+		pr_info("ioctl_free unexpectedly succeeded, ret: %d", ret);
+ goto error_out;
+ }
+
+ sem_post(grandsync);
+ return 0;
+
+error_out:
+ sem_post(grandsync);
+ return -1;
+}
+
+static int testcase2(struct sp_alloc_info *alloc_info)
+{
+ int ret, status = 0;
+ int group_id = alloc_info->spg_id;
+
+ char *sync_name = "/testcase2_sync";
+ sem_t *sync = sem_open(sync_name, O_CREAT, O_RDWR, 0);
+ if (sync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(sync_name);
+
+ char *grandsync_name = "/testcase2_childsync";
+ sem_t *grandsync = sem_open(grandsync_name, O_CREAT, O_RDWR, 0);
+	if (grandsync == SEM_FAILED) {
+ pr_info("sem_open failed");
+ return -1;
+ }
+ sem_unlink(grandsync_name);
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase2_child(sync, grandsync));
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ int msgid = msgget(TESTCASE2_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ msgbuf.type = TESTCASE2_MSG_TYPE;
+ memcpy(&msgbuf.alloc_info, alloc_info, sizeof(*alloc_info));
+ ret = msgsnd(msgid, &msgbuf, sizeof(*alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ memset((void *)alloc_info->addr, 'a', alloc_info->size);
+
+ sem_post(sync);
+ ret = sem_wait(grandsync);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ } else {
+ goto out;
+ }
+
+error_out:
+ kill(pid, SIGKILL);
+out:
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child failed!!");
+ return -1;
+ } else
+ pr_info("child success!!");
+
+ if (ioctl_free(dev_fd, alloc_info) < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return -1;
+ }
+ return ret;
+}
+
+struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 100,
+};
+
+static int test2(void) { return testcase2(&alloc_info); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(test2, "After process A allocates memory, process B in a different group frees it")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare3.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare3.c
new file mode 100644
index 000000000000..9735b1e30d72
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare3.c
@@ -0,0 +1,243 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Dec 01 19:49:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * After memory is shared, the other party stops the sharing; the unshare is expected to fail.
+ * testcase1: after k2u, the userspace process stops the sharing (it calls unshare with the kernel vm_area address; expected to fail).
+ * testcase2: after k2u, the userspace process stops the sharing (it calls unshare with its own in-process vma address; expected to succeed).
+ * testcase3: after u2k, the kernel module stops the sharing.
+ */
+static int testcase2(void)
+{
+ int ret;
+
+ struct vmalloc_info kva_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &kva_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = kva_info.addr,
+ .size = kva_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = kva_info.addr,
+ .size = kva_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out;
+ }
+
+	unsigned long user_va = k2u_info.addr; // user vma
+	//k2u_info.addr = kva_info.addr; // kernel vm_area
+	ret = ioctl_unshare(dev_fd, &k2u_info); // the user process unshares its own vma (back to kernel); this should succeed
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed unexpected, errno: %d", errno);
+ ret = -1;
+ } else {
+ pr_info("ioctl_unshare succeed as expected");
+ ret = 0;
+ }
+
+ /*
+ k2u_info.addr = user_va;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ ret = -1;
+ goto out;
+ }
+ */
+
+out:
+ ioctl_vfree(dev_fd, &kva_info);
+ return ret;
+}
+
+static int testcase3(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ unsigned long kernel_va = u2k_info.addr;
+ u2k_info.addr = alloc_info.addr;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ //pr_info("ioctl_unshare failed as expected, errno: %d", errno);
+ ret = 0;
+ } else {
+ pr_info("ioctl_unshare result unexpected, ret: %d, errno: %d", ret, errno);
+ ret = -1;
+ }
+
+ /*
+ u2k_info.addr = kernel_va;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ */
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ ret = -1;
+ } else {
+ pr_info("u2k area freed.");
+ }
+ return ret;
+}
+
+static int testcase4(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ unsigned long kernel_va = u2k_info.addr;
+ //u2k_info.addr = alloc_info.addr;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed unexpected, errno: %d", errno);
+ ret = -1;
+ } else {
+		pr_info("ioctl_unshare succeeded as expected.");
+ ret = 0;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ ret = -1;
+ } else {
+ pr_info("u2k area freed.");
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+// TESTCASE_CHILD(testcase1, false)
+	TESTCASE_CHILD(testcase2, "After k2u, the userspace process stops the sharing (it calls unshare with its own in-process vma address; expected to succeed)")
+	TESTCASE_CHILD(testcase3, "After u2k, the kernel module stops the sharing")
+ TESTCASE_CHILD(testcase4, "TBD")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare4.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare4.c
new file mode 100644
index 000000000000..2edfe9ffbd06
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare4.c
@@ -0,0 +1,516 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Dec 02 09:38:57 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_PROC_PER_GRP 500
+#define CHILD_EXIT 5
+
+static int process_pre_group = 100;
+static sem_t *sync1[MAX_PROC_PER_GRP];
+static sem_t *grandsync1[MAX_PROC_PER_GRP];
+static sem_t *sync2[MAX_PROC_PER_GRP];
+static sem_t *grandsync2[MAX_PROC_PER_GRP];
+
+/*
+ * testcase1: after process A allocates and shares memory, every process in the group calls unshare concurrently; only one is expected to succeed.
+ * -> a nice idea, but sp_unshare_kva cannot guard against this concurrency.
+ * testcase2: after process A allocates memory, every process in the group calls free concurrently; only one is expected to succeed.
+ */
+#if 0
+#define TESTCASE1_MSG_KEY 10
+// pass a struct sp_make_share_info between processes via a message queue
+struct msgbuf_share_info {
+ long type;
+ struct sp_make_share_info u2k_info;
+};
+
+static int testcase1_child(int num)
+{
+ int ret;
+	/* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(sync1[num]);
+ } while (ret < 0 && errno == EINTR);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+	/* notify the parent that the group has been joined */
+ sem_post(grandsync1[num]);
+ if (group_id < 0) {
+		pr_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+	/* wait for the parent to share the memory */
+ do {
+ ret = sem_wait(sync1[num]);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(TESTCASE1_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_share_info msgbuf = {0};
+ struct sp_make_share_info *u2k_info = &msgbuf.u2k_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*u2k_info), (num + 1), IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ ret = ioctl_unshare(dev_fd, u2k_info);
+ if (ret == 0) {
+ pr_info("ioctl_unshare success");
+ } else {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ ret = CHILD_EXIT;
+ }
+
+ sem_post(grandsync1[num]);
+ return ret;
+
+error_out:
+ sem_post(grandsync1[num]);
+ return -1;
+}
+
+static int testcase1(struct sp_alloc_info *alloc_info)
+{
+ int ret, status = 0;
+ int group_id = alloc_info->spg_id;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ int child_succ = 0;
+ int child_fail = 0;
+
+ for (int i = 0; i < process_pre_group; i++) {
+ char buf[100];
+ sprintf(buf, "/testcase1_sync%d", i);
+ sync1[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (sync1[i] == SEM_FAILED) {
+ pr_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ char buf[100];
+ sprintf(buf, "/testcase1_childsync%d", i);
+ grandsync1[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandsync1[i] == SEM_FAILED) {
+ pr_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase1_child(i));
+ } else {
+ pr_info("fork grandchild%d, pid: %d", i, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add grandchild%d to group %d failed", i, group_id);
+ goto error_out;
+ } else
+ pr_info("add grandchild%d to group %d successfully", i, group_id);
+
+			/* notify the child that it has joined the group */
+			sem_post(sync1[i]);
+
+			/* wait for the child to pick up the group info */
+			do {
+				ret = sem_wait(grandsync1[i]);
+			} while (ret < 0 && errno == EINTR);
+ }
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ memset((void *)alloc_info->addr, 'a', alloc_info->size);
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ int msgid = msgget(TESTCASE1_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ struct msgbuf_share_info msgbuf = {0};
+ memcpy(&msgbuf.u2k_info, &u2k_info, sizeof(u2k_info));
+ for (int i = 0; i < process_pre_group; i++) {
+ msgbuf.type = i + 1;
+ ret = msgsnd(msgid, &msgbuf, sizeof(u2k_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+ }
+
+	/* notify the children that the share succeeded */
+ for (int i = 0; i < process_pre_group; i++) {
+ sem_post(sync1[i]);
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ ret = sem_wait(grandsync1[i]);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ goto out;
+
+error_out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ if (kill(childs[i], SIGKILL) != 0) {
+ return -1;
+ }
+ }
+out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (WIFEXITED(status)) {
+			pr_info("grandchild%d exited, ret: %d", i, WEXITSTATUS(status));
+ if (WEXITSTATUS(status) == CHILD_EXIT) {
+ child_fail++;
+ } else if (WEXITSTATUS(status) == 0) {
+ child_succ++;
+ }
+ }
+ }
+
+ if (ioctl_free(dev_fd, alloc_info) < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return -1;
+ }
+
+ if (child_succ == 1 && child_fail == (process_pre_group - 1)) {
+ pr_info("testcase1: child unshare test success!!");
+ return 0;
+ } else {
+ pr_info("testcase1: child unshare test failed!! child_succ: %d, child_fail: %d", child_succ, child_fail);
+ return -1;
+ }
+}
+#endif
+
+#define TESTCASE2_MSG_KEY 20
+// pass struct sp_alloc_info between processes via a message queue
+struct msgbuf_alloc_info {
+ long type;
+ struct sp_alloc_info alloc_info;
+};
+
+static int testcase2_child(int num)
+{
+ int ret;
+	/* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(sync2[num]);
+ } while (ret < 0 && errno == EINTR);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+	/* notify the parent that the group lookup finished */
+	sem_post(grandsync2[num]);
+	if (group_id < 0) {
+		pr_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+	/* wait for the parent to share the memory */
+ do {
+ ret = sem_wait(sync2[num]);
+ } while (ret < 0 && errno == EINTR);
+
+ int msgid = msgget(TESTCASE2_MSG_KEY, 0);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ struct sp_alloc_info *alloc_info = &msgbuf.alloc_info;
+ ret = msgrcv(msgid, &msgbuf, sizeof(*alloc_info), (num + 1), IPC_NOWAIT);
+ if (ret < 0) {
+ pr_info("msgrcv failed, errno: %d", errno);
+ goto error_out;
+ }
+
+ if (!ioctl_judge_addr(dev_fd, alloc_info->addr)) {
+ pr_info("invalid address");
+ goto error_out;
+ }
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret == 0) {
+		pr_info("ioctl_free success");
+ } else {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ ret = CHILD_EXIT;
+ }
+
+ sem_post(grandsync2[num]);
+ return ret;
+
+error_out:
+ sem_post(grandsync2[num]);
+ return -1;
+}
+
+static int test(struct sp_alloc_info *alloc_info)
+{
+ int ret, status = 0;
+ int group_id = alloc_info->spg_id;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ int child_succ = 0;
+ int child_fail = 0;
+
+ for (int i = 0; i < process_pre_group; i++) {
+ char buf[100];
+ sprintf(buf, "/testcase2_sync%d", i);
+ sync2[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (sync2[i] == SEM_FAILED) {
+ pr_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ char buf[100];
+ sprintf(buf, "/testcase2_childsync%d", i);
+ grandsync2[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandsync2[i] == SEM_FAILED) {
+ pr_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ return -1;
+ } else if (pid == 0) {
+ exit(testcase2_child(i));
+ } else {
+ pr_info("fork grandchild%d, pid: %d", i, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add grandchild%d to group %d failed", i, group_id);
+ goto error_out;
+ } else
+ pr_info("add grandchild%d to group %d successfully", i, group_id);
+
+			/* notify the child that it has joined the group */
+			sem_post(sync2[i]);
+
+			/* wait for the child to pick up the group info */
+			do {
+				ret = sem_wait(grandsync2[i]);
+			} while (ret < 0 && errno == EINTR);
+ }
+ }
+
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ int msgid = msgget(TESTCASE2_MSG_KEY, IPC_CREAT | 0666);
+ if (msgid < 0) {
+ pr_info("msgget failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ struct msgbuf_alloc_info msgbuf = {0};
+ memcpy(&msgbuf.alloc_info, alloc_info, sizeof(*alloc_info));
+ for (int i = 0; i < process_pre_group; i++) {
+ msgbuf.type = i + 1;
+ ret = msgsnd(msgid, &msgbuf, sizeof(*alloc_info), 0);
+ if (ret < 0) {
+ pr_info("msgsnd failed, errno: %d", errno);
+ ret = -1;
+ goto error_out;
+ }
+ }
+
+ memset((void *)alloc_info->addr, 'a', alloc_info->size);
+
+	/* notify the children that the allocation succeeded */
+ for (int i = 0; i < process_pre_group; i++) {
+ sem_post(sync2[i]);
+ }
+
+ for (int i = 0; i < process_pre_group; i++) {
+ ret = sem_wait(grandsync2[i]);
+ if (ret < 0) {
+ pr_info("sem wait failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ goto out;
+
+error_out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ if (kill(childs[i], SIGKILL) != 0) {
+ return -1;
+ }
+ }
+out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (WIFEXITED(status)) {
+			pr_info("grandchild%d exited, ret: %d", i, WEXITSTATUS(status));
+ if (WEXITSTATUS(status) == CHILD_EXIT) {
+ child_fail++;
+ } else if (WEXITSTATUS(status) == 0) {
+ child_succ++;
+ }
+ }
+ }
+
+ if (child_succ == 1 && child_fail == (process_pre_group - 1)) {
+ pr_info("testcase2: child unshare test success!!");
+ return 0;
+ } else {
+ pr_info("testcase2: child unshare test failed!! child_succ: %d, child_fail: %d", child_succ, child_fail);
+ return -1;
+ }
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_unshare4 -p process_num\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:")) != -1) {
+ switch (opt) {
+		case 'p': // number of child processes per group
+ process_pre_group = atoi(optarg);
+ if (process_pre_group > MAX_PROC_PER_GRP || process_pre_group <= 0) {
+ printf("process num invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 100,
+ };
+
+ return test(&alloc_info);
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "After process A allocates memory, every process in the group calls free concurrently; only one is expected to succeed.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare5.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare5.c
new file mode 100644
index 000000000000..dd8e704e390c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare5.c
@@ -0,0 +1,185 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Thu Dec 03 11:27:42 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * After memory is shared, free the memory first and then stop sharing; the
+ * unshare is expected to fail.
+ * testcase1: free the uva obtained from an in-kernel k2u with sp_free, then
+ *            unshare it from the kernel.
+ * testcase2: free the kva obtained from a process u2k with vfree, then unshare
+ *            it from userspace, then release it with free.
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ struct vmalloc_info ka_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ return ret;
+ }
+
+ struct sp_alloc_info uva_info = {
+ .addr = k2u_info.addr,
+ .size = 10000,
+ };
+
+ ret = ioctl_free(dev_fd, &uva_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed as expected, errno: %d", errno);
+ ret = 0;
+ } else {
+ pr_info("ioctl_free result unexpected, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ } else {
+ pr_info("ioctl_unshare k2u succeeded.");
+ }
+
+ ret = ioctl_vfree(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vfree failed, errno: %d", errno);
+ } else {
+ pr_info("ioctl_vfree succeeded.");
+ }
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct vmalloc_info kva_info = {
+ .addr = u2k_info.addr,
+ .size = 12345,
+ };
+
+ if (ioctl_vfree(dev_fd, &kva_info) < 0) {
+ pr_info("vfree u2k kernel vm area failed.");
+ ret = -1;
+ } else {
+		pr_info("vfree u2k kernel vm area succeeded.");
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed as expected, errno: %d", errno);
+ } else {
+ pr_info("ioctl_unshare result unexpected, ret: %d, errno: %d", ret, errno);
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free u2k vma failed, errno: %d", errno);
+ } else {
+ pr_info("free u2k vma succeeded");
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Free the uva obtained from an in-kernel k2u with sp_free, then unshare it from the kernel")
+	TESTCASE_CHILD(testcase2, "Free the kva obtained from a process u2k with vfree, then unshare it from userspace, then release it with free.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare6.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare6.c
new file mode 100644
index 000000000000..389402e9ebc0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare6.c
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Dec 04 17:20:10 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+/*
+ * testcase1: call sp_unshare on a uva allocated by a userspace sp_alloc; this is expected to fail.
+ */
+
+static int testcase1(void)
+{
+ int ret;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = 12345,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info.addr, 'q', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .addr = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed as expected, errno: %d", errno);
+ } else {
+ pr_info("ioctl_unshare result unexpected, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Call sp_unshare on a uva allocated by a userspace sp_alloc; expected to fail.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare7.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare7.c
new file mode 100644
index 000000000000..b85171c50995
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare7.c
@@ -0,0 +1,159 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 18:27:26 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/* testcase1: verify whether a process that has not joined the group can k2spg */
+
+#define GROUP_ID 1
+
+static int testcase1(void)
+{
+ int ret;
+
+	/* sync prepare */
+ sem_t *sem_addgroup, *sem_k2u;
+ sem_addgroup = sem_open("/child_process_add_group_finish", O_CREAT, O_RDWR, 0);
+ sem_k2u = sem_open("/child_process_k2u_finish", O_CREAT, O_RDWR, 0);
+
+	/* k2u prepare */
+ struct vmalloc_info vmalloc_info = {
+ .size = 10000,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ return ret;
+ }
+
+ struct sp_make_share_info k2spg_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ /* fork child */
+ pid_t child = fork();
+ if (child == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("child process add group %d failed, errno: %d", GROUP_ID, errno);
+ } else {
+ pr_info("child process %d add group %d success", getpid(), GROUP_ID);
+ }
+ sem_post(sem_addgroup);
+		/* wait for the parent to share the kernel address with the child */
+ sem_wait(sem_k2u);
+ /* check kernel shared address */
+
+ exit(0);
+ }
+
+ sem_wait(sem_addgroup);
+ /* k2u to child */
+ k2spg_info.pid = child;
+ ret = ioctl_k2u(dev_fd, &k2spg_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed as expected, errno: %d", errno);
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ ret = 0;
+ goto end;
+ } else {
+ pr_info("parent process k2u success unexpected.");
+ ret = -1;
+ }
+
+ /* fork a new process and add into group to check k2u address*/
+ pid_t proc_check = fork();
+ if (proc_check == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("child process %d add group %d failed, errno: %d", getpid(), GROUP_ID, errno);
+ } else {
+ pr_info("child process %d add group %d success", getpid(), GROUP_ID);
+ }
+ char *addr = (char *)k2spg_info.addr;
+ if (addr[0] != 'b') {
+ pr_info("addr value is not b! k2spg failed.");
+ } else {
+ pr_info("addr value is b! k2spg success.");
+ }
+ exit(0);
+ }
+
+ int status;
+ waitpid(proc_check, &status, 0);
+ pr_info("k2spg check process exited.");
+
+end:
+ sem_post(sem_k2u);
+
+ waitpid(child, &status, 0);
+ pr_info("child process %d exited.", child);
+
+	/* the vm_area carries SP_FLAG, so vfree is expected to trigger a warning */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ pr_info("ioctl vfree success.");
+
+	/* the parent tries to create the same group id and unshare; expected to succeed */
+ // ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ // if (ret < 0) {
+ // pr_info("add group failed, errno: %d", errno);
+ // } else {
+ // pr_info("parent add group %d success", GROUP_ID);
+ // }
+
+ sem_unlink("/child_process_add_group_finish");
+ sem_unlink("/child_process_k2u_finish");
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Verify whether a process that has not joined the group can k2spg")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare_kill.c b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare_kill.c
new file mode 100644
index 000000000000..94c2401a7376
--- /dev/null
+++ b/tools/testing/sharepool/testcase/reliability_test/sp_unshare/test_unshare_kill.c
@@ -0,0 +1,150 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 18:27:26 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <setjmp.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+#define PROC_NUM 200
+#define PROT (PROT_WRITE | PROT_READ)
+#define REPEAT 20
+#define THREAD_NUM 50
+/*
+ * After memory is shared, free the memory first and then stop sharing.
+ * testcase1: after vmalloc and k2task, vfree the kva directly (expected to
+ *            succeed), then unshare the uva (expected to succeed).
+ */
+struct sp_make_share_info k2u_info;
+int sem;
+
+static void *tc1_thread(void *arg)
+{
+ int ret;
+	int gid = (int)(long)arg;
+ struct sp_make_share_info infos[REPEAT];
+
+ pr_info("gid is %d\n", gid);
+ for (int i = 0; i < REPEAT; i++) {
+ infos[i].spg_id = SPG_ID_DEFAULT;
+ infos[i].pid = getpid();
+ infos[i].kva = k2u_info.kva;
+ infos[i].size = k2u_info.size;
+ infos[i].sp_flags = 0;
+ ret = ioctl_k2u(dev_fd, &infos[i]);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, child %d errno: %d", gid, errno);
+			sem_inc_by_one(sem);
+			return (void *)(long)ret;
+ }
+ }
+
+ sem_inc_by_one(sem);
+ sem_check_zero(sem);
+
+ for (int i = 0; i < REPEAT; i++) {
+ ret = ioctl_unshare(dev_fd, &infos[i]);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, child %d errno: %d", getpid(), errno);
+			return (void *)(long)ret;
+ }
+ }
+
+ return 0;
+}
+
+static int tc1_child(int gid)
+{
+ int ret;
+ pthread_t threads[THREAD_NUM];
+ void *tret;
+
+ ret = wrap_add_group(getpid(), PROT, gid);
+ if (ret < 0) {
+ pr_info("add group failed child %d", gid);
+ sem_inc_by_val(sem, THREAD_NUM);
+ return ret;
+ }
+
+ for (int i = 0; i < THREAD_NUM; i++)
+		pthread_create(threads + i, NULL, tc1_thread, (void *)(long)gid);
+
+ for (int i = 0; i < THREAD_NUM; i++)
+ pthread_join(threads[i], &tret);
+
+ pr_info("child %d finish all work", gid);
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret;
+ pid_t child[PROC_NUM];
+ struct vmalloc_info vmalloc_info = {
+ .size = 4096,
+ };
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ sem = sem_create(1234, "sem");
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ FORK_CHILD_ARGS(child[i], tc1_child(i + 1));
+ }
+
+ pr_info("\nwaits to kill all child...\n");
+ sem_dec_by_val(sem, PROC_NUM * THREAD_NUM);
+ pr_info("\nstarts to kill child...\n");
+ for (int i = 0; i < PROC_NUM; i++)
+ kill(child[i], SIGKILL);
+ for (int i = 0; i < PROC_NUM; i++)
+ waitpid(child[i], NULL, 0);
+
+ pr_info("finish kill child...\n");
+	ret = ioctl_vfree(dev_fd, &vmalloc_info); /* expected to fail */
+ if (ret < 0) {
+ pr_info("ioctl vfree failed for the first time, errno: %d", errno);
+ }
+
+ sem_close(sem);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "After vmalloc and k2task, vfree the kva directly (expected to succeed), then unshare the uva (expected to succeed)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/remove_list.sh b/tools/testing/sharepool/testcase/remove_list.sh
new file mode 100755
index 000000000000..03287c09ea6e
--- /dev/null
+++ b/tools/testing/sharepool/testcase/remove_list.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+rm_list()
+{
+	curdir=$1
+	echo "$curdir"
+
+	cd "$curdir" || return
+	rm -rf tc_list
+
+	subdirs=$(ls -d -- */ 2>/dev/null)
+
+	for dir in $subdirs
+	do
+		rm_list "$dir"
+	done
+
+	cd ..
+	echo "back to $(pwd)"
+}
+
+rm_list `pwd`
+
diff --git a/tools/testing/sharepool/testcase/scenario_test/Makefile b/tools/testing/sharepool/testcase/scenario_test/Makefile
new file mode 100644
index 000000000000..826d5ba6b255
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/Makefile
@@ -0,0 +1,15 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ mkdir -p $(TOOL_BIN_DIR)/scenario_test
+ cp $(testcases) $(TOOL_BIN_DIR)/scenario_test
+ cp test_hugepage_setting.sh $(TOOL_BIN_DIR)/
+ cp scenario_test.sh $(TOOL_BIN_DIR)/
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/scenario_test/scenario_test.sh b/tools/testing/sharepool/testcase/scenario_test/scenario_test.sh
new file mode 100755
index 000000000000..f344ae0dda17
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/scenario_test.sh
@@ -0,0 +1,45 @@
+#!/bin/bash
+
+set -x
+
+echo 'test_dfx_heavy_load
+ test_dvpp_16g_limit
+ test_max_50000_groups
+ test_proc_sp_group_state
+ test_oom ' | while read line
+do
+ let flag=0
+ ./scenario_test/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase scenario_test/$line failed"
+ let flag=1
+ fi
+
+ sleep 3
+
+	# dump spa_stat
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+ let flag=1
+ fi
+	# dump proc_stat
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+ let flag=1
+ fi
+
+ cat /proc/sharepool/proc_overview
+	# exit if anything leaked
+ if [ $flag -eq 1 ] ;then
+ exit 1
+ fi
+ echo "testcase scenario_test/$line success"
+
+ cat /proc/meminfo
+ free -m
+
+done
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_auto_check_statistics.c b/tools/testing/sharepool/testcase/scenario_test/test_auto_check_statistics.c
new file mode 100644
index 000000000000..904a5bbee2ad
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_auto_check_statistics.c
@@ -0,0 +1,338 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <errno.h>
+#include <fcntl.h> /* For O_* constants */
+#include <semaphore.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/stat.h> /* For mode constants */
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int semid;
+
+#define PROC_NUM 1023
+#define GROUP_NUM 2999
+#define GROUP_ID 1
+
+#define ROW_MAX 100
+#define COL_MAX 300
+
+#define byte2kb(x) ((x) / 1024)
+#define byte2mb(x) ((x) / 1024UL / 1024UL)
+
+#define SPA_STAT "/proc/sharepool/spa_stat"
+#define PROC_STAT "/proc/sharepool/proc_stat"
+#define PROC_OVERVIEW "/proc/sharepool/proc_overview"
+
+static void __reset(char **array, int end)
+{
+ for (int i = 0; i < end; i++)
+ memset(array[i], 0, COL_MAX);
+}
+
+static int test_route(unsigned long flag, unsigned long size, int spg_id)
+{
+ int ret = 0;
+ pid_t pid;
+ unsigned long addr;
+ char pidstr[SIZE];
+ char pidattr[SIZE];
+ char **result;
+ char **exp;
+ int row_real;
+ bool flag_out = false;
+ unsigned long size_dvpp = 0;
+
+	strcpy(pidattr, "/proc/");
+ sprintf(pidstr, "%d", getpid());
+ strcat(pidattr, pidstr);
+ strcat(pidattr, "/sp_group");
+
+ if (flag & SP_DVPP)
+ size_dvpp = PMD_SIZE;
+
+	// join the group
+ if (spg_id == SPG_ID_DEFAULT)
+ goto alloc;
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("add group failed!");
+ return ret;
+ }
+
+	// spawn a tool process that joins the group; its statistics must be correct too
+ pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, GROUP_ID);
+ if (ret < 0) {
+ pr_info("tool proc add group failed!");
+ exit(-1);
+ }
+		while (1)
+			pause();
+ }
+
+alloc:
+	// sp_alloc with flag selecting huge/normal page and dvpp/non-dvpp, plus size
+ addr = (unsigned long)wrap_sp_alloc(spg_id, size, flag);
+ if (addr == -1) {
+ pr_info("alloc memory failed, size %lx, flag %lx", size, flag);
+ ret = -1;
+ goto kill_child;
+ }
+
+ sleep(5);
+
+	// check spa_stat
+ result = (char **)calloc(ROW_MAX, sizeof(char *));
+ exp = (char **)calloc(ROW_MAX, sizeof(char *));
+ for (int i = 0; i < ROW_MAX; i++) {
+ result[i] = (char *)calloc(COL_MAX, sizeof(char));
+ exp[i] = (char *)calloc(COL_MAX, sizeof(char));
+ }
+
+ get_attr(SPA_STAT, result, ROW_MAX, COL_MAX, &row_real);
+ for (int i = 0; i < row_real; i++) {
+ printf("%s", result[i]);
+ }
+ pr_info("\nrow_real is %d", row_real);
+
+	sprintf(exp[0], "Share pool total size: %lu KB, spa total num: %d.\n",
+		byte2kb(size), 1);
+	sprintf(exp[1],
+		"Group %6d size: %lu KB, spa num: %d, total alloc: %lu KB, normal alloc: %lu KB, huge alloc: %lu KB\n",
+		spg_id, byte2kb(size), 1, byte2kb(size),
+		flag & SP_HUGEPAGE ? 0 : byte2kb(size),
+		flag & SP_HUGEPAGE ? byte2kb(size) : 0);
+ sprintf(exp[2], "%s", "\n");
+ sprintf(exp[3], "Spa total num %u.\n", 1);
+ sprintf(exp[4], "Spa alloc num %u, k2u(task) num %u, k2u(spg) num %u.\n",
+ 1, 0, 0);
+ sprintf(exp[5], "Spa total size: %13lu KB\n", byte2kb(size));
+ sprintf(exp[6], "Spa alloc size: %13lu KB\n", byte2kb(size));
+	sprintf(exp[7], "Spa k2u(task) size: %13lu KB\n", 0UL);
+	sprintf(exp[8], "Spa k2u(spg) size: %13lu KB\n", 0UL);
+ sprintf(exp[9], "Spa dvpp size: %13lu KB\n",
+ flag & SP_DVPP ? byte2kb(size) : 0);
+ sprintf(exp[10], "Spa dvpp va size: %13lu MB\n", byte2mb(size_dvpp));
+
+ sprintf(exp[11], "%s", "\n");
+ sprintf(exp[12], "%-10s %-16s %-16s %-10s %-7s %-5s %-8s %-8s\n",
+ "Group ID", "va_start", "va_end", "Size(KB)", "Type", "Huge", "PID", "Ref");
+ sprintf(exp[13], "%-10d %2s%-14lx %2s%-14lx %-10ld %-7s %-5s %-8d %-8d\n",
+ spg_id,
+ "0x", flag & SP_DVPP ? 0xf00000000000 : 0xe80000000000,
+ "0x", flag & SP_DVPP ? 0xf00000200000 : 0xe80000200000,
+ byte2kb(size), "ALLOC", flag & SP_HUGEPAGE ? "Y" : "N", getpid(), 3);
+ for (int i = 0; i < row_real; i++) {
+ if (strcmp(result[i], exp[i]) != 0) {
+ pr_info("a not same with b,\na: %sb: %s", result[i], exp[i]);
+ flag_out = true;
+ }
+ }
+
+	// check proc_stat
+ __reset(result, row_real);
+ __reset(exp, row_real);
+ get_attr(PROC_STAT, result, ROW_MAX, COL_MAX, &row_real);
+ for (int i = 0; i < row_real; i++) {
+ printf("%s", result[i]);
+ }
+ pr_info("\nrow_real is %d\n", row_real);
+
+	sprintf(exp[0], "Share pool total size: %lu KB, spa total num: %d.\n",
+		byte2kb(size), 1);
+	sprintf(exp[1],
+		"Group %6d size: %lu KB, spa num: %d, total alloc: %lu KB, normal alloc: %lu KB, huge alloc: %lu KB\n",
+		spg_id, byte2kb(size), 1, byte2kb(size),
+		flag & SP_HUGEPAGE ? 0 : byte2kb(size),
+		flag & SP_HUGEPAGE ? byte2kb(size) : 0);
+ sprintf(exp[2], "%s", "\n");
+ sprintf(exp[3], "Spa total num %u.\n", 1);
+ sprintf(exp[4], "Spa alloc num %u, k2u(task) num %u, k2u(spg) num %u.\n",
+ 1, 0, 0);
+ sprintf(exp[5], "Spa total size: %13lu KB\n", byte2kb(size));
+ sprintf(exp[6], "Spa alloc size: %13lu KB\n", byte2kb(size));
+	sprintf(exp[7], "Spa k2u(task) size: %13lu KB\n", 0UL);
+	sprintf(exp[8], "Spa k2u(spg) size: %13lu KB\n", 0UL);
+ sprintf(exp[9], "Spa dvpp size: %13lu KB\n",
+ flag & SP_DVPP ? byte2kb(size) : 0);
+ sprintf(exp[10], "Spa dvpp va size: %13lu MB\n", byte2mb(size_dvpp));
+ sprintf(exp[11], "%s", "\n");
+ sprintf(exp[12], "%-8s %-8s %-9s %-9s %-9s %-8s %-7s %-7s %-4s\n",
+ "PID", "Group_ID", "SP_ALLOC", "SP_K2U", "SP_RES", "VIRT", "RES",
+ "Shm", "PROT");
+	sprintf(exp[13], "%-8s %-8s %-9lld %-9lld\n", "guard", "-", 0LL, 0LL);
+	sprintf(exp[14], "%-8d %-8d %-9ld %-9ld %-9ld %-8ld %-7ld %-7ld %-4s\n",
+		getpid(), spg_id, byte2kb(size), 0L, byte2kb(size), 0L, 0L, 0L, "RW");
+	sprintf(exp[15], "%-8d %-8d %-9ld %-9ld %-9ld %-8ld %-7ld %-7ld %-4s\n",
+		pid, spg_id, 0L, 0L, byte2kb(size), 0L, 0L, 0L, "RW");
+	sprintf(exp[16], "%-8d %-8d %-9ld %-9ld %-9ld %-8ld %-7ld %-7ld %-4s \n",
+		getpid(), 200001, 0L, 0L, 0L, 0L, 0L, 0L, "RW");
+	sprintf(exp[17], "%-8d %-8d %-9ld %-9ld %-9ld %-8ld %-7ld %-7ld %-4s \n",
+		pid, 200002, 0L, 0L, 0L, 0L, 0L, 0L, "RW");
+
+ for (int i = 0; i < row_real; i++) {
+ if (i < 14)
+ ret = strcmp(result[i], exp[i]);
+ else
+ ret = strncmp(result[i], exp[i], 43);
+
+ if (ret == 0)
+ continue;
+
+ pr_info("a not same with b,\na: %sb: %s", result[i], exp[i]);
+ flag_out = true;
+ }
+
+	// Check proc_overview
+ __reset(result, row_real);
+ __reset(exp, row_real);
+ get_attr(PROC_OVERVIEW, result, ROW_MAX, COL_MAX, &row_real);
+ for (int i = 0; i < row_real; i++) {
+ printf("%s", result[i]);
+ }
+ pr_info("\nrow_real is %d\n", row_real);
+ sprintf(exp[0], "%-8s %-16s %-9s %-9s %-9s %-10s %-10s %-8s\n",
+ "PID", "COMM", "SP_ALLOC", "SP_K2U", "SP_RES", "Non-SP_RES",
+ "Non-SP_Shm", "VIRT");
+	sprintf(exp[1], "%-8d %-16s %-9ld %-9ld %-9ld %-10ld %-10ld %-8ld\n",
+		getpid(), "test_auto_check",
+		byte2kb(size), 0L, byte2kb(size), 0L, 0L, 0L);
+	sprintf(exp[2], "%-8d %-16s %-9ld %-9ld %-9ld %-10ld %-10ld %-8ld\n",
+		pid, "test_auto_check", 0L, 0L, byte2kb(size), 0L, 0L, 0L);
+ for (int i = 0; i < row_real; i++) {
+ if (i < 1)
+ ret = strcmp(result[i], exp[i]);
+ else
+ ret = strncmp(result[i], exp[i], 51);
+
+ if (ret == 0)
+ continue;
+
+		pr_info("result differs from expected,\na: %sb: %s", result[i], exp[i]);
+ flag_out = true;
+ }
+
+	// Check the per-process sp_group stats
+ __reset(result, row_real);
+ __reset(exp, row_real);
+ get_attr(pidattr, result, ROW_MAX, COL_MAX, &row_real);
+ for (int i = 0; i < row_real; i++) {
+ printf("%s", result[i]);
+ }
+ pr_info("\nrow_real is %d\n", row_real);
+ sprintf(exp[0], "Share Pool Aggregate Data of This Process\n");
+ sprintf(exp[1], "\n");
+ sprintf(exp[2], "%-8s %-16s %-9s %-9s %-9s %-10s %-10s %-8s\n",
+ "PID", "COMM", "SP_ALLOC", "SP_K2U", "SP_RES", "Non-SP_RES",
+ "Non-SP_Shm", "VIRT");
+	sprintf(exp[3], "%-8d %-16s %-9ld %-9ld %-9ld %-10ld %-10ld %-8ld\n",
+		getpid(), "test_auto_check", byte2kb(size), 0L, byte2kb(size),
+		0L, 0L, 0L);
+ sprintf(exp[4], "\n");
+ sprintf(exp[5], "\n");
+ sprintf(exp[6], "Process in Each SP Group\n");
+ sprintf(exp[7], "\n");
+ sprintf(exp[8], "%-8s %-9s %-9s %-9s %-4s\n",
+ "Group_ID", "SP_ALLOC", "SP_K2U", "SP_RES", "PROT");
+	sprintf(exp[9], "%-8d %-9ld %-9ld %-9ld %s\n",
+		200001, 0L, 0L, 0L, "RW");
+	sprintf(exp[10], "%-8d %-9ld %-9ld %-9ld %s\n",
+		spg_id, byte2kb(size), 0L, byte2kb(size), "RW");
+
+ for (int i = 0; i < row_real; i++) {
+ if (i != 3)
+ ret = strcmp(result[i], exp[i]);
+ else
+ ret = strncmp(result[i], exp[i], 51);
+
+ if (ret == 0)
+ continue;
+
+		pr_info("result differs from expected,\na: %sb: %s", result[i], exp[i]);
+ flag_out = true;
+ }
+
+	// Free the shared memory
+ struct sp_alloc_info info = {
+ .addr = addr,
+ .spg_id = spg_id,
+ };
+ ret = ioctl_free(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("free memory failed, size %lx, flag %lx", size, flag);
+ goto out;
+ }
+ pr_info("\n\nfree memory finished\n\n");
+
+out:
+	// Reclaim the result/exp arrays
+ for (int i = 0; i < ROW_MAX; i++) {
+ free(result[i]);
+ free(exp[i]);
+ }
+ free(result);
+ free(exp);
+
+kill_child:
+	// Reap the helper child process
+ if (spg_id != SPG_ID_DEFAULT)
+ KILL_CHILD(pid);
+ if (flag_out)
+ return -1;
+ return ret;
+}
+
+static int testcase1(void) { return test_route(0, 4096, 1); }
+static int testcase2(void) { return test_route(SP_HUGEPAGE, 2 * 1024UL * 1024UL, 1); }
+static int testcase3(void) { return test_route(SP_DVPP, 4096, 1); }
+static int testcase4(void) { return test_route(SP_HUGEPAGE | SP_DVPP, 2 * 1024UL * 1024UL, 1); }
+
+static int testcase5(void) {
+ if (wrap_add_group(getpid(), PROT_READ, 1) < 0)
+ return -1;
+
+ sharepool_print();
+ sleep(1);
+
+ if (wrap_del_from_group(getpid(), 1) < 0)
+ return -1;
+
+ sharepool_print();
+ sleep(1);
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "allocate one small page in a share group")
+	TESTCASE_CHILD(testcase2, "allocate one huge page in a share group")
+	TESTCASE_CHILD(testcase3, "allocate one DVPP small page in a share group")
+	TESTCASE_CHILD(testcase4, "allocate one DVPP huge page in a share group")
+	TESTCASE_CHILD(testcase5, "a process joins a share group, then voluntarily leaves it")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
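The spa_stat checks above compare rows whose trailing columns (VIRT, RES, Shm) are volatile only up to a fixed prefix length. A minimal standalone sketch of that comparison pattern (`row_matches` is an illustrative name, not part of the test library):

```c
#include <string.h>

/* Compare an observed row against an expected row.  Rows whose trailing
 * columns are volatile are compared only up to prefix_len bytes,
 * mirroring the strncmp() fallback used in the checks above. */
static int row_matches(const char *got, const char *exp,
		       int full_compare, size_t prefix_len)
{
	if (full_compare)
		return strcmp(got, exp) == 0;
	return strncmp(got, exp, prefix_len) == 0;
}
```

The prefix length (43 and 51 in the tests) is chosen so the stable columns are covered while the volatile tail is ignored.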
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_dfx_heavy_load.c b/tools/testing/sharepool/testcase/scenario_test/test_dfx_heavy_load.c
new file mode 100644
index 000000000000..16ec53d046bb
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_dfx_heavy_load.c
@@ -0,0 +1,143 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define MAX_GROUP 49999
+#define SPG_ID_AUTO_MIN 100000
+#define PROC_NUM 20
+
+int sem_id;
+int msg_id;
+struct msgbuf {
+ long mtype;
+ int group_num;
+};
+
+int group_count;
+
+static void send_msg(int msgid, int msgtype, int group_num)
+{
+ struct msgbuf msg = {
+ .mtype = msgtype,
+ .group_num = group_num,
+ };
+
+	if (msgsnd(msgid, (void *) &msg, sizeof(msg.group_num),
+ IPC_NOWAIT) == -1) {
+ perror("msgsnd error");
+ exit(EXIT_FAILURE);
+ } else {
+ pr_info("child %d message sent success: group_num: %d",
+ msgtype - 1, group_num);
+ }
+}
+
+static void get_msg(int msgid, int msgtype)
+{
+ struct msgbuf msg;
+ if (msgrcv(msgid, (void *) &msg, sizeof(msg.group_num), msgtype,
+ MSG_NOERROR) == -1) {
+ if (errno != ENOMSG) {
+ perror("msgrcv");
+ exit(EXIT_FAILURE);
+ }
+ pr_info("No message available for msgrcv()");
+ } else {
+ pr_info("child %d message received success: group_num: %d",
+ msgtype - 1, msg.group_num);
+ group_count += msg.group_num;
+ }
+}
+
+/* Child processes create groups until failure */
+static int test1(void)
+{
+ int ret = 0;
+ int count = 0;
+ int spg_id = 0;
+
+ while (1) {
+ spg_id = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, SPG_ID_AUTO);
+ if (spg_id < 0)
+ break;
+ wrap_sp_alloc(spg_id, 4096, 0);
+ count++;
+ }
+
+ pr_info("proc %d add %d groups", getpid(), count);
+ send_msg(msg_id, getpid(), count);
+ sem_inc_by_one(sem_id);
+	/* Spin until killed by the parent */
+	while (1)
+		;
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int count = 0;
+ int status = 0;
+ char cpid[SIZE];
+ pid_t child[PROC_NUM];
+
+ sem_id = sem_create(1234, "semid");
+ int msgkey = 2345;
+ msg_id = msgget(msgkey, IPC_CREAT | 0666);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ FORK_CHILD_ARGS(child[i], test1());
+
+ for (int i = 0; i < PROC_NUM; i++)
+ get_msg(msg_id, (int)child[i]);
+
+ sem_dec_by_val(sem_id, PROC_NUM);
+ pr_info("\n%d Groups are created.\n", group_count);
+ sleep(2);
+
+	pr_info("Going to cat /proc/sharepool/proc_stat...\n");
+ ret = cat_attr("/proc/sharepool/proc_stat");
+
+ msgctl(msg_id, IPC_RMID, 0);
+ sem_close(sem_id);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(child[i]);
+
+ return ret;
+}
+
+/*
+ * testcase1: create share groups under heavy load, then cat /proc/sharepool/proc_stat.
+ */
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "after creating 49999 share groups, cat /proc/sharepool/proc_stat")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_dvpp_16g_limit.c b/tools/testing/sharepool/testcase/scenario_test/test_dvpp_16g_limit.c
new file mode 100644
index 000000000000..9b61c86901b4
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_dvpp_16g_limit.c
@@ -0,0 +1,68 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define MEM_1G_SIZE (1024UL * 1024UL * 1024UL)
+#define MAX_DVPP_16G 16
+
+static int test_route(int spg_id)
+{
+ int ret = 0;
+ unsigned long addr;
+ int count = 0;
+
+ if (spg_id != SPG_ID_DEFAULT) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0)
+ return -1;
+ }
+
+ while (1) {
+ addr = (unsigned long)wrap_sp_alloc(spg_id, MEM_1G_SIZE,
+ SP_HUGEPAGE | SP_DVPP);
+ if (addr == -1)
+ break;
+ pr_info("alloc %dG success", ++count);
+ }
+
+ if (count != MAX_DVPP_16G) {
+ pr_info("count is %d unexpected", count);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int testcase1(void) { return test_route(1); }
+static int testcase2(void) { return test_route(SPG_ID_DEFAULT); }
+
+/* testcase1: allocate 1G DVPP hugepages in a specified group; expect at most 16G.
+ * testcase2: allocate 1G DVPP hugepages in the default group; expect at most 16G.
+ */
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "allocate DVPP hugepages with a specified group ID; expect at most 16G")
+	TESTCASE_CHILD(testcase2, "allocate DVPP hugepages in the default group; expect at most 16G")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_failure.c b/tools/testing/sharepool/testcase/scenario_test/test_failure.c
new file mode 100644
index 000000000000..7e0e7919ac30
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_failure.c
@@ -0,0 +1,630 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Test flow: exercise failure paths of the share pool interfaces
+ */
+
+#define GROUP_ID 1
+#define PROC_NUM 20
+#define PROT (PROT_READ | PROT_WRITE)
+#define HP_SIZE (2 * 1024 * 1024UL)
+
+static int testcase1(void)
+{
+ int i;
+ int ret = 0;
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ sleep(1);
+
+ struct sp_del_from_group_info info = {
+ .pid = pid,
+ .spg_id = -3,
+ };
+
+ ret = ioctl_del_from_group(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("ioctl_del_group failed, errno: %d", errno);
+ }
+
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int i;
+ int ret = 0;
+ unsigned long ret_addr;
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ ret_addr = (unsigned long)wrap_sp_alloc(GROUP_ID, 0, 0);
+ if (ret_addr == -1)
+ pr_info("alloc failed!");
+	else
+		pr_info("alloc success!");
+
+ return ret;
+}
+
+static int testcase3(void)
+{
+ int i;
+ int ret = 0;
+ unsigned long ret_addr;
+ pid_t pid = getpid();
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .size = PAGE_SIZE,
+ .spg_id = 1,
+ };
+
+ ret_addr = (unsigned long)ioctl_alloc(dev_fd, &alloc_info);
+ if (ret_addr == -1)
+ pr_info("alloc failed!");
+	else
+		pr_info("alloc success!");
+
+ return ret;
+}
+
+static int testcase4(void)
+{
+ int i;
+ int ret;
+ pid_t pid = getpid();
+
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = 0,
+ .size = 1000,
+ };
+ ret = ioctl_walk_page_range_null(dev_fd, &wpr_info);
+ if (!ret) {
+		pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+ return ret;
+}
+
+static int testcase5(void)
+{
+ int i;
+ int ret;
+ pid_t pid = getpid();
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ,
+ .spg_id = GROUP_ID,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ cat_attr("/proc/sharepool/proc_stat");
+ return ret;
+}
+
+static int prepare(struct vmalloc_info *ka_info, bool ishuge)
+{
+ int ret;
+ if (ishuge) {
+ ret = ioctl_vmalloc_hugepage(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc_hugepage failed, errno: %d", errno);
+ return -1;
+ }
+ } else {
+ ret = ioctl_vmalloc(dev_fd, ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'a',
+ .addr = ka_info->addr,
+ .size = ka_info->size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, ka_info);
+ }
+ return ret;
+}
+
+static int testcase6(void)
+{
+ int i;
+ int ret;
+ pid_t pid;
+ pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ pid_t pid = getpid();
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 2,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+		while (1)
+ sleep(1);
+ }
+
+	/* Try to k2u */
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+	/* Add this process to group 1 */
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+	/* Try to share with group 2 */
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = 2,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return -1;
+}
+
+static int testcase7(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = 1,
+ .addr = -PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ k2u_info.size = 1;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+static int testcase8(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+	k2u_info.kva = 281474976710656 * 2;
+	k2u_info.addr = 281474976710656 * 2;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+static int testcase9(void)
+{
+ int ret;
+ int id = 1;
+
+ /* see sharepool_dev.c for more details*/
+ struct sp_notifier_block_info notifier_info = {
+ .i = id,
+ };
+ ret = ioctl_register_notifier_block(dev_fd, ¬ifier_info);
+ if (ret != 0)
+ pr_info("proc %d register notifier block %d failed. ret is %d. errno is %s.",
+ getpid(), id, ret, strerror(errno));
+ else
+ pr_info("proc %d register notifier for func %d success!!", getpid(), id);
+
+ /* see sharepool_dev.c for more details*/
+ ret = ioctl_unregister_notifier_block(dev_fd, ¬ifier_info);
+ if (ret != 0)
+ pr_info("proc %d unregister notifier block %d failed. ret is %d. errno is %s.",
+ getpid(), id, ret, strerror(errno));
+ else
+ pr_info("proc %d unregister notifier for func %d success!!", getpid(), id);
+
+ return ret;
+}
+
+static int testcase10(void)
+{
+	int ret;
+	void *addr;
+
+	struct sp_add_group_info ag_info = {
+		.spg_id = GROUP_ID,
+		.prot = PROT_READ | PROT_WRITE,
+		.pid = getpid(),
+	};
+	ret = ioctl_add_group(dev_fd, &ag_info);
+
+	if (ret < 0) {
+		pr_info("ioctl_add_group failed, errno: %d", errno);
+	}
+
+	// Allocate huge pages until the pool is exhausted
+	for (int i = 0; i < 1000; i++) {
+		addr = wrap_sp_alloc(GROUP_ID, 100 * PMD_SIZE, SP_HUGEPAGE_ONLY);
+		if (addr == (void *)-1) {
+			pr_info("alloc hugepage failed.");
+			return -1;
+		}
+	}
+	return ret;
+}
+
+static int testcase11(void)
+{
+ int ret;
+ int group_id = 100;
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = 12345,
+ };
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ struct sp_walk_page_range_info wpr_info = {
+ .uva = alloc_info.addr,
+ .size = -PAGE_SIZE,
+ };
+ ret = ioctl_walk_page_range(dev_fd, &wpr_info);
+ if (!ret) {
+		pr_info("ioctl_walk_page_range succeeded unexpectedly");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int testcase12(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+	k2u_info.kva = 281474976710656 - 100;
+	k2u_info.addr = 281474976710656 - 100;
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("unshare failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ return ret;
+}
+
+static int testcase13(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+
+ pid_t pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ struct sp_make_share_info k2u_info2 = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info2);
+
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ exit(0);
+ }
+
+ int status;
+ wait(&status);
+
+ return 0;
+}
+
+static int testcase14(void)
+{
+ int ret;
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+
+ if (prepare(&ka_info, false) != 0) {
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = GROUP_ID,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = GROUP_ID,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ ioctl_vfree(dev_fd, &ka_info);
+ }
+
+
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ struct sp_add_group_info ag_info = {
+ .spg_id = 2,
+ .prot = PROT_READ | PROT_WRITE,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ exit(0);
+ }
+
+	wait(NULL);
+	return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "pass an illegal group id")
+	TESTCASE_CHILD(testcase2, "pass 0 as the sp_alloc size")
+	TESTCASE_CHILD(testcase3, "sp_alloc while the process has not joined the group")
+	TESTCASE_CHILD(testcase4, "call walk_page_range with sp_walk_data == NULL")
+	TESTCASE_CHILD(testcase5, "cat a group whose prot is READ only")
+	TESTCASE_CHILD(testcase6, "k2u to task while the current process is not in the group")
+	TESTCASE_CHILD(testcase7, "unshare kva where kva + size overflows")
+	TESTCASE_CHILD(testcase8, "unshare kva where kva == 1")
+	TESTCASE_CHILD(testcase9, "test register and unregister notifier")
+	TESTCASE_CHILD(testcase10, "alloc huge pages until OOM")
+	TESTCASE_CHILD(testcase11, "sp_walk_page_range uva_aligned + size_aligned overflow")
+	TESTCASE_CHILD(testcase12, "sp_unshare_uva with an invalid uva")
+	TESTCASE_CHILD(testcase13, "sp_unshare_uva unshare uva (to task) without permission")
+	TESTCASE_CHILD(testcase14, "sp_unshare_uva while not in the group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
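testcase7 and testcase11 above probe kva + size and uva + size overflow (a kva near 2^48 and a size of -PAGE_SIZE). A hedged sketch of the wrap-around check such a validator needs before walking the range (`range_overflows` is an illustrative name, not the kernel's actual helper):

```c
/* Return nonzero when [addr, addr + size) wraps around the address
 * space or is empty, matching the testcase7/testcase11 inputs above. */
static int range_overflows(unsigned long addr, unsigned long size)
{
	return size == 0 || addr + size < addr;
}
```

Both failure injections above should make this predicate true, which is why the tests expect the ioctls to fail.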
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_hugepage.c b/tools/testing/sharepool/testcase/scenario_test/test_hugepage.c
new file mode 100644
index 000000000000..f7797a03aaeb
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_hugepage.c
@@ -0,0 +1,231 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Test flow: allocate and free hugepages, checking the HugePages_*
+ * counters in /proc/meminfo at each step
+ */
+
+#define PROT (PROT_READ | PROT_WRITE)
+#define ALLOC_NUM_MAX 100
+#define GROUP_ID 1
+
+static int alloc_num = 50;
+static int nr_hugepages = 0;
+static int nr_overcommit = 0;
+static int cgroup_on = 0;
+
+char *prefix = "/sys/kernel/mm/hugepages/hugepages-2048kB/";
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "n:o:a:c:")) != -1) {
+ switch (opt) {
+		case 'n': // number of statically configured hugepages
+ nr_hugepages = atoi(optarg);
+ printf("nr_hugepages = %d\n", nr_hugepages);
+ break;
+		case 'o': // number of overcommit hugepages configured
+ nr_overcommit = atoi(optarg);
+ printf("nr_overcommit_hugepages = %d\n", nr_overcommit);
+ break;
+		case 'a': // number of sp hugepages to allocate
+ alloc_num = atoi(optarg);
+ printf("want to alloc hugepages = %d\n", alloc_num);
+ break;
+		case 'c': // whether cgroup is enabled
+ cgroup_on = atoi(optarg);
+ printf("cgroup is %s\n", cgroup_on ? "on" : "off");
+ break;
+ default:
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+char *fields[] = {
+ "nr_hugepages",
+ "free_hugepages",
+ "resv_hugepages",
+ "surplus_hugepages"
+};
+
+static int check_val(char *field, int val)
+{
+ char path[SIZE];
+ char result[SIZE];
+ int real_val;
+
+ strcpy(path, prefix);
+ strcat(path, field);
+ read_attr(path, result, SIZE);
+ real_val = atoi(result);
+ pr_info("%s val is %d", path, real_val);
+ if (real_val != val) {
+ pr_info("Val %s incorrect. expected: %d, now: %d", field, val, real_val);
+ return -1;
+ }
+ return 0;
+}
+
+static int check_vals(int val[])
+{
+ for (int i = 0; i < ARRAY_SIZE(fields); i++)
+ if (check_val(fields[i], val[i]))
+ return -1;
+ return 0;
+}
+
+static void *addr[ALLOC_NUM_MAX];
+static int exp_vals[4];
+
+static int alloc_hugepages_check(void)
+{
+ void *ret;
+
+	// Allocate alloc_num hugepages
+ for (int i = 0; i < alloc_num; i++) {
+ ret = wrap_sp_alloc(GROUP_ID, PMD_SIZE, SP_HUGEPAGE_ONLY);
+ if (ret == (void *)-1) {
+ pr_info("alloc hugepage failed.");
+ return -1;
+ }
+ addr[i] = ret;
+ }
+ pr_info("Alloc %d hugepages success!", alloc_num);
+
+	// Check /proc/meminfo
+ mem_show();
+ if (nr_hugepages >= alloc_num) {
+ exp_vals[0] = nr_hugepages;
+ exp_vals[1] = nr_hugepages - alloc_num;
+ exp_vals[2] = 0;
+ exp_vals[3] = 0;
+ } else if (nr_hugepages + nr_overcommit >= alloc_num) {
+ exp_vals[0] = alloc_num;
+ exp_vals[1] = 0;
+ exp_vals[2] = 0;
+ exp_vals[3] = alloc_num - nr_hugepages;
+ } else {
+ exp_vals[0] = alloc_num;
+ exp_vals[1] = 0;
+ exp_vals[2] = 0;
+ exp_vals[3] = nr_overcommit;
+ }
+
+ if (check_vals(exp_vals)) {
+ pr_info("Check /proc/meminfo hugepages failed.");
+ return -1;
+ }
+ pr_info("Check /proc/meminfo hugepages success");
+ return 0;
+}
+
+static int free_hugepages_check(void)
+{
+	// Free all allocated hugepages
+ for (int i = 0; i < alloc_num; i++) {
+ if (wrap_sp_free((unsigned long)addr[i])) {
+ pr_info("free failed");
+ return -1;
+ }
+ }
+	pr_info("Free hugepages success.");
+
+	// Check /proc/meminfo
+ mem_show();
+ exp_vals[0] = nr_hugepages;
+ exp_vals[1] = nr_hugepages;
+ exp_vals[2] = 0;
+ exp_vals[3] = 0;
+ if (check_vals(exp_vals)) {
+ pr_info("Check /proc/meminfo hugepages failed.");
+ return -1;
+ }
+ pr_info("Check /proc/meminfo hugepages success");
+ return 0;
+}
+
+/* testcase1:
+ * allocate hugepages and check the counters; then free them and check again */
+static int testcase1(void) {
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0) {
+ pr_info("add group failed");
+ return -1;
+ }
+
+ return alloc_hugepages_check() || free_hugepages_check();
+}
+
+/* testcase2:
+ * a child process allocates hugepages and checks the counters; the child
+ * then exits and the parent checks the counters again */
+static int testcase2_child(void)
+{
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0) {
+ pr_info("add group failed");
+ return -1;
+ }
+ return alloc_hugepages_check();
+}
+
+static int testcase2(void) {
+ int ret;
+ pid_t child;
+
+ FORK_CHILD_ARGS(child, testcase2_child());
+ WAIT_CHILD_STATUS(child, out);
+
+	// Check /proc/meminfo
+ mem_show();
+ exp_vals[0] = nr_hugepages;
+ exp_vals[1] = nr_hugepages;
+ exp_vals[2] = 0;
+ exp_vals[3] = 0;
+ if (check_vals(exp_vals)) {
+ pr_info("Check /proc/meminfo hugepages failed.");
+ return -1;
+ }
+ pr_info("Check /proc/meminfo hugepages success");
+ return 0;
+
+out:
+ return -1;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "check HugePages_* counters while a process allocates and actively frees hugepages")
+	TESTCASE_CHILD(testcase2, "check HugePages_* counters while a process allocates hugepages, is killed, and frees them passively")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
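The three expectation branches in alloc_hugepages_check() can be read as a pure function of the pool configuration: static pages are consumed first, then surplus pages up to nr_overcommit. A sketch under those assumptions (`expected_counters` is an illustrative name, not part of the test library):

```c
/* vals[0] = nr_hugepages, vals[1] = free, vals[2] = resv, vals[3] = surplus,
 * after alloc_num 2 MB pages are taken from nr_static persistent pages
 * with nr_overcommit surplus pages allowed.  Mirrors exp_vals above. */
static void expected_counters(int nr_static, int nr_overcommit,
			      int alloc_num, int vals[4])
{
	vals[2] = 0;		/* no reservations in this flow */
	if (nr_static >= alloc_num) {
		/* static pool suffices: total unchanged, rest stay free */
		vals[0] = nr_static;
		vals[1] = nr_static - alloc_num;
		vals[3] = 0;
	} else if (nr_static + nr_overcommit >= alloc_num) {
		/* overflow into surplus pages */
		vals[0] = alloc_num;
		vals[1] = 0;
		vals[3] = alloc_num - nr_static;
	} else {
		/* demand exceeds both pools; allocation fails earlier */
		vals[0] = alloc_num;
		vals[1] = 0;
		vals[3] = nr_overcommit;
	}
}
```

The companion script test_hugepage_setting.sh drives exactly these branches with (nr_hugepages, nr_overcommit) pairs such as (60, 0) and (20, 40) against 50 allocations.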
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_hugepage_setting.sh b/tools/testing/sharepool/testcase/scenario_test/test_hugepage_setting.sh
new file mode 100755
index 000000000000..917893247e18
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_hugepage_setting.sh
@@ -0,0 +1,51 @@
+#!/bin/sh
+set -x
+
+nr_hpages=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+nr_overcmt=/proc/sys/vm/nr_overcommit_hugepages
+meminfo=/proc/meminfo
+
+# Test point 1:
+# share pool temporary hugepages show up in HugePages_Total and HugePages_Surp;
+# setting:
+# set static hugepage count = 0, overcommit = 0
+
+set_nr(){
+ echo $1 > $nr_hpages
+ echo $2 > $nr_overcmt
+ ret=`cat $nr_hpages`
+ if [ $ret -ne $1 ] ;then
+ echo set nr_hugepages failed!
+ return 1
+ fi
+ ret=`cat $nr_overcmt`
+ if [ $ret -ne $2 ] ;then
+ echo set nr_overcommit_hugepages failed!
+ return 1
+ fi
+ return 0
+}
+
+test_hpage(){
+ set_nr $1 $2
+ ./scenario_test/test_hugepage -n $1 -o $2 -a 50 -c 0
+ if [ $? -ne 0 ] ;then
+ return 1
+ fi
+ return 0
+}
+
+echo 'test_hpage 0 0 0
+ test_hpage 20 0 0
+ test_hpage 60 0 0
+ test_hpage 20 20 0
+ test_hpage 20 40 0
+ test_hpage 0 60 0' | while read line
+do
+ $line
+ if [ $? -ne 0 ] ;then
+ echo "$line failed!"
+ exit 1
+ fi
+ echo "$line success"
+done
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_max_50000_groups.c b/tools/testing/sharepool/testcase/scenario_test/test_max_50000_groups.c
new file mode 100644
index 000000000000..cc8a1278c08d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_max_50000_groups.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define MAX_GROUP 49999
+#define SPG_ID_AUTO_MIN 100000
+#define PROC_NUM 20
+
+int sem_id;
+int msg_id;
+struct msgbuf {
+ long mtype;
+ int group_num;
+};
+
+int group_count;
+
+static void send_msg(int msgid, int msgtype, int group_num)
+{
+ struct msgbuf msg = {
+ .mtype = msgtype,
+ .group_num = group_num,
+ };
+
+	if (msgsnd(msgid, (void *) &msg, sizeof(msg.group_num),
+ IPC_NOWAIT) == -1) {
+ perror("msgsnd error");
+ exit(EXIT_FAILURE);
+ } else {
+ pr_info("child %d message sent success: group_num: %d",
+ msgtype - 1, group_num);
+ }
+}
+
+static void get_msg(int msgid, int msgtype)
+{
+ struct msgbuf msg;
+ if (msgrcv(msgid, (void *) &msg, sizeof(msg.group_num), msgtype,
+ MSG_NOERROR) == -1) {
+ if (errno != ENOMSG) {
+ perror("msgrcv");
+ exit(EXIT_FAILURE);
+ }
+ pr_info("No message available for msgrcv()");
+ } else {
+ pr_info("child %d message received success: group_num: %d",
+ msgtype - 1, msg.group_num);
+ group_count += msg.group_num;
+ }
+}
+
+/* Child processes create groups until failure */
+static int test1(void)
+{
+ int ret = 0;
+ int count = 0;
+
+ while (1) {
+ if (wrap_add_group(getpid(), PROT_READ, SPG_ID_AUTO) < 0)
+ break;
+ count++;
+ }
+
+ pr_info("proc %d add %d groups", getpid(), count);
+ send_msg(msg_id, getpid(), count);
+ sem_inc_by_one(sem_id);
+	/* Spin until killed by the parent */
+	while (1)
+		;
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int count = 0;
+ int status = 0;
+ char cpid[SIZE];
+ pid_t child[PROC_NUM];
+
+ sem_id = sem_create(1234, "semid");
+ int msgkey = 2345;
+ msg_id = msgget(msgkey, IPC_CREAT | 0666);
+
+ for (int i = 0; i < PROC_NUM; i++)
+ FORK_CHILD_ARGS(child[i], test1());
+
+ for (int i = 0; i < PROC_NUM; i++)
+ get_msg(msg_id, (int)child[i]);
+
+ pr_info("\n%d Groups are created.\n", group_count);
+ sleep(5);
+
+ sem_dec_by_val(sem_id, PROC_NUM);
+// cat_attr("/proc/sharepool/spa_stat");
+
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(child[i]);
+
+ msgctl(msg_id, IPC_RMID, 0);
+ sem_close(sem_id);
+ return ret;
+}
+
+/*
+ * testcase1: create share groups with SPG_ID_AUTO; expect at most 49999.
+ */
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "create share groups with AUTO; expect at most 49999")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_oom.c b/tools/testing/sharepool/testcase/scenario_test/test_oom.c
new file mode 100644
index 000000000000..ca78d71fbc1a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_oom.c
@@ -0,0 +1,135 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Execution flow: call sp_alloc repeatedly until OOM is triggered
+ */
+
+#define PROC_NUM 20
+#define PROT (PROT_READ | PROT_WRITE)
+#define HP_SIZE (2 * 1024 * 1024UL)
+
+static int testcase1(void)
+{
+ int ret = 0;
+ unsigned long ret_addr;
+ int status;
+ pid_t child[PROC_NUM];
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ int count = 0;
+ TEST_CHECK(wrap_add_group(getpid(), PROT, i + 1), error);
+ while (1) {
+ ret_addr = (unsigned long)wrap_sp_alloc(i + 1, HP_SIZE, 0);
+ if (ret_addr == -1)
+ pr_info("alloc failed! ret_addr: %lx, errno: %d", ret_addr, errno);
+ else
+ pr_info("proc%d: alloc success %d time.", i, ++count);
+ }
+ }
+ child[i] = pid;
+ }
+
+ for (int i = 0; i < PROC_NUM; i++)
+ waitpid(child[i], &status, 0);
+
+error:
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int ret = 0;
+ unsigned long ret_addr;
+ int status;
+ pid_t child[PROC_NUM];
+ pid_t add_workers[PROC_NUM];
+
+	// Spawn joiner processes that repeatedly join and leave a group
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) { // add group -> del group
+ int count = 0;
+ while (1) {
+ int grp_id = PROC_NUM + i + 1;
+ ret = wrap_add_group(getpid(), PROT, grp_id);
+ if (ret < 0) {
+ pr_info("add group %d failed. ret: %d", grp_id, ret);
+ continue;
+ }
+ pr_info("add group %d success.", grp_id);
+ ret = wrap_del_from_group(getpid(), grp_id);
+ if (ret < 0) {
+ pr_info("del from group %d failed unexpected. ret: %d", grp_id, ret);
+ break;
+ }
+ }
+ exit(ret);
+ }
+ add_workers[i] = pid;
+ }
+
+	// Spawn allocator processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ int count = 0;
+ TEST_CHECK(wrap_add_group(getpid(), PROT, i + 1), error); // group id [1, PROC_NUM]
+ while (1) {
+ ret_addr = (unsigned long)wrap_sp_alloc(i + 1, HP_SIZE, 0);
+ if (ret_addr == -1)
+ pr_info("alloc failed! ret_addr: %lx, errno: %d", ret_addr, errno);
+ else
+ pr_info("proc%d: alloc success %d time.", i, ++count);
+ }
+ }
+ child[i] = pid;
+ }
+
+	// Wait for the allocator processes to hit OOM
+ for (int i = 0; i < PROC_NUM; i++)
+ waitpid(child[i], &status, 0);
+
+	// Kill the joiner processes
+ for (int i = 0; i < PROC_NUM; i++)
+ KILL_CHILD(add_workers[i]);
+
+error:
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Execution flow: sp_alloc until OOM is triggered")
+	TESTCASE_CHILD(testcase2, "sp_alloc until OOM, while joiner processes repeatedly join and leave groups")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
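The stress tests below check allocation results with `IS_ERR_VALUE(info->addr)`. This mirrors the kernel's error-pointer convention, where the top `MAX_ERRNO` (4095) values of the address space encode negative errno values rather than valid addresses. A self-contained sketch of that check (`is_err_addr` is an illustrative helper name, and the macro is reproduced here rather than taken from `sharepool_lib.h`):

```c
#include <assert.h>

/* Kernel-style error-pointer test: addresses in the last 4095 bytes of
 * the address space are really negative errno values. */
#define MAX_ERRNO 4095
#define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

static int is_err_addr(unsigned long addr)
{
	return IS_ERR_VALUE(addr);
}
```

Under this convention, `(unsigned long)-12` encodes `-ENOMEM` and is flagged as an error, while an ordinary mapping address such as `0x1000` is not.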
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_proc_sp_group_state.c b/tools/testing/sharepool/testcase/scenario_test/test_proc_sp_group_state.c
new file mode 100644
index 000000000000..8e4ba2881800
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_proc_sp_group_state.c
@@ -0,0 +1,170 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 16:47:36 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdbool.h>
+
+#include "sharepool_lib.h"
+
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define CMD_LEN 100
+#define UNIT 1024
+#define PAGE_NUM 100
+#define HGPAGE_NUM 10
+#define LARGE_PAGE_NUM 1000000
+#define ATOMIC_TEST_SIZE (1024UL * 1024UL * 1024UL) // 1G
+/*
+ * Precondition: the process joins a group first.
+ * testcase1: allocate share-pool memory with flag 0; expect success.
+ * testcase2: allocate with the hugepage flag and an unaligned size; expect the size rounded up to the hugepage size.
+ */
+
+static int addgroup(void)
+{
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = 1,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int cleanup(struct sp_alloc_info *alloc_info)
+{
+ int ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_free failed, errno: %d", errno);
+ }
+ return ret;
+}
+
+static int alloc_large_repeat(bool hugepage, int repeat)
+{
+ int ret;
+ if (addgroup() != 0) {
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = hugepage ? 1 : 0,
+ .size = ATOMIC_TEST_SIZE,
+ .spg_id = 1,
+ };
+
+ pr_info("start to alloc...");
+ for (int i = 0; i < repeat; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret != 0) {
+ pr_info("alloc %s failed. errno %d",
+ hugepage ? "huge page" : "normal page",
+ errno);
+ return ret;
+ } else {
+ pr_info("alloc %s success %d time.",
+ hugepage ? "huge page" : "normal page",
+ i + 1);
+ }
+ sharepool_print();
+ mem_show();
+ }
+ return 0;
+}
+
+// After allocating, pause so the output can be checked manually with cat
+static int testcase1(void)
+{
+ char pid_str[SIZE];
+
+ alloc_large_repeat(false, 1);
+ pr_info("process %d suspended, cat /proc/%d/sp_group, then kill",
+ getpid(), getpid());
+
+ sprintf(pid_str, "/proc/%d/sp_group", getpid());
+ cat_attr(pid_str);
+ sleep(1);
+
+ return 0;
+}
+
+static int testcase2(void)
+{
+ char pid_str[SIZE];
+
+ alloc_large_repeat(true, 1);
+ pr_info("process %d suspended, cat /proc/%d/sp_group, then kill",
+ getpid(), getpid());
+
+ sprintf(pid_str, "/proc/%d/sp_group", getpid());
+ cat_attr(pid_str);
+ sleep(1);
+
+ return 0;
+}
+
+static int testcase3(void)
+{
+ int ret = 0;
+ char pid_str[SIZE];
+ sprintf(pid_str, "/proc/%d/sp_group", getpid());
+
+ pr_info("process %d no connection with sharepool, cat /proc/%d/sp_group ...",
+ getpid(), getpid());
+ cat_attr(pid_str);
+ sleep(1);
+
+ ret = alloc_large_repeat(true, 1);
+
+ pr_info("process %d now alloc sharepool memory, cat /proc/%d/sp_group ...",
+ getpid(), getpid());
+ cat_attr(pid_str);
+ sleep(1);
+
+ return ret;
+}
+
+static int testcase4(void)
+{
+ char pid_str[SIZE];
+
+ for (int i = 0; i < 100; i++) {
+ sprintf(pid_str, "/proc/%d/sp_group", i);
+ cat_attr(pid_str);
+ }
+
+ sleep(1);
+ return 0;
+}
+
+
+/* testcase1/2: allocate 1G of normal/huge pages once, then exit.
+ * testcase3/4: cat /proc/<pid>/sp_group with and without a sharepool association.
+ */
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Single process allocates 1G of normal pages once, then exits; check that the debug stats print correctly")
+	TESTCASE_CHILD(testcase2, "Single process allocates 1G of huge pages once, then exits; check that the debug stats print correctly")
+	TESTCASE_CHILD(testcase3, "Process not yet associated with sharepool: cat first, then join a group, allocate, and cat again")
+	TESTCASE_CHILD(testcase4, "cat /proc/1~100/sp_group; pass as long as no deadlock occurs")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
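testcase2 in the file above expects an unaligned request to come back rounded up to the hugepage size, using the `ALIGN_UP` macro defined at the top of the file. A small standalone sketch of that rounding (the `hugepage_round_up` wrapper is an illustrative name; the macro and `HP_SIZE` value are as defined in the tests):

```c
#include <assert.h>

/* Same macro as the test; align_to must be a power of two. */
#define ALIGN_UP(x, align_to) (((x) + ((align_to) - 1)) & ~((align_to) - 1))
#define HP_SIZE (2 * 1024 * 1024UL) /* 2M hugepage */

/* Round an allocation size up to the next hugepage boundary. */
static unsigned long hugepage_round_up(unsigned long size)
{
	return ALIGN_UP(size, HP_SIZE);
}
```

The mask trick only works for power-of-two alignments: adding `align_to - 1` carries into the next multiple, and the `& ~(align_to - 1)` clears the low bits back down to the boundary.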
diff --git a/tools/testing/sharepool/testcase/scenario_test/test_vmalloc_cgroup.c b/tools/testing/sharepool/testcase/scenario_test/test_vmalloc_cgroup.c
new file mode 100644
index 000000000000..c8e5748cdebe
--- /dev/null
+++ b/tools/testing/sharepool/testcase/scenario_test/test_vmalloc_cgroup.c
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Execution flow: vmalloc normal or huge pages, access the area, and exit without freeing
+ */
+
+#define PROC_NUM 20
+#define PROT (PROT_READ | PROT_WRITE)
+#define HP_SIZE (2 * 1024 * 1024UL)
+
+static int test(bool ishuge)
+{
+ int ret = 0;
+ unsigned long kaddr;
+ unsigned long size = 10UL * PMD_SIZE;
+
+ kaddr = wrap_vmalloc(size, ishuge);
+ if (!kaddr) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ return -1;
+ }
+
+ KAREA_ACCESS_SET('A', kaddr, size, out);
+ return ret;
+out:
+ return -1;
+}
+
+static int testcase1(void) { return test(false); }
+static int testcase2(void) { return test(true); }
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "vmalloc 20M of normal pages, exit without freeing")
+	TESTCASE_CHILD(testcase2, "vmalloc 20M of huge pages, exit without freeing")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/stress_test/Makefile b/tools/testing/sharepool/testcase/stress_test/Makefile
new file mode 100644
index 000000000000..30c6456130d9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/Makefile
@@ -0,0 +1,15 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ mkdir -p $(TOOL_BIN_DIR)/stress_test
+ cp $(testcases) $(TOOL_BIN_DIR)/stress_test
+ cp stress_test.sh $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf $(testcases)
diff --git a/tools/testing/sharepool/testcase/stress_test/sp_ro_fault_injection.sh b/tools/testing/sharepool/testcase/stress_test/sp_ro_fault_injection.sh
new file mode 100644
index 000000000000..2bc729f09fd0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/sp_ro_fault_injection.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+# fault injection test for SP_RO read only area
+
+fn_fail_page_alloc()
+{
+ echo Y > /sys/kernel/debug/fail_page_alloc/task-filter
+ echo 10 > /sys/kernel/debug/fail_page_alloc/probability
+ echo 1 > /sys/kernel/debug/fail_page_alloc/interval
+ printf %#x -1 > /sys/kernel/debug/fail_page_alloc/times
+ echo 0 > /sys/kernel/debug/fail_page_alloc/space
+ echo 2 > /sys/kernel/debug/fail_page_alloc/verbose
+ bash -c "echo 1 > /proc/self/make-it-fail && exec $*"
+}
+
+fn_page_alloc_fault()
+{
+ fn_fail_page_alloc ./api_test/ro_test
+}
+
+fn_page_alloc_fault
diff --git a/tools/testing/sharepool/testcase/stress_test/stress_test.sh b/tools/testing/sharepool/testcase/stress_test/stress_test.sh
new file mode 100755
index 000000000000..fd83c8d51015
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/stress_test.sh
@@ -0,0 +1,47 @@
+#!/bin/sh
+
+set -x
+
+echo 'test_u2k_add_and_kill -g 2999 -p 1023 -n 10000
+ test_alloc_add_and_kill -g 2999 -p 1023 -n 10000
+ test_concurrent_debug -g 2999 -p 1023 -n 10000
+ test_mult_u2k -n 5000 -s 1000 -r 500
+ test_alloc_free_two_process -g 2999 -p 1023 -n 100 -s 10000
+ test_statistics_stress
+ test_mult_proc_interface' | while read line
+do
+ let flag=0
+ ./stress_test/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase stress_test/$line failed"
+ let flag=1
+ fi
+
+ sleep 3
+
+	# dump spa_stat
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+ let flag=1
+ fi
+	# dump proc_stat
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+ let flag=1
+ fi
+
+ cat /proc/sharepool/proc_overview
+	# exit if a leak was detected
+ if [ $flag -eq 1 ] ;then
+ exit 1
+ fi
+ echo "testcase stress_test/$line success"
+
+ cat /proc/meminfo
+ free -m
+
+done
diff --git a/tools/testing/sharepool/testcase/stress_test/test_alloc_add_and_kill.c b/tools/testing/sharepool/testcase/stress_test/test_alloc_add_and_kill.c
new file mode 100644
index 000000000000..7278815bd328
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_alloc_add_and_kill.c
@@ -0,0 +1,347 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Execution flow:
+ * grandchild:
+ *     allocates memory 100 times in a row, frees it all, and repeats
+ * child:
+ *     keeps killing or forking grandchild processes
+ * Options:
+ *     -n number of times grandchild processes are killed or forked
+ *     -p number of processes per group
+ *     -g number of sharepool groups
+ *     -s size of each allocation made by a grandchild process
+ */
+
+#define NR_GROUP 3000
+#define MAX_PROC_PER_GRP 1024
+#define NR_HOLD_AREAS 100
+
+static sem_t *child_sync[NR_GROUP];
+static sem_t *grandchild_sync[NR_GROUP];
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int kill_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int grandchild_id)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", grandchild_id, getpid(), ##args)
+ int ret;
+
+	/* Wait until the parent has added us to the group */
+ do {
+ ret = sem_wait(child_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+		pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[NR_HOLD_AREAS] = {0};
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ alloc_infos[i].flag = 0;
+ alloc_infos[i].spg_id = group_id;
+ alloc_infos[i].size = alloc_size;
+ }
+
+ int top = 0;
+ int count = 0;
+ while (1) {
+ struct sp_alloc_info *info = alloc_infos + top++;
+ ret = ioctl_alloc(dev_fd, info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return -1;
+ } else {
+ if (IS_ERR_VALUE(info->addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", info->addr);
+ return -1;
+ }
+ }
+
+ memset((void *)info->addr, 'z', info->size);
+
+ if (top == NR_HOLD_AREAS) {
+ top = 0;
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ ret = ioctl_free(dev_fd, &alloc_infos[i]);
+ if (ret < 0) {
+ pr_local_info("sp_free failed, %d", ret);
+ return ret;
+ }
+ }
+ pr_info("grandchild process id:%d finished %dth alloc-free-100times-run.", grandchild_id, ++count);
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_per_group; i++) {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ //pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else {
+ //pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+ }
+
+			/* Tell the grandchild it has joined the group */
+ sem_post(child_sync[arg]);
+
+			/* Wait for the grandchild to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ pr_local_info("%dth sp group %d, create %d processes and add group success!!", arg, group_id, process_per_group);
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+	/* Randomly kill or fork processes */
+ for (int i = 0; i < kill_num; i++) {
+ sleep(1);
+ pr_info("group %d %dth interruption, %d times left.", group_id, i + 1, kill_num - i - 1);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ childs[idx] = 0;
+ } else {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+
+				/* Tell the grandchild it has joined the group */
+ sem_post(child_sync[arg]);
+
+				/* Wait for the grandchild to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage:\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of kill/fork iterations
+ kill_num = atoi(optarg);
+ if (kill_num > 100000 || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+			printf("unsupported option: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Allocate memory 100 times in a row, free it, and repeat, while constantly killing and re-forking group member processes")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
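The tests above wrap every `sem_wait()` in a `do { } while (errno == EINTR)` loop so that signals (e.g. `SIGCHLD` from the kill/fork churn) do not abort the wait. A standalone sketch of that retry idiom, generalized over any wait function so it can be exercised without a real semaphore (`wait_retry_eintr` and `fake_wait` are illustrative names, not from the test code):

```c
#include <assert.h>
#include <errno.h>

/* Retry a wait function while it fails with EINTR, as the tests do
 * around sem_wait(). */
static int wait_retry_eintr(int (*wait_fn)(void *), void *arg)
{
	int ret;

	do {
		ret = wait_fn(arg);
	} while (ret != 0 && errno == EINTR);
	return ret;
}

/* Fake wait that fails with EINTR a given number of times, then succeeds. */
static int fake_wait(void *arg)
{
	int *remaining = arg;

	if ((*remaining)-- > 0) {
		errno = EINTR;
		return -1;
	}
	return 0;
}
```

Only `EINTR` is retried; any other failure (such as `EINVAL` on a bad semaphore) is returned to the caller immediately.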
diff --git a/tools/testing/sharepool/testcase/stress_test/test_alloc_free_two_process.c b/tools/testing/sharepool/testcase/stress_test/test_alloc_free_two_process.c
new file mode 100644
index 000000000000..63621dbf5b7d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_alloc_free_two_process.c
@@ -0,0 +1,303 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Two groups, two processes each, all allocating and freeing memory many times concurrently.
+ *
+ * Semaphores provide synchronization; freshly created processes sleep first.
+ */
+
+#define NR_GROUP 3000
+#define MAX_PROC_PER_GRP 1024
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int alloc_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg, sem_t *child_sync, sem_t *grandchild_sync)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+ struct sp_alloc_info *alloc_infos = malloc(sizeof(*alloc_infos) * alloc_num);
+ if (!alloc_infos) {
+ pr_local_info("malloc failed");
+ return -1;
+ }
+
+	/* Wait until the parent has added us to the group */
+ do {
+ ret = sem_wait(child_sync);
+ } while ((ret != 0) && errno == EINTR);
+
+ sleep(1); // it seems sem_wait doesn't work as expected
+ pr_local_info("start!!, ret is %d, errno is %d", ret, errno);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ if (group_id < 0) {
+		pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ goto error_out;
+ }
+
+ for (int i = 0; i < alloc_num; i++) {
+		alloc_infos[i].flag = 0;
+		alloc_infos[i].spg_id = group_id;
+		alloc_infos[i].size = alloc_size;
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ goto error_out;
+ } else {
+ if (IS_ERR_VALUE(alloc_infos[i].addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_infos[i].addr);
+ goto error_out;
+ }
+ }
+
+ memset((void *)alloc_infos[i].addr, 'z', alloc_infos[i].size);
+ }
+
+ sem_post(grandchild_sync);
+ do {
+ ret = sem_wait(child_sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl free failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ pr_local_info("exit!!");
+ return 0;
+
+error_out:
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ return -1;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ sem_t *child_sync[MAX_PROC_PER_GRP] = {0};
+ sem_t *grandchild_sync[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+	// create sync semaphores for the grandchildren
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// create sync semaphores for the children
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// Fork grandchildren and add each one to the group
+ for (int i = 0; i < process_per_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num, child_sync[i], grandchild_sync[i]);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto error_out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+			/* Tell the grandchild it has joined the group */
+ sem_post(child_sync[i]);
+ }
+ }
+
+ for (int i = 0; i < process_per_group; i++)
+ do {
+ ret = sem_wait(grandchild_sync[i]);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < process_per_group; i++)
+ sem_post(child_sync[i]);
+ pr_local_info("grandchild-processes start to do sp_free");
+
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+ ret = -1;
+ }
+ }
+ pr_local_info("exit!!");
+ return ret;
+
+error_out:
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ };
+ pr_local_info("exit!!");
+ return -1;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage:\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of allocations per process
+ alloc_num = atoi(optarg);
+ if (alloc_num > 100000 || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Two groups, two processes each, allocating and freeing memory concurrently; basic verification")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/stress_test/test_concurrent_debug.c b/tools/testing/sharepool/testcase/stress_test/test_concurrent_debug.c
new file mode 100644
index 000000000000..6de3001e5632
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_concurrent_debug.c
@@ -0,0 +1,359 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Execution flow:
+ * grandchild:
+ *     allocates memory 100 times in a row, frees it all, and repeats
+ * child:
+ *     keeps killing or forking grandchild processes
+ * Options:
+ *     -n number of times grandchild processes are killed or forked
+ *     -p number of processes per group
+ *     -g number of sharepool groups
+ *     -s size of each allocation made by a grandchild process
+ */
+
+#define NR_GROUP 3000
+#define MAX_PROC_PER_GRP 1024
+#define NR_HOLD_AREAS 100
+
+static sem_t *child_sync[NR_GROUP];
+static sem_t *grandchild_sync[NR_GROUP];
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int kill_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int grandchild_id)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", grandchild_id, getpid(), ##args)
+ int ret;
+
+	/* Wait until the parent has added us to the group */
+ do {
+ ret = sem_wait(child_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+		pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[NR_HOLD_AREAS] = {0};
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ alloc_infos[i].flag = 0;
+ alloc_infos[i].spg_id = group_id;
+ alloc_infos[i].size = alloc_size;
+ }
+
+ int top = 0;
+ int count = 0;
+ while (1) {
+ struct sp_alloc_info *info = alloc_infos + top++;
+ ret = ioctl_alloc(dev_fd, info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return -1;
+ } else {
+ if (IS_ERR_VALUE(info->addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", info->addr);
+ return -1;
+ }
+ }
+
+ memset((void *)info->addr, 'z', info->size);
+
+ if (top == NR_HOLD_AREAS) {
+ top = 0;
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ ret = ioctl_free(dev_fd, &alloc_infos[i]);
+ if (ret < 0) {
+ pr_local_info("sp_free failed, %d", ret);
+ return ret;
+ }
+ }
+ pr_info("grandchild process id:%d finished %dth alloc-free-100times-run.", grandchild_id, ++count);
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_per_group; i++) {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ //pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else {
+ //pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+ }
+
+			/* Tell the grandchild it has joined the group */
+ sem_post(child_sync[arg]);
+
+			/* Wait for the grandchild to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ pr_local_info("%dth sp group %d, create %d processes and add group success!!", arg, group_id, process_per_group);
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+	/* Randomly kill or fork processes */
+ for (int i = 0; i < kill_num; i++) {
+ sleep(1);
+ pr_info("group %d %dth interruption, %d times left.", group_id, i + 1, kill_num - i - 1);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ childs[idx] = 0;
+ } else {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+
+ /* Notify the grandchild that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* Wait until the grandchild has picked up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage: -p process_per_group -g group_num -n kill_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+ case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+ case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+ case 'n': // number of kill/create iterations
+ kill_num = atoi(optarg);
+ if (kill_num > 100000 || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if ((long)alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+ pid_t pid;
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ pid = fork();
+ if (pid == 0) {
+ while(1) {
+ usleep(200);
+ sharepool_log("sharepool_log");
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, 0666, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, 0666, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, errno %d", errno);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, &status, 0);
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "repeatedly allocate and free memory while looping the diagnostic prints")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/stress_test/test_mult_u2k.c b/tools/testing/sharepool/testcase/stress_test/test_mult_u2k.c
new file mode 100644
index 000000000000..04a5a3e5c6e0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_mult_u2k.c
@@ -0,0 +1,514 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Nov 24 15:40:31 2020
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_ALLOC 100000
+#define MAX_SHARE 1000
+#define MAX_READ 100000
+
+static int alloc_num = 20;
+static int share_num = 20;
+static int read_num = 20;
+
+struct __thread_info {
+ struct sp_make_share_info *u2k_info;
+ struct karea_access_info *karea_info;
+};
+
+/*
+ * User-space process A joins a group, allocates and writes memory N, then shares it to the
+ * kernel via repeated u2k calls (share_num times); the kernel module reads N successfully
+ * through each kva, read_num times per kva. After process A stops sharing N, kernel reads
+ * of N fail. Process A then frees memory N.
+ */
+static void *grandchild1(void *arg)
+{
+ struct karea_access_info *karea_info = (struct karea_access_info*)arg;
+ int ret = 0;
+ for (int j = 0; j < read_num; j++) {
+ ret = ioctl_karea_access(dev_fd, karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ pthread_exit((void *)(long)ret);
+ }
+ pr_info("thread read u2k area %dth time success", j);
+ }
+ pr_info("thread read u2k area %d times success", read_num);
+ pthread_exit((void *)(long)ret);
+}
+
+static int child1(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+ pr_info("want to add group_id: %d", group_id);
+
+ // add group()
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("now added into group_id: %d", alloc_info->spg_id);
+
+ // alloc()
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("alloc %#lx bytes success", alloc_info->size);
+
+ // write
+ memset((void *)alloc_info->addr, 'o', alloc_info->size);
+ pr_info("memset success");
+
+ // u2k
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+ struct karea_access_info *karea_info = (struct karea_access_info*)malloc(share_num * sizeof(struct karea_access_info));
+
+ for (int i = 0; i < share_num; i++) { // the same user memory can be shared with the kernel many times
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ karea_info[i].mod = KAREA_CHECK;
+ karea_info[i].value = 'o';
+ karea_info[i].addr = u2k_info.addr;
+ karea_info[i].size = u2k_info.size;
+ }
+ pr_info("u2k share %d times success", share_num);
+
+ // kernel re-reads each shared kva read_num times (disabled: too slow)
+ //for (int j = 0; j < read_num; j++) {
+ for (int i = 0; i < share_num; i++) {
+ ret = ioctl_karea_access(dev_fd, &karea_info[i]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+ pr_info("kernel read %dth area of %#lx bytes success", i, alloc_info->size);
+ }
+ //}
+ //pr_info("kernel read %d times success", read_num);
+
+ // concurrent kernel reads: one thread per shared kva
+ pthread_t childs[MAX_SHARE] = {0};
+ int status = 0;
+ for (int i = 0; i < share_num; i++) {
+ ret = pthread_create(&childs[i], NULL, grandchild1, (void *)&karea_info[i]);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ exit(-1);
+ }
+ }
+ pr_info("create %d threads success", share_num);
+
+ void *child_ret;
+ for (int i = 0; i < share_num; i++) {
+ pthread_join(childs[i], &child_ret);
+ if ((long)child_ret != 0) {
+ pr_info("grandchild1 %d test failed, %ld", i, (long)child_ret);
+ return (int)(long)child_ret;
+ }
+ }
+ pr_info("exit %d threads success", share_num);
+
+ for (int i = 0; i < share_num; i++) {
+ u2k_info.addr = karea_info[i].addr;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+ pr_info("unshare u2k area %d times success", share_num);
+
+ /*
+ * Accessing the memory after unshare crashes the kernel; run this only
+ * during manual testing.
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ free(karea_info);
+ return ret;
+}
+
+/*
+ * User-space process A joins a group, allocates and writes alloc_num memory regions, and
+ * shares each to the kernel via u2k; the kernel module reads every region through its kva
+ * successfully, read_num times per kva. After process A stops sharing, kernel reads fail.
+ * Process A then frees the memory.
+ */
+static void* grandchild2(void *arg)
+{
+ int ret = 0;
+ struct __thread_info* thread2_info = (struct __thread_info*)arg;
+ ret = ioctl_u2k(dev_fd, thread2_info->u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ pthread_exit((void *)(long)ret);
+ }
+ thread2_info->karea_info->mod = KAREA_CHECK;
+ thread2_info->karea_info->value = 'p';
+ thread2_info->karea_info->addr = thread2_info->u2k_info->addr;
+ thread2_info->karea_info->size = thread2_info->u2k_info->size;
+ pthread_exit((void *)(long)ret);
+}
+
+static int child2(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ // allocate alloc_num memory regions
+ struct sp_alloc_info *all_alloc_info = (struct sp_alloc_info*)malloc(alloc_num * sizeof(struct sp_alloc_info));
+ for (int i = 0; i < alloc_num; i++) {
+ all_alloc_info[i].flag = alloc_info->flag;
+ all_alloc_info[i].spg_id = alloc_info->spg_id;
+ all_alloc_info[i].size = alloc_info->size;
+ ret = ioctl_alloc(dev_fd, &all_alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)all_alloc_info[i].addr, 'p', all_alloc_info[i].size);
+ }
+
+ struct sp_make_share_info *all_u2k_info = (struct sp_make_share_info*)malloc(alloc_num * sizeof(struct sp_make_share_info));
+
+ struct karea_access_info *karea_info = (struct karea_access_info*)malloc(alloc_num * sizeof(struct karea_access_info));
+
+ // concurrent u2k calls
+ // create alloc_num threads, each calling u2k on one allocated region and storing the returned address
+ pthread_t childs[MAX_ALLOC] = {0};
+ struct __thread_info thread2_info[MAX_ALLOC];
+ int status = 0;
+ for (int i = 0; i < alloc_num; i++) {
+ all_u2k_info[i].uva = all_alloc_info[i].addr;
+ all_u2k_info[i].size = all_alloc_info[i].size;
+ all_u2k_info[i].pid = getpid();
+ thread2_info[i].u2k_info = &all_u2k_info[i];
+ thread2_info[i].karea_info = &karea_info[i];
+ ret = pthread_create(&childs[i], NULL, grandchild2, (void *)&thread2_info[i]);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ exit(-1);
+ }
+ }
+
+ // join all threads
+ void *child_ret;
+ for (int i = 0; i < alloc_num; i++) {
+ pthread_join(childs[i], &child_ret);
+ if ((long)child_ret != 0) {
+ pr_info("grandchild2 %d test failed, %ld", i, (long)child_ret);
+ return (int)(long)child_ret;
+ }
+ }
+
+ // kernel reads each region read_num times
+ for (int j = 0; j < read_num; j++) {
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_karea_access(dev_fd, &karea_info[i]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+ }
+ }
+
+ // unshare all regions
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_unshare(dev_fd, &all_u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ /*
+ * Accessing the memory after unshare crashes the kernel; run this only
+ * during manual testing.
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ // free all regions
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, &all_alloc_info[i]);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+ free(all_alloc_info);
+ free(all_u2k_info);
+ free(karea_info);
+ return ret;
+}
+
+/*
+ * User-space process A joins a group, allocates and writes memory N, then repeats share_num
+ * times: u2k-share N to the kernel, have the kernel module read N successfully, then unshare.
+ * After the last unshare, kernel reads of N fail. Process A then frees memory N.
+ */
+static int child3(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ // alloc
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info->addr, 'q', alloc_info->size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+ // repeat: u2k -> kernel read -> unshare
+ for (int i = 0; i < share_num; i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'q',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ /*
+ * Accessing the memory after unshare crashes the kernel; run this only
+ * during manual testing.
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ return ret;
+}
+
+static void print_help()
+{
+ printf("Usage: ./test_mult_u2k -n alloc_num -s share_num -r read_num\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "n:s:r:")) != -1) {
+ switch (opt) {
+ case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > MAX_ALLOC || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's': // number of u2k shares
+ share_num = atoi(optarg);
+ if (share_num > MAX_SHARE || share_num <= 0) {
+ printf("share number invalid\n");
+ return -1;
+ }
+ break;
+ case 'r': // number of kernel reads
+ read_num = atoi(optarg);
+ if (read_num > MAX_READ || read_num <= 0) {
+ printf("read number invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+#define PROC_NUM 4
+static struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .spg_id = 10,
+ .size = 100 * PAGE_SIZE, //400K
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .spg_id = 12,
+ .size = 10 * PMD_SIZE, // 20M
+ },
+ {
+ .flag = SP_DVPP,
+ .spg_id = 19,
+ .size = 100000, // ~100K
+ },
+ {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = 19,
+ .size = 10000000, // ~10M
+ },
+};
+
+int (*child_funcs[])(struct sp_alloc_info *) = {
+ child1,
+ child2,
+ child3,
+};
+
+static int testcase(int child_idx)
+{
+ int ret = 0;
+ pid_t procs[PROC_NUM];
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ ret = -1;
+ goto error_out;
+ } else if (pid == 0) {
+ exit(child_funcs[child_idx](alloc_infos + i));
+ } else {
+ procs[i] = pid;
+ }
+ }
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ int status = 0;
+ waitpid(procs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase failed!!, func: %d, alloc info: %d", child_idx, i);
+ ret = -1;
+ } else {
+ pr_info("testcase success!!, func: %d, alloc info: %d", child_idx, i);
+ }
+ }
+
+ return ret;
+error_out:
+ return -1;
+}
+
+static int testcase1(void) { return testcase(0); }
+static int testcase2(void) { return testcase(1); }
+static int testcase3(void) { return testcase(2); }
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "user space repeatedly shares memory to the kernel via u2k; the kernel module repeatedly reads the same memory through each kva")
+ TESTCASE_CHILD(testcase2, "threads each call u2k on one allocated region and store the returned address; the kernel reads the memory")
+ TESTCASE_CHILD(testcase3, "loop of user-space u2k -> kernel read -> unshare")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/stress_test/test_sharepool_enhancement_stress_cases.c b/tools/testing/sharepool/testcase/stress_test/test_sharepool_enhancement_stress_cases.c
new file mode 100644
index 000000000000..6e34d2fc8044
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_sharepool_enhancement_stress_cases.c
@@ -0,0 +1,692 @@
+#include "sharepool_lib.h"
+#include "sem_use.h"
+#include <stdlib.h>
+#include <errno.h>
+#include <assert.h>
+#include <pthread.h>
+#include <sys/types.h>
+
+#define PROC_NUM 8
+#define THREAD_NUM 5
+#define GROUP_NUM 16
+#define ALLOC_TYPE 4
+#define REPEAT_TIMES 2
+#define ALLOC_SIZE PAGE_SIZE
+#define PROT (PROT_READ | PROT_WRITE)
+
+static int group_ids[GROUP_NUM];
+static int default_id = 1;
+static int semid;
+
+static int add_multi_group();
+static int check_multi_group();
+static int delete_multi_group();
+static int process();
+void *thread_and_process_helper(int group_id);
+void *del_group_thread(void *arg);
+void *del_proc_from_group(void *arg);
+
+
+// share-group process stress test: many processes joining and leaving groups
+static int testcase1(void)
+{
+ int group_num = 10;
+ int temp_group_id = 1;
+ int proc_num = 10;
+ int prints_num = 3;
+
+ int ret = 0;
+
+ int childs[proc_num];
+ int prints[prints_num];
+
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, temp_group_id);
+ if (ret < 0) {
+ pr_info("parent %d add into group failed. errno: %d", getpid(), ret);
+ return -1;
+ }
+
+ // fork children that repeatedly join and leave the group
+ for (int i = 0; i < proc_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, temp_group_id);
+ if (ret < 0)
+ pr_info("child %d add into group failed. errno: %d", getpid(), ret);
+ ret = wrap_del_from_group(getpid(), temp_group_id);
+ if (ret < 0)
+ pr_info("child %d del from group failed. errno: %d", getpid(), ret);
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ // keep printing through the diagnostic interface
+ for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ // clean up the children when the test ends
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ int status;
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ int status;
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+// share-group upper-limit stress test
+static int testcase2(void)
+{
+ int ret = 0;
+
+ int default_id = 1;
+ int group_id = 2;
+
+ int prints_num = 3;
+ int prints[prints_num];
+
+ // keep printing through the diagnostic interface
+ for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ // create the default share group
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("parent %d add into group failed. errno: %d", getpid(), ret);
+ return -1;
+ }
+
+ // keep creating share groups
+ int group_create_pid = fork();
+ if (group_create_pid == 0) {
+ while (group_id > 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, group_id);
+ if (ret < 0)
+ pr_info("group %d creation failed. errno: %d", group_id, ret);
+ group_id++;
+ }
+ }
+
+ // keep creating processes that join the default share group
+ int process_create_pid = fork();
+ if (process_create_pid == 0) {
+ while (1) {
+ int temp_pid = fork();
+ if (temp_pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("child %d add into group failed. errno: %d", getpid(), ret);
+ }
+ sleep(3);
+ exit(0); /* exit so the grandchild does not keep forking */
+ }
+ }
+ }
+
+ sleep(10);
+
+ int status;
+ kill(group_create_pid, SIGKILL);
+ waitpid(group_create_pid, &status, 0);
+ kill(process_create_pid, SIGKILL);
+ waitpid(process_create_pid, &status, 0);
+
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+// share-group memory alloc/free stress test
+static int testcase3(void)
+{
+ int default_id = 1;
+ int ret = 0;
+ int proc_num = 1000;
+ int childs[proc_num];
+
+ int page_size = 4096;
+
+ int prints_num = 3;
+ int prints[prints_num];
+
+ // keep printing through the diagnostic interface
+ for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ // fork children that repeatedly allocate and free 4K memory
+ for (int i = 0; i < proc_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ while(1){
+ void *addr;
+ addr = wrap_sp_alloc(default_id, page_size, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+// large-size share-group memory alloc/free stress test
+static int testcase4(void)
+{
+ int default_id = 1;
+ int ret = 0;
+ int proc_num = 50;
+ int childs[proc_num];
+
+ int page_size = 1073741824; // 1G per allocation
+
+ int prints_num = 3;
+ int prints[prints_num];
+
+ // keep printing through the diagnostic interface
+ for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ // children repeatedly allocate and release the memory
+ for (int i = 0; i < proc_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, default_id);
+ if (ret < 0) {
+ pr_info("process %d add group %d failed.", getpid(), default_id);
+ } else {
+ pr_info("process %d add group %d success.", getpid(), default_id);
+ }
+ while(1){
+ void *addr;
+ addr = wrap_sp_alloc(default_id, page_size, 0);
+ if (addr == (void *)-1) {
+ pr_info("process %d alloc failed.", getpid());
+ } else {
+ pr_info("process %d alloc success.", getpid());
+ }
+ ret = wrap_sp_free(addr);
+ if (ret < 0) {
+ pr_info("process %d free failed.", getpid());
+ } else {
+ pr_info("process %d free success.", getpid());
+ }
+ }
+ } else {
+ childs[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < proc_num; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ }
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+
+// diagnostic-interface print stress test
+static int testcase5(void)
+{
+ int ret = 0;
+
+ int default_id = 1;
+ int group_id = 2;
+
+ int prints_num = 3;
+ int prints[prints_num];
+
+ // keep printing through the diagnostic interface
+ for (int i = 0; i < prints_num; i++) {
+ int pid = fork();
+ if (pid == 0) {
+ while (1) {
+ sharepool_print();
+ usleep(10000);
+ }
+ } else {
+ prints[i] = pid;
+ }
+ }
+
+ sleep(10);
+
+ int status;
+
+ for (int i = 0; i < prints_num; i++) {
+ kill(prints[i], SIGKILL);
+ waitpid(prints[i], &status, 0);
+ }
+
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+ //TESTCASE_CHILD(testcase1, "share-group process stress: many processes join and leave groups")
+ //TESTCASE_CHILD(testcase2, "share-group upper-limit stress")
+ //TESTCASE_CHILD(testcase3, "share-group memory alloc/free stress")
+ //TESTCASE_CHILD(testcase4, "large-size share-group memory alloc/free stress")
+ //TESTCASE_CHILD(testcase5, "diagnostic-interface print stress")
+};
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else
+ for (int i = 0; i < GROUP_NUM; i++)
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ return ret;
+}
+
+static int delete_multi_group()
+{
+ int ret = 0;
+ int fail = 0, suc = 0;
+ // delete from all groups
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = wrap_del_from_group(getpid(), group_ids[i]);
+ if (ret < 0) {
+ //pr_info("process %d delete from group %d failed, errno: %d", getpid(), group_ids[i], errno);
+ fail++;
+ }
+ else {
+ pr_info("process %d delete from group %d success", getpid(), group_ids[i]);
+ suc++;
+ }
+ }
+
+ return fail;
+}
+
+static int process()
+{
+ int ret = 0;
+ for (int j = 0; j < REPEAT_TIMES; j++) {
+ for (int i = 0; i < GROUP_NUM; i++) {
+ if (thread_and_process_helper(group_ids[i]) != NULL) {
+ pr_info("thread_and_process_helper failed");
+ return -1;
+ }
+ }
+ }
+
+ return ret;
+}
+
+static int try_del_from_group(int group_id)
+{
+ int ret = wrap_del_from_group(getpid(), group_id);
+
+ return ret < 0 ? -errno : 0;
+}
+
+void *thread_and_process_helper(int group_id)
+{
+ int ret = 0, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[ALLOC_TYPE] = {0};
+ struct sp_make_share_info u2k_info[ALLOC_TYPE] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+ // hugepage
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+ // hugepage, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+ // normal page, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+ // normal page
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+ //pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+ //pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ioctl_vfree(dev_fd, &vmalloc_info);
+ ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ assert(try_del_from_group(group_id) == -EINVAL);
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < ALLOC_TYPE; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ return NULL;
+
+error:
+ return (void *)-1;
+}
+
+void *del_group_thread(void *arg)
+{
+ int ret = 0;
+ int i = (int)(long)arg;
+
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+
+ pr_info("thread %d now tries to exit from group %d", getpid() + i + 1, default_id);
+ ret = wrap_del_from_group(getpid() + i + 1, default_id);
+ if (ret < 0)
+ pthread_exit((void *)-1);
+ pthread_exit((void *)0);
+}
+
+void *del_proc_from_group(void *arg)
+{
+ sem_dec_by_one(semid);
+ pthread_exit((void *)(long)wrap_del_from_group((int)(long)arg, default_id));
+}
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
+
diff --git a/tools/testing/sharepool/testcase/stress_test/test_u2k_add_and_kill.c b/tools/testing/sharepool/testcase/stress_test/test_u2k_add_and_kill.c
new file mode 100644
index 000000000000..198c93afde76
--- /dev/null
+++ b/tools/testing/sharepool/testcase/stress_test/test_u2k_add_and_kill.c
@@ -0,0 +1,358 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 27 13:45:03 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * group_num groups, each with process_per_group processes. Every grandchild
+ * process loops forever over: sp_alloc -> u2k -> kernel reads the memory ->
+ * unshare -> sp_free.
+ * After spawning all grandchildren, the child process repeats kill_num times:
+ * kill a grandchild or create a new one, keeping the total at or below
+ * process_per_group.
+ */
+
+#define MAX_GROUP 3000
+#define MAX_PROC_PER_GRP 1024
+#define MAX_KILL 100000
+
+static sem_t *child_sync[MAX_GROUP];
+static sem_t *grandchild_sync[MAX_GROUP];
+
+static int group_num = 100;
+static int process_per_group = 100;
+static int kill_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+	/* wait for the parent to add this process to the group */
+ do {
+ ret = sem_wait(child_sync[arg / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[arg / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+		pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = alloc_size,
+ };
+
+ while (1) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return ret;
+ } else {
+ if (IS_ERR_VALUE(alloc_info.addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_info.addr);
+ return -1;
+ }
+ }
+ memset((void *)alloc_info.addr, 'r', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'r',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_local_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+	// create all of this group's child processes in turn
+ for (int i = 0; i < process_per_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+		} else if (pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+			// add the child process to the group
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto out;
+			} else {
+				//pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+			}
+
+			/* notify the child that the group join succeeded */
+ sem_post(child_sync[arg]);
+
+			/* wait for the child to read its group membership info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+
+	/* randomly kill or create processes */
+ for (int i = 0; i < kill_num; i++) {
+ printf("kill/create %dth time.\n", i);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+			// kill the process in this slot
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ printf("We killed %dth process %d!\n", idx, childs[idx]);
+ childs[idx] = 0;
+ } else {
+ printf("We are going to create a new process.\n");
+			// this slot's process was already killed; create a new one to fill it
+ int num = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+			} else if (pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+				/* notify the child that the group join succeeded */
+ sem_post(child_sync[arg]);
+
+				/* wait for the child to read its group membership info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_u2k_add_and_kill -g group_num -p proc_num -n kill_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of kill/create iterations
+ kill_num = atoi(optarg);
+ if (kill_num > MAX_KILL || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+			if ((long)alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int status = 0;
+ pid_t childs[MAX_GROUP];
+
+	// create the per-group grandchild sync semaphores
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+		grandchild_sync[i] = sem_open(buf, O_CREAT, 0600, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// create the per-group child sync semaphores
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+		child_sync[i] = sem_open(buf, O_CREAT, 0600, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// fork one child process per group
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ pr_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+	// wait for the per-group children to finish and collect their status
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Loop sp_alloc -> u2k -> kernel reads the memory -> unshare -> sp_free, while killing processes / creating new ones that join the group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_all/Makefile b/tools/testing/sharepool/testcase/test_all/Makefile
new file mode 100644
index 000000000000..05243de93e4a
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_all/Makefile
@@ -0,0 +1,8 @@
+test_all: test_all.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+install: test_all
+ cp $^ $(TOOL_BIN_DIR)
+
+clean:
+ rm -rf test_all
diff --git a/tools/testing/sharepool/testcase/test_all/test_all.c b/tools/testing/sharepool/testcase/test_all/test_all.c
new file mode 100644
index 000000000000..3e0698f9b1f6
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_all/test_all.c
@@ -0,0 +1,285 @@
+/*
+ * compile: gcc test_all.c sharepool_lib.so -o test_all -lpthread
+ */
+
+#include <sys/types.h>
+#include <unistd.h>
+#include <pthread.h>
+#include <stdio.h>
+
+#include "sharepool_lib.h"
+
+#define GROUP_ID 1
+#define TIMES 4
+int fd;
+void *thread(void *arg)
+{
+ int ret, group_id, i;
+ pid_t pid;
+ unsigned long invalid_addr = 0x30000;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[TIMES] = {0};
+ struct sp_make_share_info u2k_info[TIMES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ printf("enter thread, pid is %d, thread id is %lu\n\n", getpid(), pthread_self());
+
+ /* check sp group */
+ pid = getpid();
+ group_id = ioctl_find_first_group(fd, pid);
+ if (group_id != GROUP_ID) {
+ printf("query group id is %d, but expected group id is %d\n",
+ group_id, GROUP_ID);
+ goto error;
+ }
+
+ /* check invalid share pool addr */
+ judge_ret = ioctl_judge_addr(fd, invalid_addr);
+	if (judge_ret) {
+ printf("expect an invalid share pool addr\n");
+ goto error;
+ }
+
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = GROUP_ID;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = GROUP_ID;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = GROUP_ID;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = GROUP_ID;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ for (i = 0; i < TIMES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(fd, &alloc_info[i]);
+ if (ret < 0) {
+ printf("ioctl alloc failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ printf("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ printf("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(fd, alloc_info[i].addr);
+		if (!judge_ret) {
+ printf("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ printf("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(fd, &u2k_info[i]);
+ if (ret < 0) {
+ printf("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ printf("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ printf("u2k return addr %lx, check memory content succ.\n",
+ u2k_info[i].addr);
+ }
+ }
+ printf("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(fd, &vmalloc_info);
+ if (ret < 0) {
+ printf("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ printf("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = SPG_ID_DEFAULT;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = SPG_ID_DEFAULT;
+
+ /* k2u */
+ ret = ioctl_k2u(fd, &k2u_info);
+ if (ret < 0) {
+ printf("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ printf("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ printf("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(fd, &k2u_huge_info);
+ if (ret < 0) {
+ printf("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ printf("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ printf("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ printf("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		printf("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ printf("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		printf("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(fd, &k2u_info);
+ if (ret < 0) {
+ printf("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(fd, &k2u_huge_info);
+ if (ret < 0) {
+ printf("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(fd, &vmalloc_info);
+ if (ret < 0) {
+ printf("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ printf("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ for (i = 0; i < TIMES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(fd, &u2k_info[i]);
+ if (ret < 0) {
+ printf("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(fd, &alloc_info[i]);
+ if (ret < 0) {
+ printf("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ printf("\nfinish running thread\n");
+ pthread_exit((void *)0);
+
+error:
+ pthread_exit((void *)1);
+}
+
+int main(void)
+{
+ int ret = 0;
+ struct sp_add_group_info ag_info = {0};
+ pthread_t tid;
+ void *tret;
+
+ fd = open_device();
+ if (fd < 0) {
+ return -1;
+ }
+
+ pid_t pid = getpid();
+ ag_info.pid = pid;
+ ag_info.prot = PROT_READ | PROT_WRITE;
+ ag_info.spg_id = GROUP_ID;
+ ret = ioctl_add_group(fd, &ag_info);
+ if (ret < 0) {
+ close_device(fd);
+ return -1;
+ }
+ printf("ioctl add group pid is %d, spg_id is %d\n", pid, ag_info.spg_id);
+
+ ret = pthread_create(&tid, NULL, thread, NULL);
+ if (ret != 0)
+ printf("%s: create thread error\n", __func__);
+
+ ret = pthread_join(tid, &tret);
+ if (ret != 0)
+ printf("%s: can't join thread\n", __func__);
+
+ close_device(fd);
+
+ if ((long)tret != 0) {
+ printf("testcase execution failed\n");
+ return -1;
+ }
+ printf("testcase execution is successful\n");
+ return 0;
+}
+
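test_all.c passes the thread's pass/fail verdict through pthread_exit()'s `void *` payload and recovers it with pthread_join(). A condensed sketch of that round trip, with hypothetical names (`worker`, `run_worker`) and `intptr_t` casts to keep the int-through-pointer conversion well defined:

```c
#include <pthread.h>
#include <stdint.h>

/* The worker encodes success/failure in the pointer-sized exit value,
 * just as thread() does with pthread_exit((void *)0 / (void *)1). */
static void *worker(void *arg)
{
	intptr_t should_fail = (intptr_t)arg;

	pthread_exit((void *)(should_fail ? 1L : 0L));
}

/* Returns the worker's status: 0 on success, 1 on failure,
 * -1 on pthread errors. */
int run_worker(int should_fail)
{
	pthread_t tid;
	void *tret;

	if (pthread_create(&tid, NULL, worker, (void *)(intptr_t)should_fail))
		return -1;
	if (pthread_join(tid, &tret))
		return -1;
	return (int)(intptr_t)tret;
}
```

This keeps the failure signal inside the thread API instead of a shared global, which is why main() only needs the single `(long)tret != 0` check.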
diff --git a/tools/testing/sharepool/testcase/test_mult_process/Makefile b/tools/testing/sharepool/testcase/test_mult_process/Makefile
new file mode 100644
index 000000000000..9a20b0d1fa32
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/Makefile
@@ -0,0 +1,16 @@
+MODULEDIR:=mult_add_group_test mult_k2u_test mult_u2k_test mult_debug_test stress_test
+
+all: tooldir
+
+tooldir:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n; done
+install:
+ mkdir -p $(TOOL_BIN_DIR)/test_mult_process
+ cp test_proc_interface.sh $(TOOL_BIN_DIR)
+ cp test_proc_interface.sh $(TOOL_BIN_DIR)/test_mult_process
+ cp test_mult_process.sh $(TOOL_BIN_DIR)
+ for n in $(MODULEDIR); do $(MAKE) -C $$n install; done
+clean:
+ for n in $(MODULEDIR); do $(MAKE) -C $$n clean; done
+
+
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_add_multi_cases.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_add_multi_cases.c
new file mode 100644
index 000000000000..67398c8ac927
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_add_multi_cases.c
@@ -0,0 +1,255 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 25 08:21:16 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/sem.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * add_tasks_to_different_group: 1000 children concurrently join different groups,
+ *     then exit concurrently --- tests concurrent multi-process group joining
+ * add_tasks_to_auto_group: 1000 children concurrently join different groups
+ *     (group ids auto-assigned), then exit concurrently --- tests SPG_ID_AUTO
+ * add_tasks_and_kill: 100 children join the same group while group info is
+ *     queried in a loop, then the children are killed one by one --- tests the
+ *     group query interface
+ * addgroup_and_querygroup: fork a child that queries group info while the parent
+ *     adds it to a random group, then the child exits; repeat 10000 times.
+ */
+
+/*
+ * Multiple processes concurrently join distinct designated groups (one process
+ * per group), then exit concurrently.
+ * (A background process keeps calling sp_group_id_by_pid on every pid.)
+ */
+#define TEST_ADDTASK_SEM_KEY 9834
+
+static int add_tasks_to_group_child(int semid, int group_id)
+{
+ int ret;
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+	struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group %d failed, errno: %d", group_id, errno);
+ ret = -1;
+ goto out;
+ }
+
+ if (group_id == SPG_ID_AUTO) {
+ if (ag_info.spg_id < SPG_ID_AUTO_MIN || ag_info.spg_id > SPG_ID_AUTO_MAX) {
+ pr_info("invalid spg_id returned: %d", ag_info.spg_id);
+ ret = -1;
+ goto out;
+ }
+ }
+
+out:
+	// signal that this process has finished joining the group
+ semop(semid, &sembuf, 1);
+	// wait until all child processes have finished joining
+ sembuf.sem_op = 0;
+ semop(semid, &sembuf, 1);
+
+ return ret;
+}
+
+#define NR_CHILD 1000
+static int add_tasks_to_group(int group_id)
+{
+ int i;
+ pid_t childs[NR_CHILD] = {0};
+ int semid = semget(TEST_ADDTASK_SEM_KEY, 1, IPC_CREAT | IPC_EXCL | 0644);
+ if (semid < 0) {
+ pr_info("open systemV semaphore failed, errno: %d", errno);
+ return -1;
+ }
+
+ int ret = semctl(semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto error_out;
+ }
+
+ for (i = 0; i < NR_CHILD; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ for (i--; i >= 0; i--) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ }
+ ret = -1;
+ goto error_out;
+ } else if (pid == 0) {
+ exit(add_tasks_to_group_child(semid, group_id == SPG_ID_AUTO ? group_id : i + 2));
+ }
+
+ childs[i] = pid;
+ }
+
+	// release the children so they start joining groups
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = NR_CHILD * 2,
+ .sem_flg = 0,
+ };
+ semop(semid, &sembuf, 1);
+
+	// wait for all children to finish joining
+ sembuf.sem_op = 0;
+ semop(semid, &sembuf, 1);
+
+ int status;
+ for (int i = 0; i < NR_CHILD; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child(pid:%d) exit unexpected", childs[i]);
+ ret = -1;
+ }
+ }
+
+error_out:
+ if (semctl(semid, 0, IPC_RMID) < 0)
+ pr_info("sem setval failed, %s", strerror(errno));
+ return ret;
+}
+
+/*
+ * Multiple processes concurrently join distinct designated groups (one process
+ * per group), then exit concurrently.
+ * (A background process keeps calling sp_group_id_by_pid on every pid.)
+ */
+static int add_tasks_to_different_group(void)
+{
+ return add_tasks_to_group(0);
+}
+
+/*
+ * Multiple processes concurrently join auto-assigned groups (SPG_ID_AUTO),
+ * then exit concurrently.
+ */
+static int add_tasks_to_auto_group(void)
+{
+ return add_tasks_to_group(SPG_ID_AUTO);
+}
+
+/*
+ * Processes join the same group one by one, then are killed in join order.
+ */
+#define ADD_TASKS_AND_KILL_CHILD_NUM 100
+static int add_tasks_and_kill(void)
+{
+ int group_id = 234, ret = 0, i;
+ pid_t childs[ADD_TASKS_AND_KILL_CHILD_NUM] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .spg_id = group_id,
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ for (i = 0; i < ADD_TASKS_AND_KILL_CHILD_NUM; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ ret = -1;
+ goto out;
+ } else if (pid == 0) {
+ while (1) {
+ ioctl_find_first_group(dev_fd, getpid());
+ }
+ }
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add task(pid:%d) to group(id:%d) failed, errno: %d", pid, group_id, errno);
+ ret = -1;
+ goto out;
+ }
+ childs[i] = pid;
+ }
+
+out:
+ for (i--; i >= 0; i--) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+ }
+
+ return ret;
+}
+
+/*
+ * Run group-join and group-query operations concurrently.
+ */
+static int addgroup_and_querygroup(void)
+{
+ int ret = 0;
+
+ srand((unsigned int)time(NULL));
+ for (int i = 1; !ret && i < 10000; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ ret = ioctl_find_first_group(dev_fd, getpid());
+ exit(ret);
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = rand() % (SPG_ID_AUTO_MIN - 1) + 1,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d, pid: %d, spg_id: %d",
+ errno, pid, ag_info.spg_id);
+ }
+ waitpid(pid, NULL, 0);
+ }
+
+	// A failed join may simply mean the target process already exited, so failures
+	// are not treated as errors; this is a stress case that only needs to run
+	// without kernel-side anomalies.
+ return 0;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(add_tasks_to_different_group, "Multiple processes concurrently join distinct designated groups (one process per group), then exit concurrently.")
+	TESTCASE_CHILD(add_tasks_to_auto_group, "Multiple processes concurrently join random groups (SPG_ID_AUTO), then exit.")
+	TESTCASE_CHILD(add_tasks_and_kill, "Processes join the same group one by one, then are killed in join order.")
+	TESTCASE_CHILD(addgroup_and_querygroup, "Concurrently add the same process (pid) to a group and query its group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
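The add_tasks_to_group() case above builds a start gate and a completion barrier out of a single System V semaphore: the value starts at 0, the parent adds 2 * NR_CHILD tokens, each child consumes one token before and one after joining, and a wait-for-zero op releases everyone once all tokens are gone. A self-contained sketch of that scheme (the demo names are hypothetical, and fork error handling is elided for brevity):

```c
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NR_WORKERS 4

/* semctl(2) requires the caller to define union semun on Linux. */
union semun {
	int val;
	struct semid_ds *buf;
	unsigned short *array;
};

/* Start gate plus completion barrier from one System V semaphore,
 * mirroring the TEST_ADDTASK_SEM_KEY scheme: the value starts at 0 so
 * all children block, the parent releases 2 * NR_WORKERS tokens, each
 * child consumes one token before and one after its work, and a
 * "wait for zero" op only completes once every token is gone. */
int sysv_barrier_demo(void)
{
	struct sembuf take = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
	struct sembuf zero = { .sem_num = 0, .sem_op = 0, .sem_flg = 0 };
	struct sembuf open_gate = { .sem_num = 0, .sem_op = 2 * NR_WORKERS, .sem_flg = 0 };
	union semun arg = { .val = 0 };
	int i, status, ok = 0;

	int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
	if (semid < 0 || semctl(semid, 0, SETVAL, arg) < 0)
		return -1;

	for (i = 0; i < NR_WORKERS; i++) {
		if (fork() == 0) {
			semop(semid, &take, 1);	/* wait at the start gate */
			/* ... per-child test body would run here ... */
			semop(semid, &take, 1);	/* hand back the second token */
			semop(semid, &zero, 1);	/* leave once everyone is done */
			_exit(0);
		}
	}

	semop(semid, &open_gate, 1);	/* release every child at once */
	semop(semid, &zero, 1);		/* all 2*NR tokens consumed */

	for (i = 0; i < NR_WORKERS; i++)
		if (wait(&status) > 0 && WIFEXITED(status) && WEXITSTATUS(status) == 0)
			ok++;
	semctl(semid, 0, IPC_RMID);
	return ok == NR_WORKERS ? 0 : -1;
}
```

The semaphore value only returns to zero after all 2*NR tokens are taken, so the wait-for-zero doubles as both the parent's and the children's barrier without any extra bookkeeping.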
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_alloc_add_and_kill.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_alloc_add_and_kill.c
new file mode 100644
index 000000000000..3cc88b6542bc
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_alloc_add_and_kill.c
@@ -0,0 +1,347 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Flow:
+ * grandchild:
+ *     allocate memory 100 times in a row, free it all, and repeat
+ * child:
+ *     keep killing grandchild processes or creating new ones
+ * Options:
+ *     -n  number of times to kill or create grandchild processes
+ *     -p  processes per group
+ *     -g  number of sharepool groups
+ *     -s  size of each allocation made by a grandchild
+ */
+
+#define NR_GROUP 100
+#define MAX_PROC_PER_GRP 100
+#define NR_HOLD_AREAS 100
+
+static sem_t *child_sync[NR_GROUP];
+static sem_t *grandchild_sync[NR_GROUP];
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int kill_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int grandchild_id)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", grandchild_id, getpid(), ##args)
+ int ret;
+
+	/* wait for the parent to add this process to the group */
+ do {
+ ret = sem_wait(child_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+		pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[NR_HOLD_AREAS] = {0};
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ alloc_infos[i].flag = 0;
+ alloc_infos[i].spg_id = group_id;
+ alloc_infos[i].size = alloc_size;
+ }
+
+ int top = 0;
+ int count = 0;
+ while (1) {
+ struct sp_alloc_info *info = alloc_infos + top++;
+ ret = ioctl_alloc(dev_fd, info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return -1;
+ } else {
+ if (IS_ERR_VALUE(info->addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", info->addr);
+ return -1;
+ }
+ }
+
+ memset((void *)info->addr, 'z', info->size);
+
+ if (top == NR_HOLD_AREAS) {
+ top = 0;
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ ret = ioctl_free(dev_fd, &alloc_infos[i]);
+ if (ret < 0) {
+ pr_local_info("sp_free failed, %d", ret);
+ return ret;
+ }
+ }
+			pr_info("grandchild id:%d completed alloc/free cycle %d (100 areas each).", grandchild_id, ++count);
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_per_group; i++) {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+		} else if (pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ //pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else {
+ //pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+ }
+
+			/* notify the child that the group join succeeded */
+ sem_post(child_sync[arg]);
+
+			/* wait for the child to read its group membership info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+	pr_local_info("group index %d (sp group %d): created %d processes and added them to the group!!", arg, group_id, process_per_group);
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+	/* randomly kill or create processes */
+ for (int i = 0; i < kill_num; i++) {
+ sleep(1);
+		pr_info("group %d: interruption %d, %d left.", group_id, i + 1, kill_num - i - 1);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ childs[idx] = 0;
+ } else {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+			} else if (pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+
+				/* notify the child that the group join succeeded */
+ sem_post(child_sync[arg]);
+
+				/* wait for the child to read its group membership info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage:\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of kill/create iterations
+ kill_num = atoi(optarg);
+ if (kill_num > 100000 || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+			if ((long)alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+		grandchild_sync[i] = sem_open(buf, O_CREAT, 0600, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+		child_sync[i] = sem_open(buf, O_CREAT, 0600, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Allocate memory 100 times in a row, free it all, and loop; meanwhile keep killing processes / creating new ones that join the group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
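The cleanup paths in these stress tests only count a grandchild as passing when waitpid() reports it was terminated by the SIGKILL it was sent (WIFSIGNALED), not when it exited on its own with an error. A minimal sketch of that check, with a hypothetical function name:

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns 1 if the forked child died from the SIGKILL we sent it,
 * 0 if it exited on its own, -1 on fork failure -- the same
 * WIFSIGNALED distinction the cleanup loops above rely on. */
int killed_by_sigkill(void)
{
	pid_t pid = fork();
	int status;

	if (pid < 0)
		return -1;
	if (pid == 0) {
		for (;;)
			pause();	/* stand-in for the grandchild's work loop */
	}
	kill(pid, SIGKILL);
	waitpid(pid, &status, 0);
	return WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL;
}
```

A grandchild that exits voluntarily reports through WIFEXITED/WEXITSTATUS instead, which is exactly the case the tests treat as a failure: the work loop should never return on its own.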
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_max_group_per_process.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_max_group_per_process.c
new file mode 100644
index 000000000000..7218651368ee
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_max_group_per_process.c
@@ -0,0 +1,94 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Jun 08 06:47:40 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <signal.h>
+#include <unistd.h>
+#include <stdlib.h> // for exit
+#include <sys/mman.h>
+#include <pthread.h>
+
+#include "sharepool_lib.h"
+
+#define PROCESS_NR 15
+#define THREAD_NUM 20
+#define MAX_GROUP_PER_PROCESS 3000
+/*
+ * Test steps: multiple processes and threads concurrently join groups with
+ * SPG_ID_AUTO. Expected result: 2999 successful joins in total across all threads.
+ */
+static void *test2_thread(void *arg)
+{
+	int i, ret = 0;
+	struct sp_add_group_info ag_info = {
+		.pid = getpid(),
+		.prot = PROT_READ,
+	};
+
+	for (i = 0; i < MAX_GROUP_PER_PROCESS - 1; i++) {
+		ag_info.spg_id = SPG_ID_AUTO;
+		TEST_CHECK(ioctl_add_group(dev_fd, &ag_info), out);
+	}
+
+out:
+	pr_info("thread%d: returned, %d groups have been added successfully", (int)(long)arg, i);
+	return (void *)(long)i;
+}
+
+static int testcase_route(int idx)
+{
+	int i, ret = 0, sum = 0;
+	void *val;
+	pthread_t th[THREAD_NUM];
+
+	for (i = 0; i < ARRAY_SIZE(th); i++)
+		TEST_CHECK(pthread_create(th + i, NULL, test2_thread, (void *)(long)(i + idx * THREAD_NUM)), out);
+
+	for (i = 0; i < ARRAY_SIZE(th); i++) {
+		TEST_CHECK(pthread_join(th[i], &val), out);
+		sum += (int)(long)val;
+	}
+ }
+
+ if (sum != MAX_GROUP_PER_PROCESS - 1) {
+ pr_info("MAX_GROUP_PER_PROCESS check failed, %d", sum);
+ return -1;
+ }
+
+out:
+ return ret;
+}
+
+static int testcase1(void)
+{
+ int ret = 0, i;
+ pid_t pid[PROCESS_NR];
+
+ for (i = 0; i < ARRAY_SIZE(pid); i++)
+ FORK_CHILD_ARGS(pid[i], testcase_route(i));
+
+ for (i = 0; i < ARRAY_SIZE(pid); i++)
+ WAIT_CHILD_STATUS(pid[i], out);
+
+out:
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE(testcase1, "Threads of the same process concurrently join groups; expect 2999 successful joins in total across all threads")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_alloc_and_add_group.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_alloc_and_add_group.c
new file mode 100644
index 000000000000..201278e884eb
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_alloc_and_add_group.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Feb 5 09:53:12 2021
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include "sharepool_lib.h"
+
+
+#define GROUP_ID 1
+#define REPEAT_TIMES 20
+#define PROC_NUM_1 20
+#define PROC_NUM_2 60
+#define PROT (PROT_READ | PROT_WRITE)
+
+static int testcase1_add_group(int i)
+{
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0)
+ return -1;
+
+	pr_info("%dth process %d added to the group successfully.", i, getpid());
+
+ return 0;
+}
+
+static int testcase1_idle(int i)
+{
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0)
+ return -1;
+
+	pr_info("%dth process %d added to the group successfully, start idling...",
+		i, getpid());
+
+	while (1)
+		;	/* idle forever; the parent kills us */
+
+	return 0;
+}
+
+static int testcase1_alloc_free(int idx)
+{
+ unsigned long addr[REPEAT_TIMES];
+ void *ret_addr;
+ int count = 0;
+ int ret = 0;
+
+ if (wrap_add_group(getpid(), PROT, GROUP_ID) < 0)
+ return -1;
+	pr_info("alloc-child %d added to the group successfully.", idx);
+
+ while (1) {
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+ ret_addr = wrap_sp_alloc(GROUP_ID, PMD_SIZE, 0);
+ if ((unsigned long)ret_addr == -1) {
+ pr_info("alloc failed!!!");
+ return -1;
+ }
+ addr[i] = (unsigned long)ret_addr;
+ }
+ pr_info("alloc-child %d alloc %dth time finished. start to free..",
+ idx, count);
+ sleep(3);
+
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+ ret = wrap_sp_free(addr[i]);
+ if (ret < 0) {
+ pr_info("free failed!!! errno: %d", ret);
+ return ret;
+ }
+ }
+ pr_info("alloc-child %d free %dth time finished. start to alloc..",
+ idx, count);
+ count++;
+ }
+
+ return 0;
+}
+
+/*
+ * Multiple processes join the group and then allocate continuously while new
+ * processes keep joining; check whether this triggers mmap conflicts.
+ */
+static int testcase1(void)
+{
+ int i;
+ int ret = 0;
+ pid_t child_idle[PROC_NUM_1];
+ pid_t child[PROC_NUM_2];
+ pid_t pid;
+
+ sleep(1);
+
+ for (i = 0; i < PROC_NUM_1; i++) {
+ FORK_CHILD_ARGS(child_idle[i], testcase1_idle(i));
+ }
+
+ pid = fork();
+ if (pid == 0)
+ exit(testcase1_alloc_free(0));
+
+ for (i = 0; i < PROC_NUM_2; i++) {
+ sleep(1);
+ FORK_CHILD_ARGS(child[i], testcase1_add_group(i));
+ }
+
+ for (i = 0; i < PROC_NUM_2; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+out:
+ for (i = 0; i < PROC_NUM_1; i++)
+ KILL_CHILD(child_idle[i]);
+ KILL_CHILD(pid);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Multiple processes join the group and allocate continuously while new processes keep joining; expect normal operation")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_process_thread_exit.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_process_thread_exit.c
new file mode 100644
index 000000000000..d8e531139f0c
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_process_thread_exit.c
@@ -0,0 +1,498 @@
+#include <stdlib.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define THREAD_NUM 50
+#define GROUP_NUM 50
+#define GROUP_BASE_ID 1
+
+static pthread_mutex_t mutex;
+static int add_success, add_fail;
+static int group_ids[GROUP_NUM];
+static int semid;
+
+static int query_group(int *group_num)
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ *group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else {
+ return 0;
+ }
+}
+
+#define TIMES 4
+int thread_and_process_helper(int group_id)
+{
+ int ret, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[TIMES] = {0};
+ struct sp_make_share_info u2k_info[TIMES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+	// huge pages
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// huge pages, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal pages, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal pages
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < TIMES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ //pr_info("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+ //pr_info("check vmalloc memory succeess\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+ //pr_info("check vmalloc_hugepage memory succeess\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < TIMES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ //close_device(dev_fd);
+ return 0;
+
+error:
+ //close_device(dev_fd);
+ return -1;
+}
+
+void *thread_query_and_work(void *arg)
+{
+ int ret = 0;
+ int group_num = 0, j = 0;
+
+	while (group_num < GROUP_NUM && j++ < 10) {
+ // query group
+ ret = query_group(&group_num);
+ if (ret < 0) {
+ pr_info("query_group failed.");
+ continue;
+ }
+ for (int i = 0; i < group_num; i++) {
+ ret = thread_and_process_helper(group_ids[i]);
+ if (ret != 0) {
+				pr_info("\nthread %lu finished running with an error, spg_id: %d\n", pthread_self(), group_ids[i]);
+ pthread_exit((void *)1);
+ }
+ }
+ }
+
+ pthread_exit((void *)0);
+}
+
+void *thread_add_group(void *arg)
+{
+ int ret = 0;
+	for (int i = 1; i <= GROUP_NUM; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (pthread_mutex_lock(&mutex) != 0) {
+ pr_info("get pthread mutex failed.");
+ }
+ if (ret < 0)
+ add_fail++;
+ else
+ add_success++;
+ pthread_mutex_unlock(&mutex);
+ }
+ pthread_exit((void *)0);
+}
+
+static int process_routine(void)
+{
+ int ret = 0;
+ // threads for alloc and u2k k2u
+ pthread_t tid1[THREAD_NUM];
+ bool loop[THREAD_NUM];
+ for (int i = 0; i < THREAD_NUM; i++) {
+ loop[i] = false;
+ ret = pthread_create(tid1 + i, NULL, thread_query_and_work, (void *) (loop + i));
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+	// N threads each add M groups; of the N*M attempts, only M shall succeed
+ pthread_t tid2[THREAD_NUM];
+ for (int j = 0; j < 1; j++) {
+ ret = pthread_create(tid2 + j, NULL, thread_add_group, NULL);
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+
+ // wait for add_group threads to return
+ for (int j = 0; j < 1; j++) {
+ void *tret;
+ ret = pthread_join(tid2[j], &tret);
+ if (ret < 0) {
+ pr_info("thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("thread %d failed.", j);
+ ret = -1;
+ } else {
+ pr_info("add group thread %d return success!!", j);
+ }
+ }
+
+ // wait for work threads to return
+ for (int i = 0; i < THREAD_NUM; i++) {
+ void *tret;
+ ret = pthread_join(tid1[i], &tret);
+ if (ret < 0) {
+ pr_info("thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("thread %d failed.", i);
+ ret = -1;
+ } else {
+ pr_info("work thread %d return success!!", i);
+ }
+ }
+
+ return ret;
+}
+
+#define PROCESS_NUM 20
+/* testcase1: multiple processes join multiple groups, routinely call all APIs, then exit concurrently */
+static int testcase1(void)
+{
+ int ret = 0;
+
+ const int semid = sem_create(1234, "concurrent");
+ // fork child processes, they should not copy parent's group
+ int childs[PROCESS_NUM];
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+ ret = process_routine();
+ sem_inc_by_one(semid);
+ sem_check_zero(semid);
+ exit(ret);
+ }
+ childs[k] = pid;
+ }
+
+ // wait until all processes finished add group thread and work thread
+ sem_dec_by_val(semid, PROCESS_NUM);
+
+	for (int k = 0; k < PROCESS_NUM; k++) {
+		int status = 0;
+		if (waitpid(childs[k], &status, 0) < 0) {
+			pr_info("waitpid failed");
+			ret = -1;
+		}
+		if (status != 0) {
+			pr_info("child process %d pid %d exit unexpected", k, childs[k]);
+			ret = -1;
+		}
+		childs[k] = 0;
+	}
+
+	// check /proc/sharepool/proc_stat: there should be only one spg left
+
+	sem_close(semid);
+	return ret;
+}
+
+void *thread_exit(void *arg)
+{
+ sem_check_zero(semid);
+ pthread_exit((void *)0);
+}
+
+/* testcase2: multiple processes join multiple groups, routinely call all APIs, then let their threads exit concurrently */
+static int testcase2(void)
+{
+ int ret;
+
+ semid = sem_create(1234, "concurrent");
+ // fork child processes, they should not copy parent's group
+ int childs[PROCESS_NUM];
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed.");
+ return -1;
+ } else if (pid == 0) {
+
+ ret = process_routine(); // add group and work
+
+			// threads exit concurrently
+ pthread_t tid[THREAD_NUM];
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread_exit, NULL);
+ if (ret < 0) {
+ pr_info("thread create failed.");
+ return -1;
+ }
+ }
+ sem_inc_by_one(semid);
+ for (int j = 0; j < THREAD_NUM; j++) {
+ void *tret;
+ ret = pthread_join(tid[j], &tret);
+ if (ret < 0) {
+ pr_info("exit thread join failed.");
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("exit thread %d failed.", j);
+ ret = -1;
+ } else {
+ pr_info("exit thread %d return success!!", j);
+ }
+ }
+
+ exit(ret);
+ }
+ childs[k] = pid;
+ }
+
+ // wait until all processes finished add group thread and work thread
+ sleep(5);
+ sem_dec_by_val(semid, PROCESS_NUM);
+
+	ret = 0;
+	for (int k = 0; k < PROCESS_NUM; k++) {
+		int status = 0;
+		if (waitpid(childs[k], &status, 0) < 0) {
+			pr_info("waitpid failed");
+			ret = -1;
+		}
+		if (status != 0) {
+			pr_info("child process %d pid %d exit unexpected", k, childs[k]);
+			ret = -1;
+		}
+		childs[k] = 0;
+	}
+
+	// check /proc/sharepool/proc_stat: there should be only one spg left
+
+	sem_close(semid);
+	return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Multiple processes exit concurrently after finishing the routine")
+	TESTCASE_CHILD(testcase2, "Multiple threads exit concurrently after finishing the routine")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_thread_add_group.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_thread_add_group.c
new file mode 100644
index 000000000000..7660978f7e08
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_mult_thread_add_group.c
@@ -0,0 +1,220 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Feb 5 09:53:12 2021
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include "sharepool_lib.h"
+
+
+#define GROUP_ID 1
+#define REPEAT_TIMES 5
+#define THREAD_NUM 20
+
+/*
+ * testcase1: fork 5 child processes, each creating 20 threads that try to join
+ * the group. Expect only one thread per child process to join successfully.
+ */
+
+static void *thread_add_group(void *arg)
+{
+ int ret = 0;
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret == 0)
+ return 0;
+ else if (ret < 0 && errno == EEXIST) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ ret = 1;
+ } else {
+ ret = -1;
+ }
+
+	return (void *)(long)ret;
+}
+
+static int child_process(void)
+{
+ int i;
+ int group_id;
+ int ret = 0;
+ void *tret;
+ pthread_t tid[THREAD_NUM];
+ int failure = 0, success = 0;
+
+ for (i = 0; i < THREAD_NUM; i++) {
+ ret = pthread_create(tid + i, NULL, thread_add_group, NULL);
+ if (ret != 0) {
+ pr_info("create thread %d error\n", i);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < THREAD_NUM; i++) {
+ ret = pthread_join(tid[i], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", i);
+ return -1;
+ }
+ if ((long)tret == 0) {
+ success++;
+		} else if ((long)tret == 1) {
+ failure++;
+ } else {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", i);
+ return -1;
+ }
+ }
+
+ pid_t pid = getpid();
+ group_id = ioctl_find_first_group(dev_fd, pid);
+ if (group_id != GROUP_ID) {
+ printf("query group id is %d, but expected group id is %d\n",
+ group_id, GROUP_ID);
+ return -1;
+ }
+
+ if (success != 1 || success + failure != THREAD_NUM) {
+ pr_info("testcase failed, success %d times, fail %d times",
+ success, failure);
+ return -1;
+ }
+
+ return 0;
+}
+
+/* join a group and then exit the group */
+static int child_process_multi_add_exit(void)
+{
+ int ret = 0;
+	int cnt = 0;
+
+	while (cnt < 600) {
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ sleep(1);
+
+ struct sp_del_from_group_info info = {
+ .pid = pid,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_del_from_group(dev_fd, &info);
+ if (ret < 0) {
+ pr_info("ioctl_del_group failed, errno: %d", errno);
+ }
+
+ cnt++;
+ }
+ return 0;
+}
+
+/*
+ * A single process creates a batch of threads that concurrently try to join the
+ * same share pool group; apart from the one thread that joins successfully, the
+ * others should return -EEXIST.
+ */
+static int testcase1(void)
+{
+ int i;
+ int ret = 0;
+ pid_t pid;
+
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child_process());
+ }
+
+ int status;
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("thread add group test, time %d failed", i + 1);
+ ret = -1;
+ break;
+ } else {
+ pr_info("thread add group test, time %d success", i + 1);
+ }
+ }
+
+ return ret;
+}
+
+static int testcase2(void)
+{
+ int i;
+ int ret = 0;
+ pid_t pid = getpid();
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = GROUP_ID,
+ };
+
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ }
+
+ for (i = 0; i < 999; i++) {
+ pid = fork();
+
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+
+ exit(child_process_multi_add_exit());
+ }
+ }
+
+ int status;
+ wait(&status);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Threads of the same process concurrently join the same share group; expect only one to succeed")
+	TESTCASE_CHILD(testcase2, "Threads of the same process concurrently join and leave the same share group while the /proc_show interface is queried")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_u2k_add_and_kill.c b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_u2k_add_and_kill.c
new file mode 100644
index 000000000000..dd42e674f402
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_add_group_test/test_u2k_add_and_kill.c
@@ -0,0 +1,358 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 27 13:45:03 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * group_num groups with process_per_group processes each; every grandchild
+ * process loops forever doing: sp_alloc -> u2k -> kernel reads the memory -> unshare -> sp_free.
+ * After spawning all grandchildren, each child repeats kill_num times: kill a
+ * grandchild or create a new one, keeping the total within process_per_group.
+ */
+
+#define MAX_GROUP 500
+#define MAX_PROC_PER_GRP 500
+#define MAX_KILL 100000
+
+static sem_t *child_sync[MAX_GROUP];
+static sem_t *grandchild_sync[MAX_GROUP];
+
+static int group_num = 100;
+static int process_per_group = 100;
+static int kill_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+	/* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync[arg / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[arg / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_group_by_pid failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = alloc_size,
+ };
+
+ while (1) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return ret;
+ } else {
+ if (IS_ERR_VALUE(alloc_info.addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_info.addr);
+ return -1;
+ }
+ }
+ memset((void *)alloc_info.addr, 'r', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'r',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_local_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+	// fork all child processes of this group in turn
+ for (int i = 0; i < process_per_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+			// add the child process to the group; the comment-only dangling
+			// "else" here previously made sem_post() the else body
+			ag_info.pid = pid;
+			ret = ioctl_add_group(dev_fd, &ag_info);
+			if (ret < 0) {
+				pr_local_info("add grandchild%d to group %d failed", num, group_id);
+				goto out;
+			}
+
+			/* tell the child it has been added to the group */
+			sem_post(child_sync[arg]);
+
+			/* wait until the child has picked up its group info */
+			do {
+				ret = sem_wait(grandchild_sync[arg]);
+			} while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+
+	/* randomly kill or create processes */
+ for (int i = 0; i < kill_num; i++) {
+ printf("kill/create %dth time.\n", i);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+			// kill the process in this slot
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ printf("We killed %dth process %d!\n", idx, childs[idx]);
+ childs[idx] = 0;
+ } else {
+ printf("We are going to create a new process.\n");
+			// this slot's process was killed; fork a new one to fill it
+ int num = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+				/* tell the child it has been added to the group */
+ sem_post(child_sync[arg]);
+
+				/* wait until the child has picked up its group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage:./test_multi_u2k2 -g group_num -p proc_num -n kill_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of kill/create iterations
+ kill_num = atoi(optarg);
+ if (kill_num > MAX_KILL || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+			if ((long)alloc_size <= 0) { /* alloc_size is unsigned; catch negative atol results */
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+	int ret = 0;
+ int status = 0;
+ pid_t childs[MAX_GROUP];
+
+	// create the per-group sync semaphores for the grandchildren
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// create the per-group sync semaphores for the children
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, O_RDWR, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// fork one child process per group
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ pr_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+	// reap the per-group child processes
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Loop sp_alloc -> u2k -> kernel reads memory -> unshare -> sp_free while killing processes / creating new processes that join the group")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_add_group_and_print.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_add_group_and_print.c
new file mode 100644
index 000000000000..3a070bc514ec
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_add_group_and_print.c
@@ -0,0 +1,182 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Execution flow:
+ * child0 & child1: repeatedly cat /proc/sharepool/proc_stat and spa_stat, expecting success
+ * child2 & child3: keep forking new processes into the group, expecting success and no deadlock
+ * proc_work: a newly forked process does all its work after joining the group
+ */
+
+#define PROC_ADD_GROUP 200
+#define PROT_ADD_GROUP (PROT_READ | PROT_WRITE)
+
+int proc_work(void)
+{
+	while (1)
+		;	/* idle until killed */
+}
+
+int start_printer(void)
+{
+ while(1) {
+ pr_info("printer working.");
+ usleep(1000);
+ sharepool_log("sharepool_log");
+ // sharepool_print();
+ }
+ return 0;
+}
+
+int start_creator(int spg_id)
+{
+ int ret = 0;
+ int i = 0;
+ int pid = 0;
+ pid_t child[PROC_ADD_GROUP];
+ memset(child, 0, sizeof(pid_t) * PROC_ADD_GROUP);
+
+ // Join the group
+ TEST_CHECK(wrap_add_group(getpid(), PROT_ADD_GROUP, spg_id), out);
+
+ // Spawn child processes and add each to the group
+ for (i = 0; i < PROC_ADD_GROUP;) {
+ FORK_CHILD_ARGS(pid, proc_work());
+ child[i++] = pid;
+ TEST_CHECK(wrap_add_group(pid, PROT_ADD_GROUP, spg_id), out);
+ pr_info("group %d: process %d added successfully", spg_id, i);
+ }
+out:
+ for (int j = 0; j < i; j++)
+ KILL_CHILD(child[j]);
+
+ pr_info("%s exiting. ret: %d", __FUNCTION__, ret);
+ return ret < 0 ? ret : 0;
+}
+
+static int testcase1(void)
+{
+ int ret = 0;
+ int pid;
+ int status;
+ pid_t child[4];
+
+ // Start the printer processes
+ for (int i = 0; i < 2; i++) {
+ FORK_CHILD_ARGS(pid, start_printer());
+ child[i] = pid;
+ }
+
+ // Start the group-join processes
+ for (int i = 2; i < 4; i++) {
+ pr_info("creating process");
+ FORK_CHILD_ARGS(pid, start_creator(i)); // use i as spg_id
+ child[i] = pid;
+ }
+
+ // Reap the group-join processes
+ for (int i = 2; i < 4; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+ pr_info("creators finished.");
+out:
+ for (int i = 0; i < 2; i++) {
+ kill(child[i], SIGKILL);
+ waitpid(child[i], &status, 0);
+ }
+ pr_info("printers killed.");
+
+ return ret;
+}
+#define GROUP_NUM 1000
+int start_group_creators(int use_auto)
+{
+ int ret = 0;
+ int id = 1;
+
+ if (use_auto % 2 == 0) {
+ while (id <= GROUP_NUM) {
+ TEST_CHECK(wrap_add_group(getpid(), PROT_ADD_GROUP, id), out);
+ pr_info("add to group %d success\n", id);
+ id++;
+ }
+ } else {
+ while (id <= GROUP_NUM) {
+ TEST_CHECK(wrap_add_group(getpid(), PROT_ADD_GROUP, SPG_ID_AUTO), out);
+ pr_info("add to auto group success, iteration %d\n", id);
+ id++;
+ }
+ }
+
+out:
+ return ret < 0 ? ret : 0;
+}
+
+static int testcase2(void)
+{
+ int ret = 0;
+ int pid;
+ int status;
+ pid_t child[4];
+
+ // Start the printer processes
+ for (int i = 0; i < 2; i++) {
+ FORK_CHILD_ARGS(pid, start_printer());
+ child[i] = pid;
+ }
+
+ // Start the group-creation processes
+ for (int i = 2; i < 4; i++) {
+ pr_info("creating add group process");
+ FORK_CHILD_ARGS(pid, start_group_creators(i)); // even: auto id, odd: use id
+ child[i] = pid;
+ }
+
+ // Reap the group creators
+ for (int i = 2; i < 4; i++)
+ WAIT_CHILD_STATUS(child[i], out);
+ pr_info("group creators finished.");
+
+out:
+ for (int i = 0; i < 2; i++) {
+ kill(child[i], SIGKILL);
+ waitpid(child[i], &status, 0);
+ }
+ pr_info("printers killed.");
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Concurrent group join and debug printing (specified group ID)")
+ TESTCASE_CHILD(testcase2, "Concurrent group join and debug printing (auto ID)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_concurrent_debug.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_concurrent_debug.c
new file mode 100644
index 000000000000..09cbe67d19cd
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_concurrent_debug.c
@@ -0,0 +1,359 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Execution flow:
+ * grandchild:
+ *     allocate memory 100 times in a row, then free it all, in a loop
+ * child:
+ *     repeatedly kill or create grandchild processes
+ * Options:
+ *     -n  number of times to kill or create grandchild processes
+ *     -p  number of processes per group
+ *     -g  number of sharepool groups
+ *     -s  size of each grandchild memory allocation
+ */
+
+#define NR_GROUP 100
+#define MAX_PROC_PER_GRP 100
+#define NR_HOLD_AREAS 100
+
+static sem_t *child_sync[NR_GROUP];
+static sem_t *grandchild_sync[NR_GROUP];
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int kill_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int grandchild_id)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", grandchild_id, getpid(), ##args)
+ int ret;
+
+ /* Wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ //pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[grandchild_id / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_group_by_pid failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_infos[NR_HOLD_AREAS] = {0};
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ alloc_infos[i].flag = 0;
+ alloc_infos[i].spg_id = group_id;
+ alloc_infos[i].size = alloc_size;
+ }
+
+ int top = 0;
+ int count = 0;
+ while (1) {
+ struct sp_alloc_info *info = alloc_infos + top++;
+ ret = ioctl_alloc(dev_fd, info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ return -1;
+ } else {
+ if (IS_ERR_VALUE(info->addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", info->addr);
+ return -1;
+ }
+ }
+
+ memset((void *)info->addr, 'z', info->size);
+
+ if (top == NR_HOLD_AREAS) {
+ top = 0;
+ for (int i = 0; i < NR_HOLD_AREAS; i++) {
+ ret = ioctl_free(dev_fd, &alloc_infos[i]);
+ if (ret < 0) {
+ pr_local_info("sp_free failed, %d", ret);
+ return ret;
+ }
+ }
+ pr_info("grandchild process id:%d finished %dth alloc-free-100times-run.", grandchild_id, ++count);
+ }
+ }
+
+ //pr_local_info("exit!!");
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ //pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_per_group; i++) {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ //pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else {
+ //pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+ }
+
+ /* Notify the child that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* Wait for the child to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+ pr_local_info("%dth sp group %d, create %d processes and add group success!!", arg, group_id, process_per_group);
+
+ unsigned int seed = time(NULL);
+ srand(seed);
+ pr_local_info("rand seed: %u\n", seed);
+ /* Randomly kill or create processes */
+ for (int i = 0; i < kill_num; i++) {
+ sleep(1);
+ pr_info("group %d %dth interruption, %d times left.", group_id, i + 1, kill_num - i - 1);
+ int idx = rand() % process_per_group;
+
+ if (childs[idx] > 0) {
+ kill(childs[idx], SIGKILL);
+ waitpid(childs[idx], NULL, 0);
+ childs[idx] = 0;
+ } else {
+ int grandchild_id = arg * MAX_PROC_PER_GRP + idx;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+ } else if(pid == 0) {
+ ret = grandchild_process(grandchild_id);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", grandchild_id, pid);
+ childs[idx] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", grandchild_id, group_id);
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", grandchild_id, group_id);
+
+ /* Notify the child that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* Wait for the child to pick up the group info */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+ }
+
+out:
+ for (int i = 0; i < process_per_group; i++) {
+ if (!childs[i])
+ continue;
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], &status, 0);
+ if (!WIFSIGNALED(status)) {
+ pr_local_info("grandchild%d test failed", arg * MAX_PROC_PER_GRP + i);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+ printf("Usage: [-p procs_per_group] [-g groups] [-n kill_count] [-s alloc_size]\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+ case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+ case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+ case 'n': // number of kill/create iterations
+ kill_num = atoi(optarg);
+ if (kill_num > 100000 || kill_num <= 0) {
+ printf("kill number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if ((long)alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+ pid_t pid;
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ pid = fork();
+ if (pid == 0) {
+ while(1) {
+ usleep(200);
+ sharepool_log("sharepool_log");
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, 0666, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, 0666, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ status = (char)WEXITSTATUS(status);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ kill(pid, SIGKILL);
+ waitpid(pid, &status, 0);
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Continuously alloc -> free memory while printing debug stats in a loop")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_debug_loop.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_debug_loop.c
new file mode 100644
index 000000000000..0c4368244bf9
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_debug_loop.c
@@ -0,0 +1,43 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <time.h>
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+static int testcase1(void)
+{
+ while(1) {
+ usleep(100);
+ sharepool_print();
+ }
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Debug printing (infinite loop; do not run standalone)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_proc_interface_process.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_proc_interface_process.c
new file mode 100644
index 000000000000..fa10c06f1577
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_proc_interface_process.c
@@ -0,0 +1,636 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sun Jan 31 14:42:01 2021
+ */
+#include <pthread.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+
+#define GROUP_ID 1
+#define TIMES 4
+
+#define REPEAT_TIMES 10
+#define THREAD_NUM 30
+#define PROCESS_NUM 30
+
+void *thread_k2u_task(void *arg)
+{
+ int fd, ret;
+ pid_t pid = getpid();
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ struct sp_make_share_info k2u_info = {0}, k2u_huge_info = {0};
+ char *addr;
+
+ //pr_info("enter thread, pid is %d, thread id is %lu\n\n", pid, pthread_self());
+
+ fd = open_device();
+ if (fd < 0) {
+ pr_info("open fd error\n");
+ return NULL;
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = SP_DVPP;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = SPG_ID_DEFAULT;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = 0;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = SPG_ID_DEFAULT;
+
+ /* k2u */
+ ret = ioctl_k2u(fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+ //pr_info("check vmalloc memory succeess\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+ //pr_info("check vmalloc_hugepage memory succeess\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ //pr_info("\nfinish running thread %lu\n", pthread_self());
+ close_device(fd);
+ pthread_exit((void *)0);
+
+error:
+ close_device(fd);
+ pthread_exit((void *)1);
+}
+
+/*
+ * alloc - u2k - vmalloc - k2u - unshare - vfree - unshare - free
+ * Test hugepages, hugepage DVPP, small pages, and small-page DVPP
+ */
+void *thread_and_process_helper(void)
+{
+ int fd, ret, group_id, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[TIMES] = {0};
+ struct sp_make_share_info u2k_info[TIMES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ fd = open_device();
+ if (fd < 0) {
+ pr_info("open fd error\n");
+ return NULL;
+ }
+
+ /* check sp group */
+ pid = getpid();
+ group_id = ioctl_find_first_group(fd, pid);
+ if (group_id != GROUP_ID) {
+ pr_info("query group id is %d, but expected group id is %d\n",
+ group_id, GROUP_ID);
+ goto error;
+ }
+
+ // hugepage
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = GROUP_ID;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+ // hugepage DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = GROUP_ID;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+ // normal page DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = GROUP_ID;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+ // normal page
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = GROUP_ID;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ for (i = 0; i < TIMES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k return err is %ld.\n", u2k_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("u2k return addr %lx, check memory content succ.\n",
+ // u2k_info[i].addr);
+ }
+ }
+ //pr_info("\n");
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = SPG_ID_DEFAULT;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = SPG_ID_DEFAULT;
+
+ /* k2u */
+ ret = ioctl_k2u(fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+ //pr_info("check vmalloc memory succeess\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+ //pr_info("check vmalloc_hugepage memory succeess\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ for (i = 0; i < TIMES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ close_device(fd);
+ return 0;
+
+error:
+ close_device(fd);
+ return -1;
+}
+
+void *thread(void *arg)
+{
+ int ret;
+ //pr_info("enter thread, pid is %d, thread id is %lu\n\n", getpid(), pthread_self());
+
+ ret = thread_and_process_helper();
+ if (ret == 0) {
+ //pr_info("\nfinish running thread %lu\n", pthread_self());
+ pthread_exit((void *)0);
+ } else {
+ pr_info("\nthread %lu finish running with error\n", pthread_self());
+ pthread_exit((void *)1);
+ }
+}
+
+static int child_process()
+{
+ int fd, ret;
+ struct sp_add_group_info ag_info = {0};
+ pid_t pid = getpid();
+
+ //pr_info("enter process %d\n\n", pid);
+
+ fd = open_device();
+ if (fd < 0) {
+ return -1;
+ }
+
+ ag_info.pid = pid;
+ ag_info.spg_id = GROUP_ID;
+ ag_info.prot = PROT_READ | PROT_WRITE;
+ ret = ioctl_add_group(fd, &ag_info);
+ if (ret < 0) {
+ close_device(fd);
+ return -1;
+ }
+
+ //pr_info("ioctl add group pid is %d, spg_id is %d\n", pid, ag_info.spg_id);
+ ret = thread_and_process_helper();
+
+ close_device(fd);
+ if (ret == 0) {
+ //pr_info("\nfinish running process %d\n", pid);
+ return 0;
+ } else {
+ pr_info("\nprocess %d finish running with error\n", pid);
+ return -1;
+ }
+ return 0;
+}
+
+static int testcase1(void)
+{
+ int fd;
+ int ret = 0;
+ int status = 0;
+ int sleep_interval = 3;
+ int i, j, k;
+ struct sp_add_group_info ag_info = {0};
+ pthread_t tid[THREAD_NUM];
+ void *tret;
+ pid_t pid_child;
+ pid_t childs[PROCESS_NUM];
+
+ // Create THREAD_NUM threads running the k2u-only task; repeat REPEAT_TIMES times.
+ pr_info("\nmain process begins thread_k2u_task test\n");
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("thread k2task %dth test, %d times in total.", i + 1, REPEAT_TIMES);
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread_k2u_task, NULL);
+ if (ret != 0) {
+ pr_info("create thread %d error\n", j);
+ goto finish;
+ }
+ }
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_join(tid[j], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", j);
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+ sleep(sleep_interval);
+ }
+ if (ret) {
+ goto finish;
+ }
+ pr_info("\nthread_k2task test success!!\n");
+ sleep(3);
+
+ fd = open_device();
+ if (fd < 0) {
+ pr_info("open fd error\n");
+ return -1;
+ }
+
+ // add group
+ pid_t pid = getpid();
+ ag_info.pid = pid;
+ ag_info.spg_id = GROUP_ID;
+ ag_info.prot = PROT_READ | PROT_WRITE;
+ ret = ioctl_add_group(fd, &ag_info);
+ if (ret < 0) {
+ pr_info("add group failed, errno: %d", ret);
+ close_device(fd);
+ return -1;
+ }
+ pr_info("\nioctl add group pid is %d, spg_id is %d\n", pid, ag_info.spg_id);
+
+ // Create THREAD_NUM threads running the mixed u2k+k2u task; repeat REPEAT_TIMES times.
+ pr_info("\nmain process begins thread test\n");
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("thread u2k+k2u %dth test, %d times in total.", i + 1, REPEAT_TIMES);
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ ret = -1;
+ goto finish;
+ }
+ }
+
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_join(tid[j], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", j);
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+
+ sleep(sleep_interval);
+ }
+ if (ret) {
+ goto finish;
+ }
+ pr_info("\nthread u2k+k2u test success!!\n");
+ sleep(3);
+
+ // Create PROCESS_NUM processes running the mixed u2k+k2u task; repeat REPEAT_TIMES times.
+ pr_info("\nmain process begins process test\n");
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("process u2k+k2u %dth test, %d times in total.", i + 1, REPEAT_TIMES);
+ for (j = 0; j < PROCESS_NUM; j++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+ ret = child_process();
+ exit(ret);
+ } else {
+ childs[j] = pid_child;
+ pr_info("fork child%d, pid: %d", j, pid_child);
+ }
+ }
+
+ for (int j = 0; j < PROCESS_NUM; j++) {
+ waitpid(childs[j], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", j, status);
+ ret = -1;
+ }
+ }
+
+ sleep(sleep_interval);
+ }
+ if (ret) {
+ goto finish;
+ }
+ pr_info("\nprocess u2k+k2u test success!!\n");
+ sleep(3);
+
+ // Create PROCESS_NUM processes and THREAD_NUM threads running the mixed u2k+k2u task; repeat REPEAT_TIMES times.
+ pr_info("\nmain process begins process and thread mix test\n");
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("process+thread u2k+k2u %dth test, %d times in total.", i + 1, REPEAT_TIMES);
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ ret = -1;
+ goto finish;
+ }
+ }
+
+ for (k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+ ret = child_process();
+ exit(ret);
+ } else {
+ childs[k] = pid_child;
+ pr_info("fork child%d, pid: %d", k, pid_child);
+ }
+ }
+
+ for (j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_join(tid[j], &tret);
+ if (ret != 0) {
+ pr_info("can't join thread %d\n", j);
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ waitpid(childs[k], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", k, status);
+ ret = -1;
+ }
+ }
+
+ sleep(sleep_interval);
+ }
+ pr_info("\nprocess+thread u2k+k2u test success!!\n");
+ sleep(3);
+
+finish:
+ if (!ret) {
+ pr_info("testcase execution is successful\n");
+ } else {
+ pr_info("testcase execution failed\n");
+ }
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Multi-process/multi-thread loop of alloc - u2k - vmalloc - k2u - unshare - vfree - unshare - free (hugepage/small page/dvpp)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_statistics_stress.c b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_statistics_stress.c
new file mode 100644
index 000000000000..00da4ca55ac0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_debug_test/test_statistics_stress.c
@@ -0,0 +1,302 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 20 01:38:40 2020
+ */
+
+#include <errno.h>
+#include <fcntl.h> /* For O_* constants */
+#include <semaphore.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/stat.h> /* For mode constants */
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+static int semid;
+#define PROC_NUM 1023
+#define GROUP_NUM 2999
+
+// A process joins 2999 share groups, allocates DVPP memory, and checks spa_stat; expected to succeed
+static int testcase1(void)
+{
+ int ret = 0, i, spg_id;
+ struct sp_alloc_info alloc_info[GROUP_NUM + 1];
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ // Join 2999 share groups and allocate memory in each
+ for (i = 0; i < GROUP_NUM + 1; i++) {
+ alloc_info[i].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[i].size = 4;
+ alloc_info[i].spg_id = i;
+ /* 1. Userspace process A joins the group */
+ if (i != 0) {
+ ag_info.spg_id = i;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+ }
+ /* 2. Allocate DVPP shared memory */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ pr_info("Alloc DVPP memory finished.\n");
+
+ // Dump /proc/sharepool/spa_stat
+ cat_attr("/proc/sharepool/spa_stat");
+ sleep(6);
+
+ // Free all allocated memory
+ for (i = 0; i < GROUP_NUM + 1; i++) {
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+ pr_info("Free DVPP memory finished.\n");
+
+ // Dump /proc/sharepool/spa_stat again
+ cat_attr("/proc/sharepool/spa_stat");
+ sleep(3);
+
+ return 0;
+}
+
+static int addgroup(int index)
+{
+ int ret = 0;
+
+ for (int i = 1; i < GROUP_NUM + 1; i++) {
+ /* Each child joins every group */
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i);
+ if (ret < 0) {
+ pr_info("process %d failed add group %d, ret: %d, errno: %d",
+ getpid(), i, ret, errno);
+ sem_inc_by_one(semid);
+ return -1;
+ }
+ }
+ pr_info("process %d added %d groups successfully", index, GROUP_NUM);
+ sem_inc_by_one(semid);
+ while (1) {
+ /* idle until killed */
+ }
+
+ return 0;
+}
+
+
+
+/* N processes join 2999 groups, allocate DVPP memory, and print proc_stat; expected to succeed.
+ * Do not make N too large, or the test becomes slow. */
+static int testcase2(void)
+{
+ int ret = 0;
+ pid_t child[PROC_NUM];
+ unsigned long size = 4096;
+ int proc_num = 1000;
+
+ semid = sem_create(2345, "wait all child add group finish");
+ // N-1 child processes join the groups
+ for (int i = 0; i < proc_num - 1; i++)
+ FORK_CHILD_ARGS(child[i], addgroup(i));
+ sem_dec_by_val(semid, proc_num - 1);
+ pr_info("add child proc to all groups finished.\n");
+
+ // The test process joins the groups, then allocates
+ for (int i = 1; i < GROUP_NUM + 1; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i);
+ if (ret < 0)
+ goto out;
+ }
+ ret = 0;
+ pr_info("add main proc to all groups finished.\n");
+
+ for (int i = 1; i < GROUP_NUM + 1; i++)
+ wrap_sp_alloc(i, size, SP_DVPP);
+ pr_info("main proc allocated in all groups successfully");
+
+ // Give all processes time to map the memory
+ pr_info("Let the groups map memory into all children...");
+ sleep(5);
+
+ // Dump /proc/sharepool/proc_stat
+ cat_attr("/proc/sharepool/proc_stat");
+
+ for (int i = 0; i < proc_num; i++)
+ KILL_CHILD(child[i]);
+
+ sleep(5);
+ // Dump /proc/sharepool/proc_stat again
+ cat_attr("/proc/sharepool/proc_stat");
+
+out:
+ sem_close(semid);
+ return ret;
+}
+
+/* 1023 processes join one share group, allocate dvpp memory and print proc_overview; expected to succeed */
+static int testcase3(void)
+{
+ int ret, spg_id = 1;
+ unsigned long addr;
+ pid_t child[PROC_NUM];
+ pid_t pid;
+ unsigned long size = 4096;
+
+ semid = sem_create(1234, "1023childs");
+
+ // N child processes
+ for (int i = 0; i < PROC_NUM; i++) {
+ pid = fork();
+ if (pid == 0) {
+ /* The child joins the group */
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, spg_id);
+ if (ret < 0) {
+ pr_info("child %d add group failed!", i);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ /* The child allocates memory */
+ if (i == 0) {
+ addr = (unsigned long)wrap_sp_alloc(spg_id, size, SP_DVPP);
+ if (addr == -1) {
+ pr_info("child %d alloc dvpp memory failed!", i);
+ sem_inc_by_one(semid);
+ exit(-1);
+ }
+ }
+ sem_inc_by_one(semid);
+ pr_info("child %d created successfully!", i + 1);
+ /* The child idles */
+ while (1) {
+
+ }
+ exit(0);
+ }
+ child[i] = pid;
+ }
+
+ /* Wait for all N children to join the group */
+ sem_dec_by_val(semid, PROC_NUM);
+ sleep(8);
+ /* Dump the statistics; expect N processes, each with SP_RES equal to size */
+ cat_attr("/proc/sharepool/proc_overview");
+
+ /* Kill the children */
+ for (int i = 0; i < PROC_NUM; i++) {
+ KILL_CHILD(child[i]);
+ pr_info("child %d killed", i + 1);
+ }
+ pr_info("All child process killed.\n");
+
+ // Dump /proc/sharepool/proc_overview again; expect it to be empty
+ cat_attr("/proc/sharepool/proc_overview");
+ sleep(5);
+
+ sem_close(semid);
+ return 0;
+}
+
+// A process joins 2999 share groups, allocates dvpp memory and checks spa_stat; expected to succeed
+static int testcase4(void)
+{
+ int ret = 0, i, spg_id;
+ char attr[SIZE];
+ char cpid[SIZE];
+ struct sp_alloc_info alloc_info[GROUP_NUM + 1];
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+
+ // Join 2999 share groups and allocate memory in each
+ for (i = 0; i < GROUP_NUM + 1; i++) {
+ alloc_info[i].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[i].size = 4;
+ alloc_info[i].spg_id = i;
+ /* 1. Userspace process A joins the group */
+ if (i != 0) {
+ ag_info.spg_id = i;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("failed add group, ret: %d, errno: %d", ret, errno);
+ return -1;
+ }
+ }
+ /* 2. Allocate DVPP shared memory */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return -1;
+ }
+ }
+
+ pr_info("Alloc DVPP memory finished.\n");
+ sleep(6);
+
+ // Build and dump /proc/<pid>/sp_group (attr must be written before use)
+ snprintf(attr, sizeof(attr), "/proc/%d/sp_group", getpid());
+ pr_info("attribute is %s", attr);
+ cat_attr(attr);
+
+ // Free all of the allocated memory
+ for (i = 0; i < GROUP_NUM + 1; i++) {
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("free group failed, errno: %d", errno);
+ return -1;
+ }
+ }
+ pr_info("Free DVPP memory finished.\n");
+ sleep(3);
+
+ // Dump /proc/<pid>/sp_group again
+ cat_attr(attr);
+
+ return 0;
+}
+
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "One process joins 2999 share groups and allocates dvpp shared memory; cat /proc/sharepool/spa_stat to check the allocations; expect normal output.")
+ TESTCASE_CHILD(testcase2, "Multiple processes join 2999 share groups and allocate dvpp shared memory; cat /proc/sharepool/proc_stat to check the allocations; the dump may hit a softlockup but is expected to complete after a while.")
+ TESTCASE_CHILD(testcase3, "1023 processes join the same share group and allocate dvpp shared memory; cat /proc/sharepool/proc_overview to check the allocations; expect normal output.")
+ TESTCASE_CHILD(testcase4, "One process joins 2999 share groups and allocates dvpp shared memory; cat /proc/<pid>/sp_group to check the allocations; expect normal output.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_k2u.c b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_k2u.c
new file mode 100644
index 000000000000..1d7ff1116631
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_k2u.c
@@ -0,0 +1,855 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Nov 30 02:09:42 2020
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/shm.h>
+#include <sys/sem.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include "sem_use.h"
+
+#include "sharepool_lib.h"
+
+
+
+/*
+ * After the kernel allocates shared memory, k2u shares it into a single
+ * userspace process, whose threads read it concurrently through the uva.
+ */
+#define TEST1_THREAD_NUM 100
+#define TEST1_K2U_NUM 100
+static struct sp_make_share_info testcase1_k2u_info[TEST1_K2U_NUM];
+
+static void *testcase1_thread(void *arg)
+{
+ for (int i = 0; i < TEST1_K2U_NUM; i++) {
+ char *buf = (char *)testcase1_k2u_info[i].addr;
+ for (int j = 0; j < testcase1_k2u_info[i].size; j++)
+ if (buf[j] != 'm') {
+ pr_info("area check failed, i:%d, j:%d, buf[j]:%d", i, j, buf[j]);
+ return (void *)-1;
+ }
+ }
+
+ return NULL;
+}
+
+static int testcase1(void)
+{
+ int ret, i, j;
+ void *thread_ret = NULL;
+ pthread_t threads[TEST1_THREAD_NUM];
+
+ struct vmalloc_info ka_info = {
+ .size = 3 * PAGE_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vmalloc failed");
+ return -1;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'm',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out_free;
+ }
+
+ for (i = 0; i < TEST1_K2U_NUM; i++) {
+ testcase1_k2u_info[i].kva = ka_info.addr;
+ testcase1_k2u_info[i].size = ka_info.size;
+ testcase1_k2u_info[i].spg_id = SPG_ID_DEFAULT;
+ testcase1_k2u_info[i].sp_flags = 0;
+ testcase1_k2u_info[i].pid = getpid();
+
+ ret = ioctl_k2u(dev_fd, testcase1_k2u_info + i);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_unshare;
+ }
+ }
+
+ for (j = 0; j < TEST1_THREAD_NUM; j++) {
+ ret = pthread_create(threads + j, NULL, testcase1_thread, NULL);
+ if (ret != 0) {
+ pr_info("pthread create failed");
+ ret = -1;
+ goto out_pthread_join;
+ }
+ }
+
+out_pthread_join:
+ for (j--; j >= 0; j--) {
+ pthread_join(threads[j], &thread_ret);
+ if (thread_ret != NULL) {
+ pr_info("child thread%d exited unexpectedly", j + 1);
+ ret = -1;
+ }
+ }
+out_unshare:
+ for (i--; i >= 0; i--)
+ if (ioctl_unshare(dev_fd, testcase1_k2u_info + i) < 0) {
+ pr_info("ioctl_unshare failed");
+ ret = -1;
+ }
+out_free:
+ ioctl_vfree(dev_fd, &ka_info);
+
+ return ret < 0 ? -1 : 0;
+}
+
+/*
+ * After the kernel allocates shared memory, k2u shares it into a share group
+ * that contains many processes; every process in the group reads and writes
+ * it concurrently through the uva.
+ */
+#define TEST2_CHILD_NUM 100
+#define TEST2_SEM_KEY 98224
+#define TEST2_SHM_KEY 98229
+#define TEST2_K2U_NUM 100
+
+static int testcase2_shmid;
+static int testcase2_semid;
+
+static int testcase2_child(int idx)
+{
+ int ret = 0;
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ semop(testcase2_semid, &sembuf, 1);
+
+ struct sp_make_share_info *k2u_info = shmat(testcase2_shmid, NULL, 0);
+ if (k2u_info == (void *)-1) {
+ pr_info("child%d, shmat failed, errno: %d", idx, errno);
+ ret = -1;
+ goto out;
+ }
+
+ for (int i = 0; i < TEST2_K2U_NUM; i++) {
+ char *buf = (char *)k2u_info[i].addr;
+ for (int j = 0; j < k2u_info[i].size; j++) {
+ if (buf[j] != 'e') {
+ pr_info("child%d, area check failed, i:%d, j:%d, buf[j]:%d", idx, i, j, buf[j]);
+ ret = -1;
+ goto out;
+ }
+ }
+ }
+
+out:
+ semop(testcase2_semid, &sembuf, 1);
+
+ return ret;
+}
+
+static pid_t fork_and_add_group(int idx, int group_id, int (*child)(int))
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx));
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+ } else
+ return pid;
+}
+
+static int testcase2(void)
+{
+ int group_id = 20;
+ int ret, i, j, status;
+ pid_t child[TEST2_CHILD_NUM];
+
+ struct vmalloc_info ka_info = {
+ .size = 3 * PAGE_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vmalloc failed");
+ return -1;
+ }
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ goto out_free;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'e',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out_free;
+ }
+
+ testcase2_semid = semget(TEST2_SEM_KEY, 1, IPC_CREAT | 0644);
+ if (testcase2_semid < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ ret = -1;
+ goto out_free;
+ }
+
+ ret = semctl(testcase2_semid, 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ goto sem_remove;
+ }
+
+ testcase2_shmid = shmget(TEST2_SHM_KEY, sizeof(struct sp_make_share_info) * TEST2_K2U_NUM,
+ IPC_CREAT | 0666);
+ if (testcase2_shmid < 0) {
+ pr_info("shmget failed, errno: %s", strerror(errno));
+ ret = -1;
+ goto sem_remove;
+ }
+
+ struct sp_make_share_info *k2u_info = shmat(testcase2_shmid, NULL, 0);
+ if (k2u_info == (void *)-1) {
+ pr_info("shmat failed, errno: %d", errno);
+ ret = -1;
+ goto shm_remove;
+ }
+
+ for (i = 0; i < TEST2_CHILD_NUM; i++) {
+ child[i] = fork_and_add_group(i, group_id, testcase2_child);
+ if (child[i] < 0) {
+ pr_info("fork child failed");
+ ret = -1;
+ goto kill_child;
+ }
+ }
+
+ for (j = 0; j < TEST2_K2U_NUM; j++) {
+ k2u_info[j].kva = ka_info.addr;
+ k2u_info[j].size = ka_info.size;
+ k2u_info[j].spg_id = group_id;
+ k2u_info[j].sp_flags = 0;
+ k2u_info[j].pid = getpid();
+
+ ret = ioctl_k2u(dev_fd, k2u_info + j);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_unshare;
+ }
+ }
+
+ // Tell the children to start reading the memory
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = TEST2_CHILD_NUM * 2,
+ .sem_flg = 0,
+ };
+ semop(testcase2_semid, &sembuf, 1);
+
+ // Wait for the children to finish reading the memory
+ sembuf.sem_op = 0;
+ semop(testcase2_semid, &sembuf, 1);
+
+out_unshare:
+ for (j--; j >= 0; j--)
+ if (ioctl_unshare(dev_fd, k2u_info + j) < 0) {
+ pr_info("ioctl_unshare failed");
+ ret = -1;
+ }
+kill_child:
+ for (i--; i >= 0; i--) {
+ kill(child[i], SIGKILL);
+ waitpid(child[i], &status, 0);
+ if (!ret) {
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d exited unexpectedly", i);
+ ret = -1;
+ }
+ }
+ }
+shm_remove:
+ if (shmctl(testcase2_shmid, IPC_RMID, NULL) < 0)
+ pr_info("shm remove failed, %s", strerror(errno));
+sem_remove:
+ if (semctl(testcase2_semid, 0, IPC_RMID) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+out_free:
+ ioctl_vfree(dev_fd, &ka_info);
+
+ return ret < 0 ? -1 : 0;
+}
+
+/*
+ * The kernel concurrently allocates multiple shared memory areas and k2u
+ * shares them all into one userspace process, which reads and writes each
+ * one through its own uva (watch for memory leaks).
+ */
+#define TEST3_THREAD_NUM 100
+#define TEST3_K2U_NUM 80
+static struct sp_make_share_info testcase3_k2u_info[TEST3_THREAD_NUM][TEST3_K2U_NUM];
+static struct vmalloc_info testcase3_ka_info[TEST3_THREAD_NUM];
+
+#define TEST3_SEM_KEY 88224
+static int testcase3_semid;
+
+static void *testcase3_thread(void *arg)
+{
+ int ret, i;
+ int idx = (int)(long)arg;
+
+ sem_dec_by_one(testcase3_semid);
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = idx,
+ .addr = testcase3_ka_info[idx].addr,
+ .size = testcase3_ka_info[idx].size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out;
+ }
+
+ for (i = 0; i < TEST3_K2U_NUM; i++) {
+ testcase3_k2u_info[idx][i].kva = testcase3_ka_info[idx].addr;
+ testcase3_k2u_info[idx][i].size = testcase3_ka_info[idx].size;
+ testcase3_k2u_info[idx][i].spg_id = SPG_ID_DEFAULT;
+ testcase3_k2u_info[idx][i].sp_flags = 0;
+ testcase3_k2u_info[idx][i].pid = getpid();
+
+ ret = ioctl_k2u(dev_fd, testcase3_k2u_info[idx] + i);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_unshare;
+ }
+ }
+
+ return NULL;
+
+out_unshare:
+ for (i--; i >= 0; i--)
+ if (ioctl_unshare(dev_fd, testcase3_k2u_info[idx] + i) < 0) {
+ pr_info("ioctl_unshare failed");
+ ret = -1;
+ }
+out:
+ return (void *)-1;
+}
+
+int tc3_vmalloc(void)
+{
+ int ret = 0;
+ int i = 0;
+
+ memset(testcase3_ka_info, 0, sizeof(struct vmalloc_info) * TEST3_THREAD_NUM);
+
+ for (i = 0; i < TEST3_THREAD_NUM; i++) {
+ testcase3_ka_info[i].size = 3 * PAGE_SIZE;
+ if (ioctl_vmalloc(dev_fd, testcase3_ka_info + i) < 0)
+ goto vfree;
+ }
+ return 0;
+
+vfree:
+ for (i--; i >= 0; i--)
+ ioctl_vfree(dev_fd, testcase3_ka_info + i);
+ return -1;
+}
+
+static int testcase3(void)
+{
+ int ret, thread_idx;
+ void *thread_ret = NULL;
+ pthread_t threads[TEST3_THREAD_NUM];
+
+ /* Create the semaphore */
+ testcase3_semid = sem_create(TEST3_SEM_KEY, "key");
+
+ /* The main process vmallocs */
+ if (tc3_vmalloc()) {
+ pr_info("ioctl_vmalloc failed");
+ goto out_remove_sem;
+ }
+
+ /* Kick off the worker threads */
+ for (thread_idx = 0; thread_idx < TEST3_THREAD_NUM; thread_idx++) {
+ ret = pthread_create(threads + thread_idx, NULL, testcase3_thread, (void *)(long)thread_idx);
+ if (ret != 0) {
+ pr_info("pthread create failed");
+ ret = -1;
+ goto out_pthread_join;
+ }
+ }
+ sem_inc_by_val(testcase3_semid, TEST3_THREAD_NUM);
+
+ /* Wait for the threads to exit */
+out_pthread_join:
+ for (thread_idx--; thread_idx >= 0; thread_idx--) {
+ if (ret < 0) {
+ pthread_kill(threads[thread_idx], SIGKILL);
+ pthread_join(threads[thread_idx], NULL);
+ } else {
+ pthread_join(threads[thread_idx], &thread_ret);
+ if (thread_ret != NULL) {
+ pr_info("child thread%d exited unexpectedly", thread_idx + 1);
+ ret = -1;
+ }
+ }
+ }
+
+ /* The main process verifies the k2u contents and unshares */
+ for (int i = 0; i < TEST3_THREAD_NUM; i++) {
+ for (int k = 0; k < TEST3_K2U_NUM; k++) {
+ char *buf = (char *)testcase3_k2u_info[i][k].addr;
+ for (int j = 0; j < testcase3_k2u_info[i][k].size; j++)
+ if (!ret && buf[j] != i) {
+ pr_info("area check failed, i:%d, j:%d, buf[j]:%d", i, j, buf[j]);
+ ret = -1;
+ }
+ if (ioctl_unshare(dev_fd, testcase3_k2u_info[i] + k) < 0) {
+ pr_info("ioctl_unshare failed, i:%d, k:%d, addr:0x%lx",
+ i, k, testcase3_k2u_info[i][k].addr);
+ ret = -1;
+ }
+ }
+ }
+
+ /* The main process vfrees */
+ for (int i = 0; i < TEST3_THREAD_NUM; i++)
+ ioctl_vfree(dev_fd, testcase3_ka_info + i);
+
+out_remove_sem:
+ if (sem_close(testcase3_semid) < 0)
+ pr_info("sem setval failed, %s", strerror(errno));
+
+ return ret < 0 ? -1 : 0;
+}
+
+/*
+ * Multiple groups, multiple processes: concurrently k2u into each group,
+ * have the processes read the memory, then exit.
+ * Each group holds several processes; each process vmallocs several areas
+ * and maps each area with k2u multiple times.
+ * Processes in the same group read through all of the mapped areas, then
+ * the owner process writes while all other processes and the kernel read
+ * (the check is expected to succeed).
+ *
+ * Processes within a group share data through systemV shared memory and
+ * synchronize through systemV semaphores.
+ */
+#define MAX_PROC_NUM 100
+#define GROUP_NUM 2
+#define PROC_NUM 10 // must not exceed MAX_PROC_NUM (100)
+#define ALLOC_NUM 11
+#define K2U_NUM 12
+#define TEST4_SHM_KEY_BASE 10240
+#define TEST4_SEM_KEY_BASE 10840
+
+static int shmids[GROUP_NUM];
+static int semids[GROUP_NUM];
+
+struct testcase4_shmbuf {
+ struct vmalloc_info ka_infos[PROC_NUM][ALLOC_NUM]; //ka_infos[10][11]
+ struct sp_make_share_info
+ k2u_infos[PROC_NUM][ALLOC_NUM][K2U_NUM]; // k2u_infos[10][11][12]
+};
+
+static void testcase4_vfree_and_unshare(struct vmalloc_info *ka_infos,
+ struct sp_make_share_info (*k2u_infos)[K2U_NUM], int group_idx, int proc_idx)
+{
+ for (int i = 0; i < ALLOC_NUM && ka_infos[i].addr; i++) {
+ for (int j = 0; j < K2U_NUM && k2u_infos[i][j].addr; j++) {
+ if (ioctl_unshare(dev_fd, k2u_infos[i] + j))
+ pr_info("ioctl unshare failed, errno: %d", errno);
+ k2u_infos[i][j].addr = 0;
+ }
+ if (ioctl_vfree(dev_fd, ka_infos + i))
+ pr_info("ioctl vfree failed, errno: %d", errno);
+ ka_infos[i].addr = 0;
+ }
+}
+
+static void testcase4_vfree_and_unshare_all(struct testcase4_shmbuf *shmbuf)
+{
+ pr_info("this is %s", __func__);
+ for (int i = 0; i < PROC_NUM; i++)
+ testcase4_vfree_and_unshare(shmbuf->ka_infos[i], shmbuf->k2u_infos[i], -1, -1);
+}
+
+static int testcase4_check_memory(struct sp_make_share_info
+ k2u_infos[ALLOC_NUM][K2U_NUM],
+ char offset, int group_idx, int proc_idx)
+{
+ for (int j = 0; j < ALLOC_NUM; j++) {
+ char expect = offset + j;
+ for (int k = 0; k < K2U_NUM; k++) {
+ char *buf = (char *)k2u_infos[j][k].addr;
+ for (int l = 0; l < k2u_infos[j][k].size; l++)
+ if (buf[l] != expect) {
+ pr_info("memory check failed");
+ return -1;
+ }
+ }
+ }
+ return 0;
+}
+
+static int testcase4_check_memory_all(struct sp_make_share_info
+ (*k2u_infos)[ALLOC_NUM][K2U_NUM],
+ char offset)
+{
+
+ for (int i = 0; i < PROC_NUM; i++) {
+ for (int j = 0; j < ALLOC_NUM; j++) {
+ char expect = offset + j;
+ for (int k = 0; k < K2U_NUM; k++) {
+ char *buf = (char *)k2u_infos[i][j][k].addr;
+ for (int l = 0; l < k2u_infos[i][j][k].size; l++)
+ if (buf[l] != expect) {
+ pr_info("memory check failed");
+ return -1;
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
+static void testcase4_set_memory(struct sp_make_share_info
+ (*k2u_infos)[ALLOC_NUM][K2U_NUM])
+{
+ for (int i = 0; i < PROC_NUM; i++)
+ for (int j = 0; j < ALLOC_NUM; j++) {
+ char *buf = (char *)k2u_infos[i][j][0].addr;
+ for (int l = 0; l < k2u_infos[i][j][0].size; l++)
+ buf[l]++;
+ }
+}
+
+#if 0
+#define ERRPR(fun, name, idx) \
+do { \
+ int ret = fun; \
+ if (ret < 0) \
+ pr_info(#fun "failed: %s", strerror(errno)); \
+ else \
+ pr_info(name "%d, " #fun "success", idx); \
+} while (0)
+#else
+#define ERRPR(fun, name, idx) fun
+#endif
+
+static int testcase4_grandchild(int idx)
+{
+
+ int ret, i, j;
+ int proc_idx = idx % MAX_PROC_NUM; // 0, 1, 2, ..., 9
+ int group_idx = idx / MAX_PROC_NUM - 1; // 0/1/2/3
+ int group_id = (group_idx + 1) * MAX_PROC_NUM; // 100/200/300/400
+ int semid = semids[group_idx];
+ int shmid = shmids[group_idx];
+
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = -1,
+ .sem_flg = 0,
+ };
+ ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+
+ struct testcase4_shmbuf *shmbuf = shmat(shmid, NULL, 0);
+ if (shmbuf == (void *)-1) {
+ pr_info("grandchild%d, shmat failed, errno: %d", idx, errno);
+ ret = -1;
+ goto error_out;
+ }
+
+ // Allocate memory and k2u it into the group
+ struct vmalloc_info *ka_info = shmbuf->ka_infos[proc_idx];
+ struct sp_make_share_info (*k2u_info)[K2U_NUM] = shmbuf->k2u_infos[proc_idx];
+ for (i = 0; i < ALLOC_NUM; i++) {
+ ka_info[i].size = 3 * PAGE_SIZE;
+ ret = ioctl_vmalloc(dev_fd, ka_info + i);
+ if (ret < 0) {
+ pr_info("ioctl_vmalloc failed");
+ ret = -1;
+ goto out_unshare;
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = i + 'A',
+ .addr = ka_info[i].addr,
+ .size = ka_info[i].size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto out_unshare;
+ }
+
+ for (j = 0; j < K2U_NUM; j++) {
+ k2u_info[i][j].kva = ka_info[i].addr;
+ k2u_info[i][j].size = ka_info[i].size;
+ k2u_info[i][j].spg_id = group_id;
+ k2u_info[i][j].sp_flags = 0;
+ k2u_info[i][j].pid = getppid();
+
+ ret = ioctl_k2u(dev_fd, k2u_info[i] + j);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto out_unshare;
+ }
+ }
+ }
+
+ // Decrement the semaphore to signal that vmalloc and k2u are done
+ ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+ // Wait for the semaphore to drop to 0 (all children finished k2u)
+ sembuf.sem_op = 0;
+ ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+ // All children and the parent read and check the memory
+ ret = testcase4_check_memory(shmbuf->k2u_infos[proc_idx], 'A', group_idx, proc_idx);
+ // Checking done; notify the parent by decrementing the semaphore
+ sembuf.sem_op = -1;
+ ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+ // Wait for the semaphore to reach 0 (all processes finished checking)
+ sembuf.sem_op = 0;
+ ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+ // Wait for the parent to finish writing the memory
+ sembuf.sem_op = -1;
+ ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+ // All children re-check the memory
+ if (!ret)
+ ret = testcase4_check_memory(shmbuf->k2u_infos[proc_idx], 'A' + 1, group_idx, proc_idx);
+ if (ret < 0)
+ pr_info("child %d grandchild %d check_memory('A' + 1) failed", group_idx, proc_idx);
+ else
+ pr_info("child %d grandchild %d check_memory('A' + 1) success", group_idx, proc_idx);
+ // Notify the parent that this child finished re-checking
+ ERRPR(semop(semid, &sembuf, 1), "grandchild", idx);
+ testcase4_vfree_and_unshare(ka_info, k2u_info, group_idx, proc_idx);
+ return ret;
+
+out_unshare:
+ testcase4_vfree_and_unshare(ka_info, k2u_info, group_idx, proc_idx);
+error_out:
+ kill(getppid(), SIGKILL);
+ return -1;
+}
+
+static int testcase4_child(int idx)
+{
+ int ret, child_num, status;
+ //int group_id = (idx + 1) * 100; // spg_id = 100, 200, 300, 400
+ int group_id = (idx + 1) * MAX_PROC_NUM;
+ int semid = semids[idx];
+ int shmid = shmids[idx];
+ pid_t child[PROC_NUM]; // 10
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return -1;
+ }
+
+ struct testcase4_shmbuf *shmbuf = shmat(shmid, NULL, 0);
+ if (shmbuf == (void *)-1) {
+ pr_info("child%d, shmat failed, errno: %d", idx, errno);
+ return -1;
+ }
+ memset(shmbuf, 0, sizeof(*shmbuf));
+
+ // Fork the children and add them to the group
+ for (child_num = 0; child_num < PROC_NUM; child_num++) {
+ child[child_num] = fork_and_add_group(group_id + child_num, group_id, testcase4_grandchild);
+ if (child[child_num] < 0) {
+ ret = -1;
+ goto kill_child;
+ }
+ }
+
+ // Tell the children to start vmalloc and k2u into the group
+ struct sembuf sembuf = {
+ .sem_num = 0,
+ .sem_op = PROC_NUM * 2,
+ .sem_flg = 0,
+ };
+ ERRPR(semop(semid, &sembuf, 1), "child", idx);
+ // Wait for the children to finish
+ sembuf.sem_op = 0;
+ ERRPR(semop(semid, &sembuf, 1), "child", idx);
+ // All children and the parent read and check the memory
+ ret = testcase4_check_memory_all(shmbuf->k2u_infos, 'A');
+ if (ret < 0)
+ goto unshare;
+ // Wait for all children to finish reading the memory
+ sleep(1);
+ sembuf.sem_op = PROC_NUM;
+ ERRPR(semop(semid, &sembuf, 1), "child", idx);
+ sembuf.sem_op = 0;
+ ERRPR(semop(semid, &sembuf, 1), "child", idx);
+ // Write the memory
+ testcase4_set_memory(shmbuf->k2u_infos);
+ // Tell the children the write is done
+ sleep(1);
+ sembuf.sem_op = PROC_NUM * 2;
+ ERRPR(semop(semid, &sembuf, 1), "child", idx);
+ // Wait for the children to finish reading
+ sembuf.sem_op = 0;
+ ERRPR(semop(semid, &sembuf, 1), "child", idx);
+
+ // Wait for the children to exit
+ for (int i = 0; i < PROC_NUM; i++) {
+ int status;
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d exited unexpectedly", i);
+ ret = -1;
+ }
+ child[i] = 0;
+ }
+
+ return 0;
+
+unshare:
+ testcase4_vfree_and_unshare_all(shmbuf);
+kill_child:
+ for (child_num--; child_num >= 0; child_num--) {
+ if (ret < 0) {
+ kill(child[child_num], SIGKILL);
+ waitpid(child[child_num], NULL, 0);
+ } else {
+ waitpid(child[child_num], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("grandchild%d exited unexpectedly", group_id + child_num);
+ ret = -1;
+ }
+ }
+ }
+
+ return ret < 0 ? -1 : 0;
+}
+
+static int testcase4(void)
+{
+ int ret = 0;
+
+ // init semaphores and shared memory areas
+ for (int i = 0; i < GROUP_NUM; i++)
+ shmids[i] = semids[i] = -1;
+ for (int i = 0; i < GROUP_NUM; i++) {
+ semids[i] = semget(TEST4_SEM_KEY_BASE + i, 1, IPC_CREAT | 0644);
+ if (semids[i] < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ ret = -1;
+ goto sem_remove;
+ }
+
+ ret = semctl(semids[i], 0, SETVAL, 0);
+ if (ret < 0) {
+ pr_info("sem setval failed, %s", strerror(errno));
+ ret = -1;
+ goto sem_remove;
+ }
+ }
+
+ for (int i = 0; i < GROUP_NUM; i++) {
+ shmids[i] = shmget(TEST4_SHM_KEY_BASE + i, sizeof(struct testcase4_shmbuf), IPC_CREAT | 0666);
+ if (shmids[i] < 0) {
+ pr_info("shmget failed, errno: %s", strerror(errno));
+ ret = -1;
+ goto shm_remove;
+ }
+ }
+
+ pid_t child[GROUP_NUM] = {0};
+ for (int i = 0; i < GROUP_NUM; i++) {
+ int group_id = i + 20;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("testcase4, group%d fork failed", group_id);
+ continue;
+ } else if (pid == 0) {
+ exit(testcase4_child(i));
+ }
+ child[i] = pid;
+ }
+
+ for (int i = 0; i < GROUP_NUM; i++) {
+ if (!child[i])
+ continue;
+
+ int status = 0;
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase4, child%d exited unexpectedly", i);
+ ret = -1;
+ } else
+ pr_info("testcase4, child%d success!!", i);
+ }
+
+shm_remove:
+ for (int i = 0; i < GROUP_NUM && shmids[i] >= 0; i++)
+ if (shmctl(shmids[i], IPC_RMID, NULL) < 0)
+ pr_info("shm remove failed, %s", strerror(errno));
+sem_remove:
+ for (int i = 0; i < GROUP_NUM && semids[i] >= 0; i++)
+ if (semctl(semids[i], 0, IPC_RMID) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "After the kernel allocates shared memory, k2u repeatedly shares it into the same userspace process, which reads and writes it concurrently through multiple uvas.")
+ TESTCASE_CHILD(testcase2, "After the kernel allocates shared memory, k2u repeatedly shares it into one share group with many processes; every process in the group reads and writes it concurrently through the uva.")
+ TESTCASE_CHILD(testcase3, "The kernel concurrently allocates multiple shared memory areas and k2u shares them all into the same userspace process, which reads and writes each through its own uva.")
+ TESTCASE_CHILD(testcase4, "The kernel concurrently allocates multiple shared memory areas and k2u shares them all into the same share group; every process in the group reads and writes through multiple uvas.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_pass_through.c b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_pass_through.c
new file mode 100644
index 000000000000..775fc130c92d
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_pass_through.c
@@ -0,0 +1,405 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Description:
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Feb 26 15:49:35 2021
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+#include <fcntl.h> /* For O_* constants */
+#include <semaphore.h>
+
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/sem.h>
+#include <sys/stat.h> /* For mode constants */
+
+#include "sem_use.h"
+#include "sharepool_lib.h"
+
+#define pr_info(fmt, args...) \
+ printf("[file:%s, func:%s, line:%d] " fmt "\n", __FILE__, __func__, __LINE__, ##args)
+
+#define REPEAT_TIMES 10
+#define TEST1_THREAD_NUM 20
+#define TEST2_ALLOC_TIMES 10
+#define TEST2_PROCESS_NUM 20
+#define TEST4_PROCESS_NUM 10
+#define TEST4_THREAD_NUM 10
+#define TEST4_RUN_TIME 200
+
+static int dev_fd;
+
+static void *testcase1_thread(void *arg)
+{
+ struct sp_alloc_info alloc_info = {0};
+ int ret;
+
+ alloc_info.flag = 0;
+ alloc_info.spg_id = SPG_ID_DEFAULT;
+ alloc_info.size = 4 * PAGE_SIZE;
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed, errno: %d", errno);
+ goto error;
+ }
+
+ pthread_exit((void *)0);
+
+error:
+ pthread_exit((void *)1);
+}
+
+/* Multiple threads, none of them in a group, call sp_alloc together */
+static int testcase1(void)
+{
+ int ret, i, j;
+ void *thread_ret = NULL;
+ pthread_t threads[TEST1_THREAD_NUM];
+
+ int semid = sem_create(0xbcd996, "hello");
+ if (semid < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ ret = -1;
+ goto test1_out;
+ }
+
+ for (i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("%s start times %d", __func__, i);
+ for (j = 0; j < TEST1_THREAD_NUM; j++) {
+ ret = pthread_create(threads + j, NULL, testcase1_thread, NULL);
+ if (ret != 0) {
+ pr_info("pthread create failed");
+ ret = -1;
+ goto out_pthread_join;
+ }
+ }
+
+out_pthread_join:
+ for (j--; j >= 0; j--) {
+ pthread_join(threads[j], &thread_ret);
+ if (thread_ret != NULL) {
+ pr_info("child thread%d exited unexpectedly", j + 1);
+ ret = -1;
+ goto test1_out;
+ }
+ }
+ }
+
+ sem_close(semid);
+test1_out:
+ return ret < 0 ? -1 : 0;
+}
+
+static int semid_tc2;
+
+static pid_t fork_process(int idx, int (*child)(int))
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx));
+ }
+
+ return pid;
+}
+
+static int testcase2_child(int idx)
+{
+ int i;
+ int ret = 0;
+ struct sp_alloc_info alloc_info[TEST2_ALLOC_TIMES] = {0};
+
+ sem_dec_by_one(semid_tc2);
+
+ for (i = 0; i < TEST2_ALLOC_TIMES; i++) {
+ if (idx % 2 == 0) {
+ alloc_info[i].flag = 2;
+ } else {
+ alloc_info[i].flag = 2 | SP_DVPP;
+ }
+ alloc_info[i].spg_id = SPG_ID_DEFAULT;
+ alloc_info[i].size = (idx + 1) * PAGE_SIZE;
+ }
+
+ for (i = 0; i < TEST2_ALLOC_TIMES; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed, errno: %d", errno);
+ goto out;
+ }
+ pr_info("child %d alloc success %d time.", idx, i);
+ }
+
+ for (i = 0; i < TEST2_ALLOC_TIMES; i++) {
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl free failed, errno: %d", errno);
+ goto out;
+ }
+ pr_info("child %d free success %d time.", idx, i);
+ }
+
+out:
+ return ret;
+}
+
+/* Multiple processes, none of them in a group, call sp_alloc together */
+static int testcase2(void)
+{
+ int ret = 0;
+ int i, status;
+ pid_t child[TEST2_PROCESS_NUM] = {0};
+
+ semid_tc2 = sem_create(0x1234abc, "testcase2");
+ if (semid_tc2 < 0) {
+ pr_info("open systemV semaphore failed, errno: %s", strerror(errno));
+ return -1;
+ }
+
+ for (i = 0; i < TEST2_PROCESS_NUM; i++) {
+ child[i] = fork_process(i, testcase2_child);
+ if (child[i] < 0) {
+ pr_info("fork child failed");
+ ret = -1;
+ goto kill_child;
+ }
+ }
+
+ sem_inc_by_val(semid_tc2, TEST2_PROCESS_NUM);
+
+kill_child:
+ for (i--; i >= 0; i--) {
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d exited unexpected", i);
+ ret = -1;
+ }
+ }
+
+ return ret < 0 ? -1 : 0;
+}
+
+/* The parent allocates memory directly and the child deliberately frees it; the free is expected to fail for lack of permission */
+static int testcase3(void)
+{
+ int ret = 0;
+ int status;
+ pid_t pid;
+
+ struct sp_alloc_info alloc_info = {
+ .flag = SP_HUGEPAGE_ONLY,
+ .spg_id = SPG_ID_DEFAULT,
+ .size = PMD_SIZE,
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed");
+ goto error;
+ }
+
+ pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ goto error;
+ } else if (pid == 0) {
+ /* sp_free deliberately */
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0 && errno == EINVAL) {
+ pr_info("sp_free returned EINVAL as expected");
+ } else if (ret < 0) {
+ pr_info("sp_free returned errno %d unexpectedly", errno);
+ exit(1);
+ } else {
+ pr_info("sp_free returned success unexpectedly");
+ exit(1);
+ }
+ exit(0);
+ }
+
+ waitpid(pid, &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child process failed, ret is %d", status);
+ goto error;
+ }
+
+ return 0;
+
+error:
+ return -1;
+}
+
+static void *testcase4_thread(void *arg)
+{
+ struct sp_alloc_info alloc_info = {0};
+ int ret;
+ int idx = (int)(long)arg;
+
+ struct vmalloc_info ka_info = {
+ .size = PAGE_SIZE,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("vmalloc failed, errno: %d", errno);
+ pthread_exit((void *)1);
+ }
+
+ struct karea_access_info karea_info = {
+ .mod = KAREA_SET,
+ .value = 'b',
+ .addr = ka_info.addr,
+ .size = ka_info.size,
+ };
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea set failed, errno %d", errno);
+ goto error;
+ }
+
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ alloc_info.spg_id = SPG_ID_DEFAULT;
+ if (idx % 2 == 0) {
+ alloc_info.flag = SP_DVPP;
+ alloc_info.size = (idx + 1) * PAGE_SIZE;
+ } else {
+ alloc_info.flag = SP_HUGEPAGE_ONLY;
+ alloc_info.size = (idx + 1) * PMD_SIZE;
+ }
+
+ for (int i = 0; i < TEST4_RUN_TIME; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed, errno: %d", errno);
+ goto error;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl free failed, errno: %d", errno);
+ goto error;
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ goto error;
+ }
+
+ char *buf = (char *)k2u_info.addr;
+ for (int j = 0; j < k2u_info.size; j++) {
+ if (buf[j] != 'b') {
+ pr_info("check k2u context failed");
+ goto error;
+ }
+ }
+
+ if (ioctl_unshare(dev_fd, &k2u_info)) {
+ pr_info("unshare memory failed, errno: %d", errno);
+ goto error;
+ }
+ }
+
+ ioctl_vfree(dev_fd, &ka_info);
+ pthread_exit((void *)0);
+
+error:
+ ioctl_vfree(dev_fd, &ka_info);
+ pthread_exit((void *)1);
+}
+
+static int testcase4_child(int idx)
+{
+ int i, ret;
+ void *thread_ret = NULL;
+ pthread_t threads[TEST4_THREAD_NUM];
+
+ for (i = 0; i < TEST4_THREAD_NUM; i++) {
+ ret = pthread_create(threads + i, NULL, testcase4_thread, (void *)(long)idx);
+ if (ret < 0) {
+ pr_info("pthread create failed");
+ goto out_pthread_join;
+ }
+ }
+
+out_pthread_join:
+ for (i--; i >= 0; i--) {
+ pthread_join(threads[i], &thread_ret);
+ if (thread_ret != NULL) {
+ pr_info("child thread%d exited unexpectedly", i + 1);
+ ret = -1;
+ }
+ }
+
+ return ret;
+}
+
+/*
+ * Multiple processes, each running multiple threads, mixing direct-call allocation with k2u to task
+ */
+static int testcase4(void)
+{
+ int ret = 0;
+ int i, status;
+ pid_t child[TEST4_PROCESS_NUM] = {0};
+
+ for (i = 0; i < TEST4_PROCESS_NUM; i++) {
+ child[i] = fork_process(i, testcase4_child);
+ if (child[i] < 0) {
+ pr_info("fork child failed");
+ ret = -1;
+ goto kill_child;
+ }
+ }
+
+kill_child:
+ for (i--; i >= 0; i--) {
+ if (ret != 0) {
+ kill(child[i], SIGKILL);
+ }
+ waitpid(child[i], &status, 0);
+ if (!ret) {
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d exited unexpectedly", i);
+ ret = -1;
+ goto test4_out;
+ }
+ }
+ }
+
+test4_out:
+ return ret != 0 ? -1 : 0;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "multiple threads, no group join, concurrent direct-call allocation")
+ TESTCASE_CHILD(testcase2, "multiple processes, no group join, concurrent direct-call allocation")
+ TESTCASE_CHILD(testcase3, "parent allocates via direct call, child deliberately frees, expected to fail with no permission")
+ TESTCASE_CHILD(testcase4, "multiple processes, each with multiple threads, mixing direct-call allocation with k2u to task")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_thread_k2u.c b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_thread_k2u.c
new file mode 100644
index 000000000000..01b569509ce8
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_k2u_test/test_mult_thread_k2u.c
@@ -0,0 +1,197 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Mon Dec 14 07:53:12 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/shm.h>
+#include <sys/sem.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include "sharepool_lib.h"
+#include "sem_use.h"
+
+
+/*
+ * Same group, multiple processes, multiple threads; each thread runs:
+ * vmalloc -> k2u -> read/write -> unshare -> vfree
+ */
+#define PROCESS_PER_GROUP 2
+#define THREAD_PER_PROCESS 128
+
+static int testcase_semid = -1;
+
+/* Thread: vmalloc -> k2u + user-space write + unshare, 100 times -> vfree */
+static void *testcase_thread_routing(void *arg)
+{
+ int ret = 0;
+ struct vmalloc_info ka_info = {
+ .size = PMD_SIZE * 2,
+ };
+ ret = ioctl_vmalloc(dev_fd, &ka_info);
+ if (ret < 0) {
+ pr_info("ioctl_vmalloc failed");
+ return (void *)-1;
+ }
+
+ for (int i = 0; i < 100; i++) {
+ struct sp_make_share_info k2u_info = {
+ .kva = ka_info.addr,
+ .size = ka_info.size,
+ .spg_id = SPG_ID_DEFAULT,
+ .sp_flags = 0,
+ .pid = getpid(),
+ };
+
+ // k2task if the process has not joined a group, otherwise k2spg
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl_k2u failed, errno: %d", errno);
+ ret = -1;
+ goto out;
+ }
+ memset((void *)k2u_info.addr, 'a', k2u_info.size);
+ ioctl_unshare(dev_fd, &k2u_info);
+ }
+
+out:
+ ioctl_vfree(dev_fd, &ka_info);
+
+ return (void *)(long)ret;
+}
+
+static int testcase_child_process(int idx)
+{
+ int ret = 0;
+ pthread_t threads[THREAD_PER_PROCESS] = {0};
+
+ if (testcase_semid == -1) {
+ pr_info("unexpected semid");
+ return -1;
+ }
+
+ sem_dec_by_one(testcase_semid);
+
+ for (int i = 0; i < THREAD_PER_PROCESS; i++) {
+ ret = pthread_create(threads + i, NULL, testcase_thread_routing, NULL);
+ if (ret < 0) {
+ pr_info("pthread create failed");
+ ret = -1;
+ goto out;
+ }
+ }
+
+ pr_info("child%d create %d threads success", idx, THREAD_PER_PROCESS);
+
+out:
+ for (int i = 0; i < THREAD_PER_PROCESS; i++)
+ if (threads[i])
+ pthread_join(threads[i], NULL);
+
+ return ret;
+}
+
+static pid_t fork_and_add_group(int idx, int group_id, int (*child)(int))
+{
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ return -1;
+ } else if (pid == 0) {
+ exit(child(idx));
+ }
+
+ // not joining a group
+ if (group_id == SPG_ID_DEFAULT)
+ return pid;
+
+ struct sp_add_group_info ag_info = {
+ .pid = pid,
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ int ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ kill(pid, SIGKILL);
+ waitpid(pid, NULL, 0);
+ return -1;
+ } else
+ return pid;
+}
+
+#define TEST_SEM_KEY 0xffabc
+static int testcase_routing(int group_id, int (*child_process)(int))
+{
+ int ret = 0, status;
+ pid_t child[PROCESS_PER_GROUP] = {0};
+
+ testcase_semid = sem_create(TEST_SEM_KEY, "test_mult_thread_k2u");
+ if (testcase_semid < 0) {
+ pr_info("open System V semaphore failed, errno: %s", strerror(errno));
+ return -1;
+ }
+
+ for (int i = 0; i < PROCESS_PER_GROUP; i++) {
+ pid_t pid = fork_and_add_group(i, group_id, child_process);
+ if (pid < 0)
+ goto out;
+ child[i] = pid;
+ }
+
+ sem_inc_by_val(testcase_semid, PROCESS_PER_GROUP);
+
+ for (int i = 0; i < PROCESS_PER_GROUP; i++) {
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child process %d exited unexpectedly", i);
+ ret = -1;
+ goto out;
+ }
+ child[i] = 0;
+ }
+ goto sem_remove;
+
+out:
+ for (int i = 0; i < PROCESS_PER_GROUP; i++)
+ if (child[i]) {
+ kill(child[i], SIGKILL);
+ waitpid(child[i], NULL, 0);
+ }
+
+sem_remove:
+ if (sem_close(testcase_semid) < 0)
+ pr_info("sem remove failed, %s", strerror(errno));
+
+ return ret;
+}
+
+static int testcase1(void) { return testcase_routing(10, testcase_child_process); }
+static int testcase2(void) { return testcase_routing(SPG_ID_DEFAULT, testcase_child_process); }
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "multi-threaded vmalloc -> k2spg -> memset -> unshare -> vfree")
+ TESTCASE_CHILD(testcase2, "multi-threaded vmalloc -> k2task -> memset -> unshare -> vfree")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k.c b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k.c
new file mode 100644
index 000000000000..04a5a3e5c6e0
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k.c
@@ -0,0 +1,514 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Tue Nov 24 15:40:31 2020
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <pthread.h>
+
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+
+#define MAX_ALLOC 100000
+#define MAX_SHARE 1000
+#define MAX_READ 100000
+
+static int alloc_num = 20;
+static int share_num = 20;
+static int read_num = 20;
+
+struct __thread_info {
+ struct sp_make_share_info *u2k_info;
+ struct karea_access_info *karea_info;
+};
+
+/*
+ * User process A joins a group, allocates and writes memory N, then shares it to the kernel repeatedly via u2k (share_num times); the kernel module repeatedly reads the same memory N through each kva (read_num times) successfully.
+ * After A stops sharing N, kernel reads of N fail. A then frees memory N.
+ */
+static void *grandchild1(void *arg)
+{
+ struct karea_access_info *karea_info = (struct karea_access_info*)arg;
+ int ret = 0;
+ for (int j = 0; j < read_num; j++) {
+ ret = ioctl_karea_access(dev_fd, karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ pthread_exit((void *)(long)ret);
+ }
+ pr_info("thread read u2k area %dth time success", j);
+ }
+ pr_info("thread read u2k area %d times success", read_num);
+ pthread_exit((void *)(long)ret);
+}
+
+static int child1(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+ pr_info("want to add group_id: %d", group_id);
+
+ // add group()
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("now added into group_id: %d", alloc_info->spg_id);
+
+ // alloc()
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ pr_info("alloc %0lx memory success", alloc_info->size);
+
+ // write
+ memset((void *)alloc_info->addr, 'o', alloc_info->size);
+ pr_info("memset success");
+
+ // u2k
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+ struct karea_access_info *karea_info = (struct karea_access_info*)malloc(share_num * sizeof(struct karea_access_info));
+
+ for (int i = 0; i < share_num; i++) { // the same user memory region can be shared to the kernel many times
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ karea_info[i].mod = KAREA_CHECK;
+ karea_info[i].value = 'o';
+ karea_info[i].addr = u2k_info.addr;
+ karea_info[i].size = u2k_info.size;
+ }
+ pr_info("u2k share %d times success", share_num);
+
+ // repeated kernel reads (too slow; disabled)
+ //for (int j = 0; j < read_num; j++) {
+ for (int i = 0; i < share_num; i++) {
+ ret = ioctl_karea_access(dev_fd, &karea_info[i]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+ pr_info("kernel read %dth %0lx area success", i, alloc_info->size);
+ }
+ //}
+ //pr_info("kernel read %d times success", read_num);
+
+ // concurrent kernel reads, one thread per shared kva
+ pthread_t childs[MAX_SHARE] = {0};
+ int status = 0;
+ for (int i = 0; i < share_num; i++) {
+ ret = pthread_create(&childs[i], NULL, grandchild1, (void *)&karea_info[i]);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ exit(-1);
+ }
+ }
+ pr_info("create %d threads success", share_num);
+
+ void *child_ret;
+ for (int i = 0; i < share_num; i++) {
+ pthread_join(childs[i], &child_ret);
+ if ((int)(long)child_ret != 0) {
+ pr_info("grandchild1 %d test failed, %d", i, (int)(long)child_ret);
+ return (int)(long)child_ret;
+ }
+ }
+ pr_info("exit %d threads success", share_num);
+
+ for (int i = 0; i < share_num; i++) {
+ u2k_info.addr = karea_info[i].addr;
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+ pr_info("unshare u2k area %d times success", share_num);
+
+ /*
+ * Accessing the memory again after unshare crashes the kernel; run this only during manual testing
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ free(karea_info);
+ return ret;
+}
+
+/*
+ * User process A joins a group, allocates and writes alloc_num memory blocks, and shares them to the kernel via u2k; the kernel module reads each block through its kva successfully (read_num reads per kva).
+ * After A stops sharing, kernel reads of N fail. A then frees memory N.
+ */
+static void* grandchild2(void *arg)
+{
+ int ret = 0;
+ struct __thread_info* thread2_info = (struct __thread_info*)arg;
+ ret = ioctl_u2k(dev_fd, thread2_info->u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ pthread_exit((void *)(long)ret);
+ }
+ thread2_info->karea_info->mod = KAREA_CHECK;
+ thread2_info->karea_info->value = 'p';
+ thread2_info->karea_info->addr = thread2_info->u2k_info->addr;
+ thread2_info->karea_info->size = thread2_info->u2k_info->size;
+ pthread_exit((void *)(long)ret);
+}
+
+static int child2(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ // alloc alloc_num memory blocks
+ struct sp_alloc_info *all_alloc_info = (struct sp_alloc_info*)malloc(alloc_num * sizeof(struct sp_alloc_info));
+ for (int i = 0; i < alloc_num; i++) {
+ all_alloc_info[i].flag = alloc_info->flag;
+ all_alloc_info[i].spg_id = alloc_info->spg_id;
+ all_alloc_info[i].size = alloc_info->size;
+ ret = ioctl_alloc(dev_fd, &all_alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)all_alloc_info[i].addr, 'p', all_alloc_info[i].size);
+ }
+
+ struct sp_make_share_info *all_u2k_info = (struct sp_make_share_info*)malloc(alloc_num * sizeof(struct sp_make_share_info));
+
+ struct karea_access_info *karea_info = (struct karea_access_info*)malloc(alloc_num * sizeof(struct karea_access_info));
+
+ // call u2k concurrently:
+ // create one thread per allocated block, each calling u2k on it and storing the returned address
+ pthread_t childs[MAX_ALLOC] = {0};
+ struct __thread_info thread2_info[MAX_ALLOC];
+ int status = 0;
+ for (int i = 0; i < alloc_num; i++) {
+ all_u2k_info[i].uva = all_alloc_info[i].addr;
+ all_u2k_info[i].size = all_alloc_info[i].size;
+ all_u2k_info[i].pid = getpid();
+ thread2_info[i].u2k_info = &all_u2k_info[i];
+ thread2_info[i].karea_info = &karea_info[i];
+ ret = pthread_create(&childs[i], NULL, grandchild2, (void *)&thread2_info[i]);
+ if (ret != 0) {
+ pr_info("pthread_create failed, errno: %d", errno);
+ exit(-1);
+ }
+ }
+
+ // join all threads
+ void *child_ret;
+ for (int i = 0; i < alloc_num; i++) {
+ pthread_join(childs[i], &child_ret);
+ if ((int)(long)child_ret != 0) {
+ pr_info("grandchild2 %d test failed, %d", i, (int)(long)child_ret);
+ return (int)(long)child_ret;
+ }
+ }
+
+ // kernel reads the memory
+ for (int j = 0; j < read_num; j++) {
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_karea_access(dev_fd, &karea_info[i]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+ }
+ }
+
+ // unshare all blocks
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_unshare(dev_fd, &all_u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ /*
+ * Accessing the memory again after unshare crashes the kernel; run this only during manual testing
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ // free all blocks
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, &all_alloc_info[i]);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+ free(all_alloc_info);
+ free(all_u2k_info);
+ free(karea_info);
+ return ret;
+}
+
+/*
+ * User process A joins a group, allocates and writes memory N, then repeats share_num times: (u2k share to the kernel, kernel module reads memory N successfully, user space unshares).
+ * After the last unshare, kernel reads of N fail. A then frees memory N.
+ */
+static int child3(struct sp_alloc_info *alloc_info)
+{
+ int ret;
+ int group_id = alloc_info->spg_id;
+
+ // add group
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("ioctl_add_group failed, errno: %d", errno);
+ return ret;
+ }
+
+ // alloc
+ ret = ioctl_alloc(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("ioctl_alloc failed, errno: %d", errno);
+ return ret;
+ }
+ memset((void *)alloc_info->addr, 'q', alloc_info->size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info->addr,
+ .size = alloc_info->size,
+ .pid = getpid(),
+ };
+
+ // repeat: u2k -> kernel read -> unshare
+ for (int i = 0; i < share_num; i++) {
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'q',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ /*
+ * Accessing the memory again after unshare crashes the kernel; run this only during manual testing
+ */
+#if 0
+ pr_info("recheck karea");
+ ret = ioctl_karea_access(dev_fd, &karea_info[0]);
+ if (ret < 0) {
+ pr_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+#endif
+
+ ret = ioctl_free(dev_fd, alloc_info);
+ if (ret < 0) {
+ pr_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ return ret;
+}
+
+static void print_help(void)
+{
+ printf("Usage: ./test_mult_u2k -n alloc_num -s share_num -r read_num\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "n:s:r:")) != -1) {
+ switch (opt) {
+ case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > MAX_ALLOC || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's': // number of u2k shares
+ share_num = atoi(optarg);
+ if (share_num > MAX_SHARE || share_num <= 0) {
+ printf("share number invalid\n");
+ return -1;
+ }
+ break;
+ case 'r': // number of kernel reads
+ read_num = atoi(optarg);
+ if (read_num > MAX_READ || read_num <= 0) {
+ printf("read number invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+#define PROC_NUM 4
+static struct sp_alloc_info alloc_infos[] = {
+ {
+ .flag = 0,
+ .spg_id = 10,
+ .size = 100 * PAGE_SIZE, //400K
+ },
+ {
+ .flag = SP_HUGEPAGE,
+ .spg_id = 12,
+ .size = 10 * PMD_SIZE, // 20M
+ },
+ {
+ .flag = SP_DVPP,
+ .spg_id = 19,
+ .size = 100000, // ~100K
+ },
+ {
+ .flag = SP_DVPP | SP_HUGEPAGE,
+ .spg_id = 19,
+ .size = 10000000, // ~10M
+ },
+};
+
+int (*child_funcs[])(struct sp_alloc_info *) = {
+ child1,
+ child2,
+ child3,
+};
+
+static int testcase(int child_idx)
+{
+ int ret = 0;
+ pid_t procs[PROC_NUM];
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_info("fork error");
+ ret = -1;
+ goto error_out;
+ } else if (pid == 0) {
+ exit(child_funcs[child_idx](alloc_infos + i));
+ } else {
+ procs[i] = pid;
+ }
+ }
+
+ for (int i = 0; i < sizeof(alloc_infos) / sizeof(alloc_infos[0]); i++) {
+ int status = 0;
+ waitpid(procs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("testcase failed!!, func: %d, alloc info: %d", child_idx, i);
+ ret = -1;
+ } else {
+ pr_info("testcase success!!, func: %d, alloc info: %d", child_idx, i);
+ }
+ }
+
+ return ret;
+error_out:
+ return -1;
+}
+
+static int testcase1(void) { return testcase(0); }
+static int testcase2(void) { return testcase(1); }
+static int testcase3(void) { return testcase(2); }
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "user space repeatedly shares the same memory to the kernel via u2k; the kernel module repeatedly reads it through each kva")
+ TESTCASE_CHILD(testcase2, "create one thread per allocated block, each calling u2k and storing the returned address; the kernel reads the memory")
+ TESTCASE_CHILD(testcase3, "user-space u2k -> kernel read -> unshare, in a loop")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k3.c b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k3.c
new file mode 100644
index 000000000000..0e4b5728df30
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k3.c
@@ -0,0 +1,314 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 27 13:45:03 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * group_num groups, process_pre_group processes per group; all processes concurrently run multiple rounds of (sp_alloc -> u2k -> kernel read -> unshare -> sp_free).
+ * Semaphores are used for synchronization; processes sleep first after creation.
+ */
+
+#define MAX_GROUP 500
+#define MAX_PROC_PER_GRP 500
+#define MAX_ALLOC 100000
+
+static sem_t *child_sync[MAX_GROUP];
+static sem_t *grandchild_sync[MAX_GROUP];
+
+static int group_num = 2;
+static int process_pre_group = 100;
+static int alloc_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+ /* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync[arg / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[arg / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = alloc_size,
+ };
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed");
+ return ret;
+ } else {
+ if (IS_ERR_VALUE(alloc_info.addr)) {
+ pr_local_info("sp_alloc returned error %ld", alloc_info.addr);
+ return -1;
+ }
+ }
+ memset((void *)alloc_info.addr, 'r', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'r',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_local_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ printf("child %d grandchild %d exit!!\n", arg / MAX_PROC_PER_GRP, getpid());
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_pre_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed");
+ exit(-1);
+ } else if (pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ /* add_group sometimes fails with -17 (EEXIST); retrying sometimes succeeds, but sometimes it still hangs here forever:
+ do {
+ pr_local_info("add grandchild%d to group %d failed, retry", num, group_id);
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ } while (ret < 0);
+ */
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ kill(childs[i], SIGKILL);
+ childs[i] = 0;
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+ /* notify the child that it was added to the group */
+ sem_post(child_sync[arg]);
+
+ /* wait for the child to pick up its group membership */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (status) {
+ pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+ ret = status;
+ }
+ }
+
+ printf("child %d exit!!\n", arg);
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help(void)
+{
+ printf("Usage: ./test_mult_u2k3 -g group_num -p proc_num -n alloc&share_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+ case 'p': // processes per group
+ process_pre_group = atoi(optarg);
+ if (process_pre_group > MAX_PROC_PER_GRP || process_pre_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+ case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+ case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > MAX_ALLOC || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's': // allocation size
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[MAX_GROUP];
+
+ pr_local_info("group: %d, process_pre_group: %d", group_num, process_pre_group);
+
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+ grandchild_sync[i] = sem_open(buf, O_CREAT, 0644, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+ child_sync[i] = sem_open(buf, O_CREAT, 0644, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ if (status) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = status;
+ }
+ }
+
+ pr_local_info("exit!!");
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "processes concurrently run sp_alloc -> u2k -> kernel read -> unshare -> sp_free in a loop")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k4.c b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k4.c
new file mode 100644
index 000000000000..c856fd3a3caa
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/mult_u2k_test/test_mult_u2k4.c
@@ -0,0 +1,310 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Fri Nov 27 13:45:03 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * group_num groups, process_pre_group processes per group; all processes concurrently run multiple rounds of (sp_alloc -> u2k -> kernel read -> unshare -> sp_free).
+ * Semaphores are used for synchronization; processes sleep first after creation.
+ */
+
+#define MAX_GROUP 500
+#define MAX_PROC_PER_GRP 500
+#define MAX_ALLOC 100000
+
+static sem_t *child_sync[MAX_GROUP];
+static sem_t *grandchild_sync[MAX_GROUP];
+
+static int group_num = 2;
+static int process_pre_group = 100;
+static int alloc_num = 100;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+ /* wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync[arg / MAX_PROC_PER_GRP]);
+ } while ((ret != 0) && errno == EINTR);
+
+ pr_local_info("start!!");
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ sem_post(grandchild_sync[arg / MAX_PROC_PER_GRP]);
+ if (group_id < 0) {
+ pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ return -1;
+ }
+
+ struct sp_alloc_info alloc_info = {
+ .flag = 0,
+ .spg_id = group_id,
+ .size = alloc_size,
+ };
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed");
+ return ret;
+ } else {
+ if (IS_ERR_VALUE(alloc_info.addr)) {
+ pr_local_info("sp_alloc returned error %ld", alloc_info.addr);
+ return -1;
+ }
+ }
+ memset((void *)alloc_info.addr, 'r', alloc_info.size);
+
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_info.addr,
+ .size = alloc_info.size,
+ .pid = getpid(),
+ };
+
+ //ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_u2k failed, errno: %d", errno);
+ return ret;
+ }
+ struct karea_access_info karea_info = {
+ .mod = KAREA_CHECK,
+ .value = 'r',
+ .addr = u2k_info.addr,
+ .size = u2k_info.size,
+ };
+
+ //ret = ioctl_karea_access(dev_fd, &karea_info);
+ if (ret < 0) {
+ pr_local_info("karea check failed, errno %d", errno);
+ return ret;
+ }
+
+ //ret = ioctl_unshare(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_local_info("ioctl_unshare failed, errno: %d", errno);
+ return ret;
+ }
+
+ ret = ioctl_free(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_local_info("free area failed, errno: %d", errno);
+ return ret;
+ }
+ }
+
+ printf("child %d grandchild %d exit!!\n", arg / MAX_PROC_PER_GRP, getpid());
+ return 0;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+ for (int i = 0; i < process_pre_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+		} else if (pid == 0) {
+ ret = grandchild_process(num);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+				/* Adding to the group sometimes fails with -17 (-EEXIST). Retrying sometimes succeeds, but sometimes it just hangs here forever:
+ do {
+ pr_local_info("add grandchild%d to group %d failed, retry", num, group_id);
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ } while (ret < 0);
+ */
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ kill(childs[i], SIGKILL);
+ childs[i] = 0;
+ goto out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+			/* Notify the grandchild that it has been added to the group */
+ sem_post(child_sync[arg]);
+
+			/* Wait for the grandchild to pick up the group membership */
+ do {
+ ret = sem_wait(grandchild_sync[arg]);
+ } while ((ret != 0) && errno == EINTR);
+ }
+ }
+
+out:
+ for (int i = 0; i < process_pre_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+		if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+			ret = -1;
+		}
+ }
+
+ printf("child %d exit!!\n", arg);
+ return ret;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_multi_u2k3 -g group_num -p proc_num -n alloc&share_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_pre_group = atoi(optarg);
+ if (process_pre_group > MAX_PROC_PER_GRP || process_pre_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > MAX_ALLOC || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+		case 's': // allocation size
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ printf("unsupported options: '%c'\n", opt);
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[MAX_GROUP];
+
+ pr_local_info("group: %d, process_pre_group: %d", group_num, process_pre_group);
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_grandchild_sync%d", i);
+		grandchild_sync[i] = sem_open(buf, O_CREAT, 0666, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ char buf[100];
+ sprintf(buf, "/sharepool_child_sync%d", i);
+		child_sync[i] = sem_open(buf, O_CREAT, 0666, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+		if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+			pr_local_info("child%d test failed, %d", i, status);
+			ret = -1;
+		}
+ }
+
+ pr_local_info("exit!!");
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Concurrent processes: repeat (sp_alloc -> u2k -> kernel reads memory -> unshare -> sp_free)")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/stress_test/Makefile b/tools/testing/sharepool/testcase/test_mult_process/stress_test/Makefile
new file mode 100644
index 000000000000..9a2b520d1b5f
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/stress_test/Makefile
@@ -0,0 +1,13 @@
+test%: test%.c
+ $(CC) $^ -o $@ $(sharepool_lib_ccflags) -lpthread
+
+src:=$(wildcard *.c)
+testcases:=$(patsubst %.c,%,$(src))
+
+default: $(testcases)
+
+install: $(testcases)
+ cp $(testcases) $(TOOL_BIN_DIR)/test_mult_process
+
+clean:
+ rm -rf $(testcases)
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_alloc_free_two_process.c b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_alloc_free_two_process.c
new file mode 100644
index 000000000000..263821bee137
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_alloc_free_two_process.c
@@ -0,0 +1,303 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Wed Nov 11 07:12:29 2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+
+#include <fcntl.h> /* For O_* constants */
+#include <sys/stat.h> /* For mode constants */
+#include <semaphore.h>
+
+#include "sharepool_lib.h"
+
+/*
+ * Two groups, two processes per group, repeatedly allocating and freeing memory at the same time.
+ *
+ * Semaphores are used for synchronization; each process waits after creation until it is signalled.
+ */
+
+#define NR_GROUP 100
+#define MAX_PROC_PER_GRP 100
+
+static int group_num = 2;
+static int process_per_group = 3;
+static int alloc_num = 1000;
+static size_t alloc_size = 4 * PAGE_SIZE;
+
+static int grandchild_process(int arg, sem_t *child_sync, sem_t *grandchild_sync)
+{
+#define pr_local_info(fmt, args...) printf("[grandchild%d, pid:%d] " fmt "\n", arg, getpid(), ##args)
+ int ret;
+
+ struct sp_alloc_info *alloc_infos = malloc(sizeof(*alloc_infos) * alloc_num);
+ if (!alloc_infos) {
+ pr_local_info("malloc failed");
+ return -1;
+ }
+
+	/* Wait for the parent to add us to the group */
+ do {
+ ret = sem_wait(child_sync);
+ } while ((ret != 0) && errno == EINTR);
+
+ sleep(1); // it seems sem_wait doesn't work as expected
+ pr_local_info("start!!, ret is %d, errno is %d", ret, errno);
+
+ int group_id = ioctl_find_first_group(dev_fd, getpid());
+ if (group_id < 0) {
+		pr_local_info("ioctl_find_first_group failed, %d", group_id);
+ goto error_out;
+ }
+
+ for (int i = 0; i < alloc_num; i++) {
+		alloc_infos[i].flag = 0;
+		alloc_infos[i].spg_id = group_id;
+		alloc_infos[i].size = alloc_size;
+
+		/* sp_alloc */
+		ret = ioctl_alloc(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl alloc failed\n");
+ goto error_out;
+ } else {
+ if (IS_ERR_VALUE(alloc_infos[i].addr)) {
+ pr_local_info("sp_alloc return err is %ld\n", alloc_infos[i].addr);
+ goto error_out;
+ }
+ }
+
+ memset((void *)alloc_infos[i].addr, 'z', alloc_infos[i].size);
+ }
+
+ sem_post(grandchild_sync);
+ do {
+ ret = sem_wait(child_sync);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < alloc_num; i++) {
+ ret = ioctl_free(dev_fd, alloc_infos + i);
+ if (ret < 0) {
+ pr_local_info("ioctl free failed, errno: %d", errno);
+ goto error_out;
+ }
+ }
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ pr_local_info("exit!!");
+ return 0;
+
+error_out:
+ sem_post(grandchild_sync);
+ free(alloc_infos);
+ return -1;
+#undef pr_local_info
+}
+
+static int child_process(int arg)
+{
+#define pr_local_info(fmt, args...) printf("[child%d , pid:%d] " fmt "\n", arg, getpid(), ##args)
+ pr_local_info("start!!");
+
+ int ret, status = 0;
+ int group_id = arg + 100;
+ pid_t childs[MAX_PROC_PER_GRP] = {0};
+ sem_t *child_sync[MAX_PROC_PER_GRP] = {0};
+ sem_t *grandchild_sync[MAX_PROC_PER_GRP] = {0};
+
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ .spg_id = group_id,
+ };
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0)
+ return -1;
+
+	// create sync semaphores for the grandchildren
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+		sprintf(buf, "/sharepool_grandchild_sync%d_%d", arg, i);
+		grandchild_sync[i] = sem_open(buf, O_CREAT, 0666, 0);
+ if (grandchild_sync[i] == SEM_FAILED) {
+ pr_local_info("grandchild sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// create sync semaphores for the children
+ for (int i = 0; i < process_per_group; i++) {
+ char buf[100];
+		sprintf(buf, "/sharepool_child_sync%d_%d", arg, i);
+		child_sync[i] = sem_open(buf, O_CREAT, 0666, 0);
+ if (child_sync[i] == SEM_FAILED) {
+ pr_local_info("child sem_open failed");
+ return -1;
+ }
+ sem_unlink(buf);
+ }
+
+	// fork grandchildren and add each of them to the group
+ for (int i = 0; i < process_per_group; i++) {
+ int num = arg * MAX_PROC_PER_GRP + i;
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed\n");
+ exit(-1);
+		} else if (pid == 0) {
+ ret = grandchild_process(num, child_sync[i], grandchild_sync[i]);
+ exit(ret);
+ } else {
+ pr_local_info("fork grandchild%d, pid: %d", num, pid);
+ childs[i] = pid;
+
+ ag_info.pid = pid;
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_local_info("add grandchild%d to group %d failed", num, group_id);
+ goto error_out;
+ } else
+ pr_local_info("add grandchild%d to group %d successfully", num, group_id);
+
+			/* Notify the grandchild that it has been added to the group */
+ sem_post(child_sync[i]);
+ }
+ }
+
+ for (int i = 0; i < process_per_group; i++)
+ do {
+ ret = sem_wait(grandchild_sync[i]);
+ } while (ret < 0 && errno == EINTR);
+
+ for (int i = 0; i < process_per_group; i++)
+ sem_post(child_sync[i]);
+ pr_local_info("grandchild-processes start to do sp_free");
+
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("grandchild%d test failed, %d", arg * MAX_PROC_PER_GRP + i, status);
+ ret = -1;
+ }
+ }
+ pr_local_info("exit!!");
+ return ret;
+
+error_out:
+ for (int i = 0; i < process_per_group && childs[i] > 0; i++) {
+ kill(childs[i], SIGKILL);
+ waitpid(childs[i], NULL, 0);
+	}
+ pr_local_info("exit!!");
+ return -1;
+#undef pr_local_info
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_alloc_free_two_process -g group_num -p proc_num -n alloc_num -s alloc_size\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:s:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_PER_GRP || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > NR_GROUP || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of allocations
+ alloc_num = atoi(optarg);
+ if (alloc_num > 100000 || alloc_num <= 0) {
+ printf("alloc number invalid\n");
+ return -1;
+ }
+ break;
+ case 's':
+ alloc_size = atol(optarg);
+ if (alloc_size <= 0) {
+ printf("alloc size invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int testcase1(void)
+{
+#define pr_local_info(fmt, args...) printf("[parent , pid:%d] " fmt "\n", getpid(), ##args)
+
+ int ret = 0;
+ int status = 0;
+ pid_t childs[NR_GROUP];
+
+ pr_local_info("group: %d, process_per_group: %d", group_num, process_per_group);
+
+ for (int i = 0; i < group_num; i++) {
+ pid_t pid = fork();
+ if (pid < 0) {
+ pr_local_info("fork failed, error %d", pid);
+ exit(-1);
+ } else if (pid == 0) {
+ ret = child_process(i);
+ exit(ret);
+ } else {
+ childs[i] = pid;
+
+ pr_local_info("fork child%d, pid: %d", i, pid);
+ }
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ waitpid(childs[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_local_info("child%d test failed, %d", i, status);
+ ret = -1;
+ }
+ }
+
+ pr_local_info("exit!!");
+
+ return ret;
+#undef pr_local_info
+}
+
+static struct testcase_s testcases[] = {
+	TESTCASE_CHILD(testcase1, "Two groups, two processes per group, allocating and freeing memory at the same time; basic verification")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename()
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_args_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_fuzz.c b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_fuzz.c
new file mode 100644
index 000000000000..88606c608217
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_fuzz.c
@@ -0,0 +1,543 @@
+#include "sharepool_lib.h"
+#include <unistd.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <time.h>
+#include <sys/queue.h>
+
+#define MAX_GROUP_NUM 1000
+#define MAX_PROC_NUM 128
+#define MAX_KILL_TIME 1000
+#define FUNCTION_NUM 7 // alloc, free, u2k, unshare, group query, k2u, proc_stat
+
+#define SLEEP_TIME 1 // not used
+#define USLEEP_TIME 2000
+
+#define ALLOC_SIZE PAGE_SIZE
+#define ALLOC_MULTIPLE 10 // alloc size range from 1~10 pages
+#define VMALLOC_SIZE (2 * PAGE_SIZE)
+
+int child[MAX_PROC_NUM];
+int group_ids[MAX_GROUP_NUM];
+
+// use a LIST to keep all alloc areas
+struct alloc_area {
+ unsigned long addr;
+ unsigned long size;
+ LIST_ENTRY(alloc_area) entries;
+ int spg_id;
+};
+struct alloc_list {
+ struct alloc_area *lh_first;
+};
+
+// use a LIST to keep all u2k/k2u areas
+enum sp_share_flag {
+ U2K = 0,
+ K2U = 1
+};
+struct sp_share_area {
+ unsigned long addr;
+ unsigned long size;
+ enum sp_share_flag flag;
+ LIST_ENTRY(sp_share_area) entries;
+};
+struct u2k_list {
+ struct sp_share_area *lh_first;
+};
+struct k2u_list {
+ struct sp_share_area *lh_first;
+};
+
+// use a LIST to keep all vmalloc areas
+struct vmalloc_area {
+ unsigned long addr;
+ unsigned long size;
+ LIST_ENTRY(vmalloc_area) entries;
+};
+struct vmalloc_list {
+ struct vmalloc_area *lh_first;
+};
+
+typedef struct list_arg_ {
+ void *list_head;
+ int *list_length; // we keep the list size manually
+} list_arg;
+
+static int spalloc(list_arg alloc_arg);
+static int spfree(list_arg alloc_arg);
+static int spu2k(list_arg alloc_arg, list_arg u2k_arg);
+static int spunshare(list_arg u2k_arg, list_arg k2u_arg, list_arg vmalloc_arg);
+static int spquery();
+static int spk2u(list_arg k2u_arg, list_arg vmalloc_arg);
+static int spreadproc();
+
+static int random_num(int mod_num);
+static int add_multi_group();
+static int check_multi_group();
+static int parse_opt(int argc, char *argv[]);
+
+static int group_num = 64;
+static int process_per_group = 32;
+static int kill_time = 1000;
+
+int fuzz()
+{
+ int ret = 0;
+ int alloc_list_length = 0, u2k_list_length = 0, vmalloc_list_length = 0, k2u_list_length = 0;
+
+ // initialize lists
+ struct alloc_list alloc_list = LIST_HEAD_INITIALIZER(alloc_list);
+ LIST_INIT(&alloc_list);
+ struct u2k_list u2k_list = LIST_HEAD_INITIALIZER(u2k_list);
+ LIST_INIT(&u2k_list);
+ struct vmalloc_list vmalloc_list = LIST_HEAD_INITIALIZER(vmalloc_list);
+ LIST_INIT(&vmalloc_list);
+ struct k2u_list k2u_list = LIST_HEAD_INITIALIZER(k2u_list);
+ LIST_INIT(&k2u_list);
+
+ list_arg alloc_arg = {
+ .list_head = &alloc_list,
+ .list_length = &alloc_list_length,
+ };
+ list_arg u2k_arg = {
+ .list_head = &u2k_list,
+ .list_length = &u2k_list_length,
+ };
+ list_arg k2u_arg = {
+ .list_head = &k2u_list,
+ .list_length = &k2u_list_length,
+ };
+ list_arg vmalloc_arg = {
+ .list_head = &vmalloc_list,
+ .list_length = &vmalloc_list_length,
+ };
+
+ int repeat_time = 0;
+ // begin to fuzz
+ while (repeat_time++ <= kill_time) {
+ switch(random_num(FUNCTION_NUM)) {
+ case 0:
+ ret = spalloc(alloc_arg);
+ break;
+ case 1:
+ ret = spfree(alloc_arg);
+ break;
+ case 2:
+ ret = spu2k(alloc_arg, u2k_arg);
+ break;
+ case 3:
+ ret = spunshare(u2k_arg, k2u_arg, vmalloc_arg);
+ break;
+ case 4:
+ ret = spquery();
+ break;
+ case 5:
+ ret = spk2u(k2u_arg, vmalloc_arg);
+ break;
+ case 6:
+ ret = spreadproc();
+ break;
+ default:
+ break;
+ }
+ if (ret < 0) {
+ pr_info("test process %d failed.", getpid());
+ return ret;
+ }
+ //sleep(SLEEP_TIME);
+ usleep(USLEEP_TIME);
+ }
+
+ return 0;
+}
+
+int main(int argc, char *argv[])
+{
+ int ret;
+ dev_fd = open_device();
+ if (dev_fd < 0)
+ return -1;
+
+ // get opt args
+ ret = parse_opt(argc, argv);
+ if (ret)
+ return -1;
+ else
+ pr_info("\ngroup: %d, process_per_group: %d, kill time: %d\n", group_num, process_per_group, kill_time);
+
+ // create groups
+ for (int i = 0; i < group_num; i++) {
+ ret = wrap_add_group(getpid(), PROT_READ | PROT_WRITE, i + 1);
+ if (ret < 0) {
+ pr_info("main process add group %d failed.", i + 1);
+ return -1;
+ } else {
+ pr_info("main process add group %d success.", ret);
+ group_ids[i] = ret;
+ }
+ }
+
+ // start test processes
+ for (int i = 0; i < process_per_group; i++) {
+ int pid = fork();
+ if (pid < 0) {
+ pr_info("fork failed");
+ } else if (pid == 0) {
+ // change the seed
+ srand((unsigned)time(NULL) % getpid());
+ // child add groups
+ if (add_multi_group() < 0) {
+ pr_info("child process add all groups failed.");
+ exit(-1);
+ }
+ if (check_multi_group() < 0) {
+ pr_info("child process check all groups failed.");
+ exit(-1);
+ }
+ exit(fuzz());
+ } else {
+ pr_info("fork child %d success.", pid);
+ child[i] = pid;
+ }
+ }
+
+ ret = 0;
+ // waitpid
+ for (int i = 0; i < process_per_group; i++) {
+ int status;
+ waitpid(child[i], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", i, status);
+ ret = -1;
+ } else {
+ pr_info("child%d test success.", i);
+ }
+ }
+
+ close_device(dev_fd);
+ return ret;
+}
+
+static int spalloc(list_arg alloc_arg)
+{
+ int ret;
+ struct alloc_list *list_head = (struct alloc_list *)(alloc_arg.list_head);
+
+ struct sp_alloc_info alloc_info = {
+ .size = ALLOC_SIZE * (random_num(ALLOC_MULTIPLE) + 1),
+ .flag = 0,
+ .spg_id = group_ids[random_num(group_num)],
+ };
+
+ ret = ioctl_alloc(dev_fd, &alloc_info);
+ if (ret < 0) {
+ pr_info("test process %d alloc in group %d failed, errno: %d.",
+ getpid(), alloc_info.spg_id, errno);
+ return -1;
+ }
+
+ // alloc areas are kept in a list
+ pr_info("test process %d alloc in group %d success.", getpid(), alloc_info.spg_id);
+ struct alloc_area *a1 = malloc(sizeof(struct alloc_area));
+ a1->addr = alloc_info.addr;
+ a1->size = alloc_info.size;
+ a1->spg_id = alloc_info.spg_id;
+ LIST_INSERT_HEAD(list_head, a1, entries);
+ ++*alloc_arg.list_length;
+
+ return 0;
+}
+
+static int spfree(list_arg alloc_arg)
+{
+ int ret = 0;
+ struct alloc_list *list_head = (struct alloc_list *)alloc_arg.list_head;
+
+ // return if no alloc areas left
+ if (*alloc_arg.list_length == 0)
+ return 0;
+
+ // free a random one
+ int free_index = random_num(*alloc_arg.list_length);
+ int index = 0;
+ struct alloc_area *alloc_area;
+ LIST_FOREACH(alloc_area, list_head, entries) {
+ if (index++ == free_index) {
+ ret = wrap_sp_free(alloc_area->addr);
+ if (ret < 0) {
+ pr_info("test process %d free %lx failed, %d areas left.",
+ getpid(), alloc_area->addr, *alloc_arg.list_length);
+ return -1;
+ } else {
+ pr_info("test process %d free %lx success, %d areas left.",
+ getpid(), alloc_area->addr, --*alloc_arg.list_length);
+ LIST_REMOVE(alloc_area, entries);
+ }
+ break;
+ }
+ }
+
+ if (--index != free_index)
+ pr_info("index error");
+
+ return 0;
+}
+
+static int spu2k(list_arg alloc_arg, list_arg u2k_arg)
+{
+ int ret;
+ struct alloc_list *alloc_list_head = (struct alloc_list *)alloc_arg.list_head;
+ struct u2k_list *u2k_list_head = (struct u2k_list *)u2k_arg.list_head;
+
+ // return if no alloc areas left
+ if (*alloc_arg.list_length == 0)
+ return 0;
+
+ // u2k a random one
+ int u2k_index = random_num(*alloc_arg.list_length);
+ int index = 0;
+ struct alloc_area *alloc_area;
+ LIST_FOREACH(alloc_area, alloc_list_head, entries) {
+ if (index++ == u2k_index) {
+ struct sp_make_share_info u2k_info = {
+ .uva = alloc_area->addr,
+ .size = alloc_area->size,
+ .pid = getpid(),
+ .spg_id = alloc_area->spg_id,
+ };
+ ret = ioctl_u2k(dev_fd, &u2k_info);
+ if (ret < 0) {
+ pr_info("u2k failed.");
+ return -1;
+ }
+
+ pr_info("test process %d u2k in group %d success.", getpid(), u2k_info.spg_id);
+ struct sp_share_area *u1 = malloc(sizeof(struct sp_share_area));
+ u1->addr = u2k_info.addr;
+ u1->size = u2k_info.size;
+ u1->flag = U2K;
+ LIST_INSERT_HEAD(u2k_list_head, u1, entries);
+ (*u2k_arg.list_length)++;
+
+ break;
+ }
+ }
+
+ if (--index != u2k_index)
+ pr_info("index error");
+
+ return 0;
+}
+
+static int spunshare(list_arg u2k_arg, list_arg k2u_arg, list_arg vmalloc_arg)
+{
+ int ret = 0;
+ struct u2k_list *u2k_list_head = (struct u2k_list*)u2k_arg.list_head;
+ struct k2u_list *k2u_list_head = (struct k2u_list*)k2u_arg.list_head;
+ struct vmalloc_list *vmalloc_list_head = (struct vmalloc_list*)vmalloc_arg.list_head;
+
+ // unshare a random u2k area
+ if (*u2k_arg.list_length == 0)
+ goto k2u_unshare;
+ int u2k_unshare_index = random_num(*u2k_arg.list_length);
+ int index = 0;
+ struct sp_share_area *u2k_area;
+ LIST_FOREACH(u2k_area, u2k_list_head, entries) {
+ if (index++ == u2k_unshare_index) {
+ ret = wrap_unshare(u2k_area->addr, u2k_area->size);
+ if (ret < 0) {
+ pr_info("test process %d unshare uva %lx failed, %d areas left.",
+ getpid(), u2k_area->addr, *u2k_arg.list_length);
+ return -1;
+ } else {
+ pr_info("test process %d unshare uva %lx success, %d areas left.",
+ getpid(), u2k_area->addr, --*u2k_arg.list_length);
+ LIST_REMOVE(u2k_area, entries);
+ }
+ break;
+ }
+ }
+
+k2u_unshare:
+ if (*k2u_arg.list_length == 0)
+ return 0;
+
+ // unshare a random k2u area
+ int k2u_unshare_index = random_num(*k2u_arg.list_length);
+ index = 0;
+ struct sp_share_area *k2u_area;
+ LIST_FOREACH(k2u_area, k2u_list_head, entries) {
+ if (index++ == k2u_unshare_index) {
+ ret = wrap_unshare(k2u_area->addr, k2u_area->size);
+ if (ret < 0) {
+ pr_info("test process %d unshare kva %lx failed, %d areas left.",
+ getpid(), k2u_area->addr, *k2u_arg.list_length);
+ return -1;
+ } else {
+ pr_info("test process %d unshare kva %lx success, %d areas left.",
+ getpid(), k2u_area->addr, --*k2u_arg.list_length);
+ LIST_REMOVE(k2u_area, entries);
+ }
+ break;
+ }
+ }
+
+ // vfree the vmalloc area
+ int vfree_index = k2u_unshare_index;
+ index = 0;
+ struct vmalloc_area *vmalloc_area;
+ LIST_FOREACH(vmalloc_area, vmalloc_list_head, entries) {
+ if (index++ == vfree_index) {
+ wrap_vfree(vmalloc_area->addr);
+ pr_info("test process %d vfreed %lx, %d areas left.",
+ getpid(), vmalloc_area->addr, --*vmalloc_arg.list_length);
+ LIST_REMOVE(vmalloc_area, entries);
+ break;
+ }
+ }
+
+ return 0;
+}
+
+static int spquery()
+{
+ return check_multi_group();
+}
+
+static int spk2u(list_arg k2u_arg, list_arg vmalloc_arg)
+{
+ int ret = 0;
+ struct vmalloc_list *vmalloc_list_head = (struct vmalloc_list*)vmalloc_arg.list_head;
+ struct k2u_list *k2u_list_head = (struct k2u_list*)k2u_arg.list_head;
+
+ // vmalloc
+ struct vmalloc_info vmalloc_info = {
+ .size = VMALLOC_SIZE,
+ };
+	ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+	if (ret < 0) {
+		pr_info("test process %d vmalloc failed, errno: %d", getpid(), errno);
+		return -1;
+	}
+	struct vmalloc_area *v1 = malloc(sizeof(struct vmalloc_area));
+ v1->addr = vmalloc_info.addr;
+ v1->size = vmalloc_info.size;
+ LIST_INSERT_HEAD(vmalloc_list_head, v1, entries);
+ (*vmalloc_arg.list_length)++;
+
+ // k2u to random group
+ struct sp_make_share_info k2u_info = {
+ .kva = vmalloc_info.addr,
+ .size = vmalloc_info.size,
+ .pid = getpid(),
+		.spg_id = random_num(group_num + 1), // 1..group_num: k2u to group; 0: k2u to task
+ };
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("test process %d k2u failed, errno: %d", getpid(), errno);
+ return -1;
+ }
+ pr_info("test process %d k2u in group %d success.", getpid(), k2u_info.spg_id);
+ struct sp_share_area *k1 = malloc(sizeof(struct sp_share_area));
+ k1->addr = k2u_info.addr;
+ k1->size = k2u_info.size;
+ k1->flag = K2U;
+ LIST_INSERT_HEAD(k2u_list_head, k1, entries);
+ (*k2u_arg.list_length)++;
+
+ return 0;
+}
+
+static int spreadproc()
+{
+ return sharepool_log("-----test proc_stat & spa_stat-------");
+}
+
+static int add_multi_group()
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < group_num; i++) {
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int check_multi_group()
+{
+ int ret;
+ // query groups
+ int spg_ids[MAX_GROUP_NUM];
+ int expect_group_num = MAX_GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &expect_group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ } else {
+ for (int i = 0; i < expect_group_num; i++)
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ pr_info("test process %d add all groups success.", getpid());
+ }
+
+ return ret;
+}
+
+static int random_num(int mod_num)
+{
+	return (int)((double)rand() / ((double)RAND_MAX + 1.0) * mod_num);
+}
+
+static void print_help()
+{
+	printf("Usage: ./test_fuzz -g group_num -p proc_num -n loop_num\n");
+}
+
+static int parse_opt(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "p:g:n:")) != -1) {
+ switch (opt) {
+		case 'p': // processes per group
+ process_per_group = atoi(optarg);
+ if (process_per_group > MAX_PROC_NUM || process_per_group <= 0) {
+ printf("process number invalid\n");
+ return -1;
+ }
+ break;
+		case 'g': // number of groups
+ group_num = atoi(optarg);
+ if (group_num > MAX_GROUP_NUM || group_num <= 0) {
+ printf("group number invalid\n");
+ return -1;
+ }
+ break;
+		case 'n': // number of fuzz iterations
+ kill_time = atoi(optarg);
+ if (kill_time > MAX_KILL_TIME || kill_time <= 0) {
+				printf("loop count invalid\n");
+ return -1;
+ }
+ break;
+ default:
+ print_help();
+ return -1;
+ }
+ }
+
+ return 0;
+}
\ No newline at end of file
diff --git a/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_mult_proc_interface.c b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_mult_proc_interface.c
new file mode 100644
index 000000000000..d5966dc1f737
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/stress_test/test_mult_proc_interface.c
@@ -0,0 +1,701 @@
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2021. All rights reserved.
+ * Author: Huawei OS Kernel Lab
+ * Create: Sun Jan 31 14:42:01 2021
+ */
+#include <pthread.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "sharepool_lib.h"
+
+//#define GROUP_ID 1
+#define TIMES 4
+
+#define REPEAT_TIMES 5
+#define THREAD_NUM 15
+#define PROCESS_NUM 15
+
+#define GROUP_NUM 10
+static int group_ids[GROUP_NUM];
+
+void *thread_k2u_task(void *arg)
+{
+ int ret;
+ pid_t pid = getpid();
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ struct sp_make_share_info k2u_info = {0}, k2u_huge_info = {0};
+ char *addr;
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = SP_DVPP;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = SPG_ID_DEFAULT;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = 0;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = SPG_ID_DEFAULT;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u return err is %ld.\n",
+ k2u_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u return addr %lx\n", k2u_info.addr);
+ }
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage return err is %ld.\n",
+ k2u_huge_info.addr);
+ goto error;
+ } else {
+ //pr_info("k2u hugepage return addr %lx\n", k2u_huge_info.addr);
+ }
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ } else {
+		//pr_info("check vmalloc memory success\n");
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ } else {
+		//pr_info("check vmalloc_hugepage memory success\n");
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ //pr_info("\nfinish running thread %lu\n", pthread_self());
+ pthread_exit((void *)0);
+
+error:
+ pthread_exit((void *)1);
+}
+
+/*
+ * alloc - u2k - vmalloc - k2u - unshare - vfree - unshare - free
+ * Cover hugepage / hugepage DVPP / normal page / normal page DVPP.
+ * Multiple groups: run the sequence against every group (query the groups the process belongs to)
+ */
+void *thread_and_process_helper(int group_id)
+{
+ int ret, i;
+ pid_t pid;
+ bool judge_ret = true;
+ struct sp_alloc_info alloc_info[TIMES] = {0};
+ struct sp_make_share_info u2k_info[TIMES] = {0}, k2u_info = {0}, k2u_huge_info = {0};
+ struct vmalloc_info vmalloc_info = {0}, vmalloc_huge_info = {0};
+ char *addr;
+
+ /* check sp group */
+ pid = getpid();
+
+	// hugepage
+ alloc_info[0].flag = SP_HUGEPAGE;
+ alloc_info[0].spg_id = group_id;
+ alloc_info[0].size = 2 * PMD_SIZE;
+
+	// hugepage, DVPP
+ alloc_info[1].flag = SP_DVPP | SP_HUGEPAGE;
+ alloc_info[1].spg_id = group_id;
+ alloc_info[1].size = 2 * PMD_SIZE;
+
+	// normal page, DVPP
+ alloc_info[2].flag = SP_DVPP;
+ alloc_info[2].spg_id = group_id;
+ alloc_info[2].size = 4 * PAGE_SIZE;
+
+	// normal page
+ alloc_info[3].flag = 0;
+ alloc_info[3].spg_id = group_id;
+ alloc_info[3].size = 4 * PAGE_SIZE;
+
+ /* alloc & u2k */
+ for (i = 0; i < TIMES; i++) {
+ /* sp_alloc */
+ ret = ioctl_alloc(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl alloc failed at %dth alloc.\n", i);
+ goto error;
+ } else {
+ if (IS_ERR_VALUE(alloc_info[i].addr)) {
+ pr_info("sp_alloc return err is %ld\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("sp_alloc return addr %lx\n", alloc_info[i].addr);
+ }
+ }
+
+ /* check sp_alloc addr */
+ judge_ret = ioctl_judge_addr(dev_fd, alloc_info[i].addr);
+ if (judge_ret != true) {
+ pr_info("expect a valid share pool addr %lx\n", alloc_info[i].addr);
+ goto error;
+ } else {
+ //pr_info("addr %lx is a valid share pool addr\n", alloc_info[i].addr);
+ }
+
+ /* prepare for u2k */
+ addr = (char *)alloc_info[i].addr;
+ if (alloc_info[i].flag & SP_HUGEPAGE) {
+ addr[0] = 'd';
+ addr[PMD_SIZE - 1] = 'c';
+ addr[PMD_SIZE] = 'b';
+ addr[PMD_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_hugepage_checker = true;
+ } else {
+ addr[0] = 'd';
+ addr[PAGE_SIZE - 1] = 'c';
+ addr[PAGE_SIZE] = 'b';
+ addr[PAGE_SIZE * 2 - 1] = 'a';
+ u2k_info[i].u2k_checker = true;
+ }
+
+ u2k_info[i].uva = alloc_info[i].addr;
+ u2k_info[i].size = alloc_info[i].size;
+ u2k_info[i].pid = pid;
+
+ /* u2k */
+ ret = ioctl_u2k(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("ioctl u2k failed\n");
+ goto error;
+ }
+ if (IS_ERR_VALUE(u2k_info[i].addr)) {
+ pr_info("u2k returned error %ld.\n", u2k_info[i].addr);
+ goto error;
+ }
+ }
+
+ /* prepare for vmalloc */
+ vmalloc_info.size = 3 * PAGE_SIZE;
+ vmalloc_huge_info.size = 3 * PMD_SIZE;
+
+ /* vmalloc */
+ ret = ioctl_vmalloc(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vmalloc small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vmalloc_hugepage(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vmalloc huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* prepare for k2u */
+ k2u_info.kva = vmalloc_info.addr;
+ k2u_info.size = vmalloc_info.size;
+ k2u_info.sp_flags = 0;
+ k2u_info.pid = pid;
+ k2u_info.spg_id = group_id;
+
+ k2u_huge_info.kva = vmalloc_huge_info.addr;
+ k2u_huge_info.size = vmalloc_huge_info.size;
+ k2u_huge_info.sp_flags = SP_DVPP;
+ k2u_huge_info.pid = pid;
+ k2u_huge_info.spg_id = group_id;
+
+ /* k2u */
+ ret = ioctl_k2u(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u error: %d\n", ret);
+ goto error;
+ }
+ if (IS_ERR_VALUE(k2u_info.addr)) {
+ pr_info("k2u returned error %ld.\n", k2u_info.addr);
+ goto error;
+ }
+
+ ret = ioctl_k2u(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("ioctl k2u hugepage error: %d\n", ret);
+ goto error;
+ }
+ if (IS_ERR_VALUE(k2u_huge_info.addr)) {
+ pr_info("k2u hugepage returned error %ld.\n", k2u_huge_info.addr);
+ goto error;
+ }
+
+ /* check k2u memory content */
+ addr = (char *)k2u_info.addr;
+ if (addr[0] != 'a' || addr[PAGE_SIZE - 1] != 'b' ||
+ addr[PAGE_SIZE] != 'c' || addr[2 * PAGE_SIZE - 1] != 'd') {
+ pr_info("check vmalloc memory failed\n");
+ goto error;
+ }
+
+ addr = (char *)k2u_huge_info.addr;
+ if (addr[0] != 'a' || addr[PMD_SIZE - 1] != 'b' ||
+ addr[PMD_SIZE] != 'c' || addr[2 * PMD_SIZE - 1] != 'd') {
+ pr_info("check vmalloc_hugepage memory failed: %c %c %c %c\n",
+ addr[0], addr[PMD_SIZE - 1], addr[PMD_SIZE], addr[2 * PMD_SIZE - 1]);
+ goto error;
+ }
+
+ /* unshare uva */
+ ret = ioctl_unshare(dev_fd, &k2u_info);
+ if (ret < 0) {
+ pr_info("sp unshare uva error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_unshare(dev_fd, &k2u_huge_info);
+ if (ret < 0) {
+ pr_info("sp unshare hugepage uva error: %d\n", ret);
+ goto error;
+ }
+
+ /* kvfree */
+ ret = ioctl_vfree(dev_fd, &vmalloc_info);
+ if (ret < 0) {
+ pr_info("vfree small page error: %d\n", ret);
+ goto error;
+ }
+ ret = ioctl_vfree(dev_fd, &vmalloc_huge_info);
+ if (ret < 0) {
+ pr_info("vfree huge page error: %d\n", ret);
+ goto error;
+ }
+
+ /* unshare kva & sp_free*/
+ for (i = 0; i < TIMES; i++) {
+ /* unshare kva */
+ ret = ioctl_unshare(dev_fd, &u2k_info[i]);
+ if (ret < 0) {
+ pr_info("sp_unshare kva return error: %d\n", ret);
+ goto error;
+ }
+
+ /* sp_free */
+ ret = ioctl_free(dev_fd, &alloc_info[i]);
+ if (ret < 0) {
+ pr_info("sp_free return error: %d\n", ret);
+ goto error;
+ }
+ }
+
+ //close_device(dev_fd);
+ return 0;
+
+error:
+ //close_device(dev_fd);
+ return -1;
+}
+
+/* for each spg, do the helper test routine */
+void *thread(void *arg)
+{
+ int ret;
+ for (int i = 0; i < GROUP_NUM; i++) {
+ ret = thread_and_process_helper(group_ids[i]);
+ if (ret != 0) {
+ pr_info("\nthread %lu finish running with error, spg_id: %d\n", pthread_self(), group_ids[i]);
+ pthread_exit((void *)1);
+ }
+ }
+ //pr_info("\nthread %lu finish running all groups succes", pthread_self());
+ pthread_exit((void *)0);
+}
+
+static int process(void)
+{
+ int ret;
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ }
+
+ for (int i = 0; i < group_num; i++) {
+ ret = thread_and_process_helper(spg_ids[i]);
+ if (ret < 0) {
+ pr_info("thread_and_process_helper failed");
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int add_multi_group_auto(void)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = SPG_ID_AUTO;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int add_multi_group(void)
+{
+ int ret;
+ struct sp_add_group_info ag_info = {
+ .pid = getpid(),
+ .prot = PROT_READ | PROT_WRITE,
+ };
+ for (int i = 0; i < GROUP_NUM; i++) {
+ group_ids[i] = i + 1;
+ ag_info.spg_id = group_ids[i];
+ ret = ioctl_add_group(dev_fd, &ag_info);
+ if (ret < 0) {
+ pr_info("proc%d add group%d failed", getpid(), group_ids[i]);
+ return -1;
+ }
+ }
+
+ return ret;
+}
+
+static int check_multi_group(void)
+{
+ int ret;
+ // query groups
+ int spg_ids[GROUP_NUM];
+ int group_num = GROUP_NUM;
+ struct sp_group_id_by_pid_info find_group_info = {
+ .num = &group_num,
+ .spg_ids = spg_ids,
+ .pid = getpid(),
+ };
+ ret = ioctl_find_group_by_pid(dev_fd, &find_group_info);
+ if (ret < 0) {
+ pr_info("find group id by pid failed");
+ return ret;
+ }
+
+ for (int i = 0; i < GROUP_NUM; i++) {
+ if (spg_ids[i] != group_ids[i]) {
+ pr_info("group id %d not consistent", i);
+ ret = -1;
+ }
+ }
+ return ret;
+}
+
+/* Create 15 threads running the single k2task job; repeat 5 times. */
+static int testcase1(void)
+{
+ int ret = 0;
+ pthread_t tid[THREAD_NUM];
+
+ // add groups
+ if (add_multi_group())
+ return -1;
+
+ //query groups
+ if (check_multi_group())
+ return -1;
+ pr_info("%s add to all groups success.", __FUNCTION__);
+
+ // Create 15 threads running the single k2task job; repeat 5 times.
+ pr_info("\n --- thread k2task multi group test --- ");
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+
+ pr_info("thread k2task test %d, %d runs left.", i + 1, REPEAT_TIMES - i - 1);
+
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread_k2u_task, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ return -1;
+ }
+ }
+
+ for (int j = 0; j < THREAD_NUM; j++) {
+ void *tret;
+ if (pthread_join(tid[j], &tret) != 0) {
+ pr_info("can't join thread %d\n", j);
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+
+ sleep(3);
+ }
+
+ return ret;
+}
+
+/* Create 15 threads running mixed u2k+k2u jobs; repeat 5 times. */
+static int testcase2(void)
+{
+ int ret = 0;
+
+ // add groups
+ if (add_multi_group())
+ return -1;
+
+ //query groups
+ if (check_multi_group())
+ return -1;
+ pr_info("%s add to all groups success.", __FUNCTION__);
+
+ // Create 15 threads running mixed u2k+k2u jobs; repeat 5 times.
+ pthread_t tid[THREAD_NUM];
+ void *tret;
+ pr_info("\n --- thread u2k+k2u multi group test --- \n");
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("thread u2k+k2u test %d, %d runs left.", i + 1, REPEAT_TIMES - i - 1);
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ return -1;
+ }
+ }
+ for (int j = 0; j < THREAD_NUM; j++) {
+ if (pthread_join(tid[j], &tret) != 0) {
+ pr_info("can't join thread %d\n", j);
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+ sleep(3);
+ }
+ return ret;
+}
+
+/* Create 15 processes that join all groups and run mixed u2k+k2u jobs; repeat 5 times. */
+static int testcase3(void)
+{
+ int ret = 0;
+ pid_t childs[PROCESS_NUM];
+
+ // Create 15 processes that join all groups and run mixed u2k+k2u jobs; repeat 5 times.
+ pr_info("\n --- process u2k+k2u multi group test ---\n");
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+ pr_info("process u2k+k2u test %d, %d runs left.", i + 1, REPEAT_TIMES - i - 1);
+ for (int j = 0; j < PROCESS_NUM; j++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+ if (add_multi_group())
+ pr_info("add group failed");
+ pr_info("%s add %dth child to all groups success, %d left",
+ __FUNCTION__, j + 1, PROCESS_NUM - j - 1);
+ exit(process());
+ } else {
+ childs[j] = pid_child;
+ pr_info("fork child%d, pid: %d", j, pid_child);
+ }
+ }
+
+ for (int j = 0; j < PROCESS_NUM; j++) {
+ int status;
+ waitpid(childs[j], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", j, status);
+ ret = -1;
+ }
+ }
+ sleep(1);
+ }
+
+ return ret;
+}
+
+/* Create 15 processes and 15 threads running mixed u2k+k2u jobs; repeat 5 times. */
+static int testcase4(void)
+{
+ pr_info("\n --- process and thread u2k_k2u mix multi group test --- \n");
+
+ int ret = 0;
+
+ // add groups
+ if (add_multi_group())
+ return -1;
+
+ //query groups
+ if (check_multi_group())
+ return -1;
+ pr_info("%s add to all groups success.", __FUNCTION__);
+
+ // Create 15 processes and 15 threads running mixed u2k+k2u jobs; repeat 5 times.
+ for (int i = 0; i < REPEAT_TIMES; i++) {
+
+ pr_info("process+thread u2k+k2u test %d, %d runs left.", i + 1, REPEAT_TIMES - i - 1);
+
+ pthread_t tid[THREAD_NUM];
+ for (int j = 0; j < THREAD_NUM; j++) {
+ ret = pthread_create(tid + j, NULL, thread, NULL);
+ if (ret != 0) {
+ pr_info("create thread error\n");
+ return -1;
+ }
+ }
+
+ pid_t childs[PROCESS_NUM];
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ pid_t pid_child = fork();
+ if (pid_child < 0) {
+ pr_info("fork failed, error %d", pid_child);
+ exit(-1);
+ } else if (pid_child == 0) {
+ // add groups
+ if (add_multi_group())
+ exit(-1);
+ // query groups
+ if (check_multi_group())
+ exit(-1);
+ pr_info("%s add %dth child to all groups success, %d left",
+ __FUNCTION__, k + 1, PROCESS_NUM - k - 1);
+ exit(process());
+ } else {
+ childs[k] = pid_child;
+ //pr_info("fork child%d, pid: %d", k, pid_child);
+ }
+ }
+
+ for (int j = 0; j < THREAD_NUM; j++) {
+ void *tret;
+ if (pthread_join(tid[j], &tret) != 0) {
+ pr_info("can't join thread %d\n", j);
+ ret = -1;
+ }
+ if ((long)tret != 0) {
+ pr_info("testcase execution failed, thread %d exited unexpectedly\n", j);
+ ret = -1;
+ }
+ }
+
+ for (int k = 0; k < PROCESS_NUM; k++) {
+ int status;
+ waitpid(childs[k], &status, 0);
+ if (!WIFEXITED(status) || WEXITSTATUS(status)) {
+ pr_info("child%d test failed, %d", k, status);
+ ret = -1;
+ }
+ }
+
+ sleep(3);
+ }
+ pr_info("\n --- process and thread u2k_k2u mix multi group test success!! --- \n");
+ sleep(3);
+ return ret;
+}
+
+static struct testcase_s testcases[] = {
+ TESTCASE_CHILD(testcase1, "Create 15 threads running the single k2task job; repeat 5 times.")
+ TESTCASE_CHILD(testcase2, "Create 15 threads running mixed u2k+k2u jobs; repeat 5 times.")
+ TESTCASE_CHILD(testcase3, "Create 15 processes that join all groups and run mixed u2k+k2u jobs; repeat 5 times.")
+ TESTCASE_CHILD(testcase4, "Create 15 processes and 15 threads running mixed u2k+k2u jobs; repeat 5 times.")
+};
+
+struct test_group test_group = {
+ .testcases = testcases,
+};
+
+static int get_filename(void)
+{
+ extract_filename(test_group.name, __FILE__);
+ return 0;
+}
+
+#include "default_main.c"
diff --git a/tools/testing/sharepool/testcase/test_mult_process/test_mult_process.sh b/tools/testing/sharepool/testcase/test_mult_process/test_mult_process.sh
new file mode 100755
index 000000000000..06af521aadb2
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/test_mult_process.sh
@@ -0,0 +1,53 @@
+#!/bin/sh
+
+set -x
+
+echo 'test_u2k_add_and_kill -g 10 -p 20
+ test_alloc_add_and_kill -n 200
+ test_add_multi_cases
+ test_max_group_per_process
+ test_mult_alloc_and_add_group
+ test_mult_process_thread_exit
+ test_mult_thread_add_group
+ test_add_group_and_print
+ test_concurrent_debug -p 10 -g 5 -n 100
+ test_proc_interface_process
+ test_mult_k2u
+ test_mult_pass_through
+ test_mult_thread_k2u
+ test_mult_u2k -n 50
+ test_mult_u2k3
+ test_mult_u2k4
+ test_alloc_free_two_process -p 10 -g 5 -n 100 -s 1000
+ test_fuzz
+ test_mult_proc_interface' | while read line
+do
+ flag=0
+ ./test_mult_process/$line
+ if [ $? -ne 0 ] ;then
+ echo "testcase test_mult_process/$line failed"
+ flag=1
+ fi
+ sleep 3
+ # dump spa_stat
+ ret=`cat /proc/sharepool/spa_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/spa_stat
+ echo spa_stat not clean
+ flag=1
+ fi
+ # dump proc_stat
+ ret=`cat /proc/sharepool/proc_stat | wc -l`
+ if [ $ret -ge 15 ] ;then
+ cat /proc/sharepool/proc_stat
+ echo proc_stat not clean
+ flag=1
+ fi
+ # exit if anything leaked
+ if [ $flag -eq 1 ] ;then
+ exit 1
+ fi
+ echo "testcase test_mult_process/$line success"
+ cat /proc/meminfo
+ free -m
+done
diff --git a/tools/testing/sharepool/testcase/test_mult_process/test_proc_interface.sh b/tools/testing/sharepool/testcase/test_mult_process/test_proc_interface.sh
new file mode 100755
index 000000000000..03040d13dd87
--- /dev/null
+++ b/tools/testing/sharepool/testcase/test_mult_process/test_proc_interface.sh
@@ -0,0 +1,19 @@
+#!/bin/sh
+
+set -x
+
+./test_mult_process/test_proc_interface_process &
+
+for i in $(seq 1 5) # 5 processes read
+do {
+ for j in $(seq 1 40) # duration: 40x2 = 80 seconds
+ do
+ cat /proc/sharepool/proc_stat
+ cat /proc/sharepool/spa_stat
+ cat /proc/sharepool/proc_overview
+ sleep 2
+ done
+} &
+done
+wait
+
--
2.43.0