From: Carlos Llamas <cmllamas@google.com>
stable inclusion
from stable-v5.10.209
commit 7e7a0d86542b0ea903006d3f42f33c4f7ead6918
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I99JWJ
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 9a9ab0d963621d9d12199df9817e66982582d5a5 upstream.
Task A calls binder_update_page_range() to allocate and insert pages on a remote address space from Task B. For this, Task A pins the remote mm via mmget_not_zero() first. This can race with Task B's do_exit(), and the final mmput() refcount decrement will then come from Task A.
  Task A            |   Task B
  ------------------+------------------
  mmget_not_zero()  |
                    |  do_exit()
                    |    exit_mm()
                    |      mmput()
  mmput()           |
    exit_mmap()     |
      remove_vma()  |
        fput()      |
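The pinning pattern in binder_update_page_range() that sets this race up looks roughly like this (a simplified sketch of the v5.10 code path, not the literal source):

  struct mm_struct *mm = NULL;
  struct vm_area_struct *vma = NULL;

  if (need_mm && mmget_not_zero(alloc->vma_vm_mm))
          mm = alloc->vma_vm_mm;  /* Task A pins Task B's mm */

  if (mm) {
          mmap_write_lock(mm);
          vma = alloc->vma;
  }

  /* ... allocate pages and insert them into the remote vma ... */

  if (mm) {
          mmap_write_unlock(mm);
          mmput(mm);              /* may drop the last mm_users reference
                                     if Task B has already exited */
  }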
In this case, the work of ____fput() from Task B is queued up in Task A as TWA_RESUME. So in theory, Task A returns to userspace and the cleanup work gets executed. However, Task A instead sleeps, waiting for a reply from Task B that never comes (it's dead).
This means the binder_deferred_release() is blocked until an unrelated binder event forces Task A to go back to userspace. All the associated death notifications will also be delayed until then.
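For context, the deferral described above comes from fput(): the final cleanup is queued as task work on whichever task drops the last file reference. A simplified sketch of that path (modeled on fs/file_table.c; details vary across kernel versions):

  void fput(struct file *file)
  {
          if (atomic_long_dec_and_test(&file->f_count)) {
                  struct task_struct *task = current;

                  if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) {
                          init_task_work(&file->f_u.fu_rcuhead, ____fput);
                          /* ____fput() runs only once 'task' returns to userspace */
                          if (!task_work_add(task, &file->f_u.fu_rcuhead, TWA_RESUME))
                                  return;
                  }
                  /* otherwise fall back to the delayed_fput workqueue */
          }
  }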
In order to fix this, use mmput_async(), which will schedule the work on the corresponding mm->async_put_work WQ instead of on Task A.
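mmput_async() moves the final __mmput() to a workqueue instead of running it in the caller's context; a simplified sketch of the kernel/fork.c implementation:

  static void mmput_async_fn(struct work_struct *work)
  {
          struct mm_struct *mm = container_of(work, struct mm_struct,
                                              async_put_work);

          __mmput(mm);
  }

  void mmput_async(struct mm_struct *mm)
  {
          if (atomic_dec_and_test(&mm->mm_users)) {
                  INIT_WORK(&mm->async_put_work, mmput_async_fn);
                  schedule_work(&mm->async_put_work);
          }
  }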
Fixes: 457b9a6f09f0 ("Staging: android: add binder driver")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-4-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Wang Hai <wanghai38@huawei.com>
---
 drivers/android/binder_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 3c93e6c05c4d..c534b332c8d3 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -271,7 +271,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
         }
         if (mm) {
                 mmap_write_unlock(mm);
-                mmput(mm);
+                mmput_async(mm);
         }
         return 0;
 
@@ -304,7 +304,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 err_no_vma:
         if (mm) {
                 mmap_write_unlock(mm);
-                mmput(mm);
+                mmput_async(mm);
         }
         return vma ? -ENOMEM : -ESRCH;
 }