From: Amir Goldstein <amir73il@gmail.com>
stable inclusion
from stable-v5.10.121
commit f929416d5c9c9908659cde74a3e29e84c8ffe418
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5L6CQ
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
[ Upstream commit 623af4f538b5df9b416e1b82f720af7371b4c771 ]
Commit 6960b0d909cd ("fsnotify: change locking order") changed some of the mark_mutex locks in the direct reclaim path to use:

  mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
This change is explained: "...It uses nested locking to avoid deadlock in case we do the final iput() on an inode which still holds marks and thus would take the mutex again when calling fsnotify_inode_delete() in destroy_inode()."
The problem is that the mutex_lock_nested() is not a nested lock at all. In fact, it has the opposite effect of preventing lockdep from warning about a very possible deadlock.
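The effect is easiest to see in isolation. Below is a minimal, self-contained sketch (a hypothetical demo module written for illustration only; demo_mark_mutex, demo_inode_delete() and demo_init() are made-up names, not part of this patch or of fsnotify) of how the annotation silences lockdep on exactly the call chain described above:

  #include <linux/module.h>
  #include <linux/mutex.h>
  #include <linux/lockdep.h>

  /* Stand-in for group->mark_mutex. */
  static DEFINE_MUTEX(demo_mark_mutex);

  /* Plays the role of fsnotify_inode_delete() run from a final iput(). */
  static void demo_inode_delete(void)
  {
          /*
           * SINGLE_DEPTH_NESTING places this acquisition in a different
           * subclass of the same lock class, so lockdep treats it as a
           * legitimate nested acquisition and never reports the possible
           * recursive locking against the plain mutex_lock() in demo_init().
           */
          mutex_lock_nested(&demo_mark_mutex, SINGLE_DEPTH_NESTING);
          mutex_unlock(&demo_mark_mutex);
  }

  static int __init demo_init(void)
  {
          mutex_lock(&demo_mark_mutex);
          /*
           * If a final iput() happened here, demo_inode_delete() would run
           * while demo_mark_mutex is already held and block forever; with
           * the nested annotation lockdep stays silent about it.
           */
          mutex_unlock(&demo_mark_mutex);

          /* Called outside the critical section so the module stays loadable. */
          demo_inode_delete();
          return 0;
  }

  static void __exit demo_exit(void)
  {
  }

  module_init(demo_init);
  module_exit(demo_exit);
  MODULE_LICENSE("GPL");

Replacing the inner mutex_lock_nested() with a plain mutex_lock(), as this patch does for the fsnotify paths, lets lockdep report the recursion the first time such a chain is exercised.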
Due to these wrong annotations, a deadlock that was introduced with nfsd filecache in kernel v5.4 went unnoticed in v5.4.y for over two years until it was reported recently by Khazhismel Kumykov, only to find out that the deadlock was already fixed in kernel v5.5.
Fix the wrong lockdep annotations.
Cc: Khazhismel Kumykov <khazhy@google.com>
Fixes: 6960b0d909cd ("fsnotify: change locking order")
Link: https://lore.kernel.org/r/20220321112310.vpr7oxro2xkz5llh@quack3.lan/
Link: https://lore.kernel.org/r/20220422120327.3459282-4-amir73il@gmail.com
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
---
 fs/notify/mark.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/notify/mark.c b/fs/notify/mark.c
index 8387937b9d01..5b44be5f93dd 100644
--- a/fs/notify/mark.c
+++ b/fs/notify/mark.c
@@ -430,7 +430,7 @@ void fsnotify_free_mark(struct fsnotify_mark *mark)
 void fsnotify_destroy_mark(struct fsnotify_mark *mark,
 			   struct fsnotify_group *group)
 {
-	mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
+	mutex_lock(&group->mark_mutex);
 	fsnotify_detach_mark(mark);
 	mutex_unlock(&group->mark_mutex);
 	fsnotify_free_mark(mark);
@@ -742,7 +742,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
 	 * move marks to free to to_free list in one go and then free marks in
 	 * to_free list one by one.
 	 */
-	mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
+	mutex_lock(&group->mark_mutex);
 	list_for_each_entry_safe(mark, lmark, &group->marks_list, g_list) {
 		if ((1U << mark->connector->type) & type_mask)
 			list_move(&mark->g_list, &to_free);
@@ -751,7 +751,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
 
 clear:
 	while (1) {
-		mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
+		mutex_lock(&group->mark_mutex);
 		if (list_empty(head)) {
 			mutex_unlock(&group->mark_mutex);
 			break;