From: Amir Goldstein <amir73il@gmail.com>
stable inclusion
from stable-5.10.67
commit ad3ea16746cc68307165d52e22a3f75015ccd16e
bugzilla: 182619 https://gitee.com/openeuler/kernel/issues/I4EWO7
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit b8cd0ee8cda68a888a317991c1e918a8cba1a568 upstream.
Event merges are expensive when event queue size is large, so limit the linear search to 128 merge tests.
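For illustration only, here is a minimal user-space sketch of the bounded
backward scan that the patch below applies to fanotify's notification
list. None of these names exist in the kernel; struct event, try_merge()
and MAX_MERGE_EVENTS are hypothetical stand-ins:

#include <stdio.h>

#define MAX_MERGE_EVENTS 128	/* illustrative cap, mirrors the patch */

struct event {
	unsigned int mask;
	int object_id;		/* stand-in for fanotify_should_merge() */
	struct event *prev;	/* queue is walked newest to oldest */
};

/* Return 1 if @new was merged into an already-queued event. */
static int try_merge(struct event *tail, struct event *new)
{
	int i = 0;

	for (struct event *e = tail; e; e = e->prev) {
		if (++i > MAX_MERGE_EVENTS)
			break;			/* bound the linear search */
		if (e->object_id == new->object_id) {
			e->mask |= new->mask;	/* coalesce the two events */
			return 1;
		}
	}
	return 0;
}

int main(void)
{
	struct event a = { .mask = 0x1, .object_id = 42, .prev = NULL };
	struct event b = { .mask = 0x2, .object_id = 42 };

	printf("merged=%d mask=%#x\n", try_merge(&a, &b), a.mask);
	return 0;
}

Capping the scan trades a few missed merges for a hard upper bound on the
per-event CPU cost, which is exactly the trade-off the backport notes
below accept.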
[Stable backport notes] The following statement from the upstream commit
is irrelevant for this backport:

    "In combination with 128 size hash table, there is a potential
    to merge with up to 16K events in the hashed queue."

[Stable backport notes] The problem is as old as fanotify and is
described in the linked cover letter "Performance improvement for
fanotify merge". This backported patch fixes the performance issue at
the cost of merging fewer potential events. Fixing the performance
issue is more important than preserving the "event merge" behavior,
which was not predictable in any way that applications could rely on.
Link: https://lore.kernel.org/r/20210304104826.3993892-6-amir73il@gmail.com
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/linux-fsdevel/20210202162010.305971-1-amir73il@gmail...
Link: https://lore.kernel.org/linux-fsdevel/20210915163334.GD6166@quack2.suse.cz/
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
---
 fs/notify/fanotify/fanotify.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
index 1192c9953620..c3af99e94f1d 100644
--- a/fs/notify/fanotify/fanotify.c
+++ b/fs/notify/fanotify/fanotify.c
@@ -129,11 +129,15 @@ static bool fanotify_should_merge(struct fsnotify_event *old_fsn,
 	return false;
 }
 
+/* Limit event merges to limit CPU overhead per event */
+#define FANOTIFY_MAX_MERGE_EVENTS 128
+
 /* and the list better be locked by something too! */
 static int fanotify_merge(struct list_head *list, struct fsnotify_event *event)
 {
 	struct fsnotify_event *test_event;
 	struct fanotify_event *new;
+	int i = 0;
 
 	pr_debug("%s: list=%p event=%p\n", __func__, list, event);
 	new = FANOTIFY_E(event);
@@ -147,6 +151,8 @@ static int fanotify_merge(struct list_head *list, struct fsnotify_event *event)
 		return 0;
 
 	list_for_each_entry_reverse(test_event, list, list) {
+		if (++i > FANOTIFY_MAX_MERGE_EVENTS)
+			break;
 		if (fanotify_should_merge(test_event, event)) {
			FANOTIFY_E(test_event)->mask |= new->mask;
 			return 1;
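For context on the "event merge" behavior discussed in the backport
notes, here is a small standalone listener using the standard fanotify(7)
API (ordinary user-space usage, not part of this patch). If the same file
is modified twice before the listener drains its queue, the two
FAN_MODIFY events are normally coalesced into a single record:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/fanotify.h>

int main(int argc, char *argv[])
{
	struct fanotify_event_metadata buf[200], *md;
	ssize_t len;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <mount point>\n", argv[0]);
		return EXIT_FAILURE;
	}

	/* Requires CAP_SYS_ADMIN (typically root). */
	fd = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY);
	if (fd < 0) {
		perror("fanotify_init");
		return EXIT_FAILURE;
	}

	/* Watch for modifications anywhere on the given mount. */
	if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_MOUNT,
			  FAN_MODIFY, AT_FDCWD, argv[1]) < 0) {
		perror("fanotify_mark");
		return EXIT_FAILURE;
	}

	/*
	 * Modify the same file twice (e.g. two quick appends) before
	 * this read() drains the queue: the kernel will usually have
	 * merged the two FAN_MODIFY events into one record.
	 */
	len = read(fd, buf, sizeof(buf));
	for (md = buf; FAN_EVENT_OK(md, len); md = FAN_EVENT_NEXT(md, len)) {
		printf("mask=0x%llx fd=%d\n",
		       (unsigned long long)md->mask, md->fd);
		if (md->fd >= 0)
			close(md->fd);	/* each event carries an open fd */
	}
	return EXIT_SUCCESS;
}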