CVE-2023-32254 CVE-2023-32246 CVE-2023-32256 CVE-2023-32258
Namjae Jeon (3):
  ksmbd: fix racy issue under cocurrent smb2 tree disconnect
  ksmbd: call rcu_barrier() in ksmbd_server_exit()
  ksmbd: fix racy issue from smb2 close and logoff with multichannel

 fs/ksmbd/connection.c        | 54 +++++++++++++++++++++++++++---------
 fs/ksmbd/connection.h        | 19 +++++++++++--
 fs/ksmbd/mgmt/tree_connect.c | 13 ++++++++-
 fs/ksmbd/mgmt/tree_connect.h |  3 ++
 fs/ksmbd/mgmt/user_session.c | 36 ++++++++++++++++++++----
 fs/ksmbd/server.c            |  1 +
 fs/ksmbd/smb2pdu.c           | 24 ++++++++--------
 7 files changed, 116 insertions(+), 34 deletions(-)
From: Namjae Jeon <linkinjeon@kernel.org>

mainline inclusion
from mainline-v6.4-rc1
commit 30210947a343b6b3ca13adc9bfc88e1543e16dd5
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I74FJA
CVE: CVE-2023-32254
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
There is a use-after-free (UAF) issue under concurrent smb2 tree disconnect. This patch introduces a TREE_CONN_EXPIRE flag for the tcon to avoid concurrent access.
Cc: stable@vger.kernel.org
Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20592
Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: ZhaoLong Wang <wangzhaolong1@huawei.com>
---
 fs/ksmbd/mgmt/tree_connect.c | 10 +++++++++-
 fs/ksmbd/mgmt/tree_connect.h |  3 +++
 fs/ksmbd/smb2pdu.c           |  3 ++-
 3 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/fs/ksmbd/mgmt/tree_connect.c b/fs/ksmbd/mgmt/tree_connect.c
index dd262daa2c4a..e3f62df65d3a 100644
--- a/fs/ksmbd/mgmt/tree_connect.c
+++ b/fs/ksmbd/mgmt/tree_connect.c
@@ -95,7 +95,15 @@ int ksmbd_tree_conn_disconnect(struct ksmbd_session *sess,
 struct ksmbd_tree_connect *ksmbd_tree_conn_lookup(struct ksmbd_session *sess,
 						  unsigned int id)
 {
-	return xa_load(&sess->tree_conns, id);
+	struct ksmbd_tree_connect *tcon;
+
+	tcon = xa_load(&sess->tree_conns, id);
+	if (tcon) {
+		if (test_bit(TREE_CONN_EXPIRE, &tcon->status))
+			tcon = NULL;
+	}
+
+	return tcon;
 }

 struct ksmbd_share_config *ksmbd_tree_conn_share(struct ksmbd_session *sess,
diff --git a/fs/ksmbd/mgmt/tree_connect.h b/fs/ksmbd/mgmt/tree_connect.h
index 71e50271dccf..5ef006c7d1cc 100644
--- a/fs/ksmbd/mgmt/tree_connect.h
+++ b/fs/ksmbd/mgmt/tree_connect.h
@@ -14,6 +14,8 @@ struct ksmbd_share_config;
 struct ksmbd_user;
 struct ksmbd_conn;

+#define TREE_CONN_EXPIRE		1
+
 struct ksmbd_tree_connect {
 	int id;

@@ -25,6 +27,7 @@ struct ksmbd_tree_connect {

 	int maximal_access;
 	bool posix_extensions;
+	unsigned long status;
 };

 struct ksmbd_tree_conn_status {
diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
index 546fed3daa25..d044a0faf9b6 100644
--- a/fs/ksmbd/smb2pdu.c
+++ b/fs/ksmbd/smb2pdu.c
@@ -2101,11 +2101,12 @@ int smb2_tree_disconnect(struct ksmbd_work *work)

 	ksmbd_debug(SMB, "request\n");

-	if (!tcon) {
+	if (!tcon || test_and_set_bit(TREE_CONN_EXPIRE, &tcon->status)) {
 		struct smb2_tree_disconnect_req *req =
 			smb2_get_msg(work->request_buf);

 		ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId);
+		rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED;
 		smb2_set_err_rsp(work);
 		return 0;
From: Namjae Jeon <linkinjeon@kernel.org>

mainline inclusion
from mainline-v6.4-rc1
commit eb307d09fe15844fdaebeb8cc8c9b9e925430aa5
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I74FIB
CVE: CVE-2023-32246
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
The bug is triggered by a race between closing a connection and rmmod. In ksmbd, rcu_barrier() is not called at module unload time, so nothing prevents ksmbd from getting unloaded while it still has RCU callbacks pending. This can lead to unintended execution of kernel code locally and can be used to defeat protections such as Kernel Lockdown.
Cc: stable@vger.kernel.org
Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20477
Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: ZhaoLong Wang <wangzhaolong1@huawei.com>
---
 fs/ksmbd/server.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/ksmbd/server.c b/fs/ksmbd/server.c
index e39d31921cd9..ebd73abbabe3 100644
--- a/fs/ksmbd/server.c
+++ b/fs/ksmbd/server.c
@@ -625,6 +625,7 @@ static int __init ksmbd_server_init(void)
 static void __exit ksmbd_server_exit(void)
 {
 	ksmbd_server_shutdown();
+	rcu_barrier();
 	ksmbd_release_inode_hash();
 }
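
The rule applied above is general: any module that queues call_rcu() callbacks must drain them with rcu_barrier() in its exit path, otherwise a callback can fire after the module text has been unmapped. A minimal, non-ksmbd module sketch of that rule, using hypothetical demo_* names:

	/*
	 * Minimal kernel-module sketch (not ksmbd code) of the rcu_barrier()
	 * rule: every call_rcu() callback must have finished before the
	 * module is allowed to go away.
	 */
	#include <linux/module.h>
	#include <linux/slab.h>
	#include <linux/rcupdate.h>

	struct demo_obj {
		struct rcu_head rcu;
		int value;
	};

	static struct demo_obj *obj;

	static void demo_free_rcu(struct rcu_head *head)
	{
		kfree(container_of(head, struct demo_obj, rcu));
	}

	static int __init demo_init(void)
	{
		obj = kzalloc(sizeof(*obj), GFP_KERNEL);
		return obj ? 0 : -ENOMEM;
	}

	static void __exit demo_exit(void)
	{
		if (obj)
			call_rcu(&obj->rcu, demo_free_rcu);
		/*
		 * Wait for every callback queued by this module to complete;
		 * without this, demo_free_rcu() could run from memory that no
		 * longer contains the module's code.
		 */
		rcu_barrier();
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");
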
From: Namjae Jeon <linkinjeon@kernel.org>

mainline inclusion
from mainline-v6.4-rc1
commit abcc506a9a71976a8b4c9bf3ee6efd13229c1e19
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7BIK2
CVE: CVE-2023-32256, CVE-2023-32258
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
When an smb client sends concurrent smb2 close and logoff requests over a multichannel connection, it can cause a racy issue. The logoff request frees the tcon and can cause UAF issues in smb2 close. When receiving a logoff request with multichannel, ksmbd should wait until all remaining requests complete, as well as the ones in the current connection, and only then mark the session expired.
Cc: stable@vger.kernel.org
Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-20796 ZDI-CAN-20595
Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: ZhaoLong Wang <wangzhaolong1@huawei.com>

Conflicts:
	fs/ksmbd/connection.c
---
 fs/ksmbd/connection.c        | 54 +++++++++++++++++++++++++++---------
 fs/ksmbd/connection.h        | 19 +++++++++++--
 fs/ksmbd/mgmt/tree_connect.c |  3 ++
 fs/ksmbd/mgmt/user_session.c | 36 ++++++++++++++++++++----
 fs/ksmbd/smb2pdu.c           | 21 +++++++-------
 5 files changed, 101 insertions(+), 32 deletions(-)
diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
index 902856fcd72c..d743c544c1e6 100644
--- a/fs/ksmbd/connection.c
+++ b/fs/ksmbd/connection.c
@@ -20,7 +20,7 @@ static DEFINE_MUTEX(init_lock);
 static struct ksmbd_conn_ops default_conn_ops;

 LIST_HEAD(conn_list);
-DEFINE_RWLOCK(conn_list_lock);
+DECLARE_RWSEM(conn_list_lock);

 /**
  * ksmbd_conn_free() - free resources of the connection instance
@@ -32,9 +32,9 @@ DEFINE_RWLOCK(conn_list_lock);
  */
 void ksmbd_conn_free(struct ksmbd_conn *conn)
 {
-	write_lock(&conn_list_lock);
+	down_write(&conn_list_lock);
 	list_del(&conn->conns_list);
-	write_unlock(&conn_list_lock);
+	up_write(&conn_list_lock);

 	xa_destroy(&conn->sessions);
 	kvfree(conn->request_buf);
@@ -78,9 +78,9 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
 	spin_lock_init(&conn->llist_lock);
 	INIT_LIST_HEAD(&conn->lock_list);

-	write_lock(&conn_list_lock);
+	down_write(&conn_list_lock);
 	list_add(&conn->conns_list, &conn_list);
-	write_unlock(&conn_list_lock);
+	up_write(&conn_list_lock);
 	return conn;
 }
@@ -89,7 +89,7 @@ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c)
 	struct ksmbd_conn *t;
 	bool ret = false;

-	read_lock(&conn_list_lock);
+	down_read(&conn_list_lock);
 	list_for_each_entry(t, &conn_list, conns_list) {
 		if (memcmp(t->ClientGUID, c->ClientGUID, SMB2_CLIENT_GUID_SIZE))
 			continue;
@@ -97,7 +97,7 @@ bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c)
 		ret = true;
 		break;
 	}
-	read_unlock(&conn_list_lock);
+	up_read(&conn_list_lock);
 	return ret;
 }
@@ -153,9 +153,37 @@ void ksmbd_conn_unlock(struct ksmbd_conn *conn)
 	mutex_unlock(&conn->srv_mutex);
 }

-void ksmbd_conn_wait_idle(struct ksmbd_conn *conn)
+void ksmbd_all_conn_set_status(u64 sess_id, u32 status)
 {
+	struct ksmbd_conn *conn;
+
+	down_read(&conn_list_lock);
+	list_for_each_entry(conn, &conn_list, conns_list) {
+		if (conn->binding || xa_load(&conn->sessions, sess_id))
+			WRITE_ONCE(conn->status, status);
+	}
+	up_read(&conn_list_lock);
+}
+
+void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id)
+{
+	struct ksmbd_conn *bind_conn;
+
 	wait_event(conn->req_running_q, atomic_read(&conn->req_running) < 2);
+
+	down_read(&conn_list_lock);
+	list_for_each_entry(bind_conn, &conn_list, conns_list) {
+		if (bind_conn == conn)
+			continue;
+
+		if ((bind_conn->binding || xa_load(&bind_conn->sessions, sess_id)) &&
+		    !ksmbd_conn_releasing(bind_conn) &&
+		    atomic_read(&bind_conn->req_running)) {
+			wait_event(bind_conn->req_running_q,
+				   atomic_read(&bind_conn->req_running) == 0);
+		}
+	}
+	up_read(&conn_list_lock);
 }
 int ksmbd_conn_write(struct ksmbd_work *work)
@@ -361,10 +389,10 @@ int ksmbd_conn_handler_loop(void *p)
 	}

 out:
+	ksmbd_conn_set_releasing(conn);
 	/* Wait till all reference dropped to the Server object*/
 	wait_event(conn->r_count_q, atomic_read(&conn->r_count) == 0);

-	unload_nls(conn->local_nls);
 	if (default_conn_ops.terminate_fn)
 		default_conn_ops.terminate_fn(conn);
@@ -406,7 +434,7 @@ static void stop_sessions(void)
 		struct ksmbd_transport *t;
 again:
-	read_lock(&conn_list_lock);
+	down_read(&conn_list_lock);
 	list_for_each_entry(conn, &conn_list, conns_list) {
 		struct task_struct *task;

@@ -417,12 +445,12 @@ static void stop_sessions(void)
 				    task->comm, task_pid_nr(task));
 		ksmbd_conn_set_exiting(conn);
 		if (t->ops->shutdown) {
-			read_unlock(&conn_list_lock);
+			up_read(&conn_list_lock);
 			t->ops->shutdown(t);
-			read_lock(&conn_list_lock);
+			down_read(&conn_list_lock);
 		}
 	}
-	read_unlock(&conn_list_lock);
+	up_read(&conn_list_lock);

 	if (!list_empty(&conn_list)) {
 		schedule_timeout_interruptible(HZ / 10); /* 100ms */
diff --git a/fs/ksmbd/connection.h b/fs/ksmbd/connection.h
index d493dbd200b1..2e3d96e63953 100644
--- a/fs/ksmbd/connection.h
+++ b/fs/ksmbd/connection.h
@@ -25,7 +25,8 @@ enum {
 	KSMBD_SESS_GOOD,
 	KSMBD_SESS_EXITING,
 	KSMBD_SESS_NEED_RECONNECT,
-	KSMBD_SESS_NEED_NEGOTIATE
+	KSMBD_SESS_NEED_NEGOTIATE,
+	KSMBD_SESS_RELEASING
 };
 struct ksmbd_stats {
@@ -134,10 +135,10 @@ struct ksmbd_transport {
 #define KSMBD_TCP_PEER_SOCKADDR(c)	((struct sockaddr *)&((c)->peer_addr))

 extern struct list_head conn_list;
-extern rwlock_t conn_list_lock;
+extern struct rw_semaphore conn_list_lock;

 bool ksmbd_conn_alive(struct ksmbd_conn *conn);
-void ksmbd_conn_wait_idle(struct ksmbd_conn *conn);
+void ksmbd_conn_wait_idle(struct ksmbd_conn *conn, u64 sess_id);
 struct ksmbd_conn *ksmbd_conn_alloc(void);
 void ksmbd_conn_free(struct ksmbd_conn *conn);
 bool ksmbd_conn_lookup_dialect(struct ksmbd_conn *c);
@@ -183,6 +184,11 @@ static inline bool ksmbd_conn_exiting(struct ksmbd_conn *conn)
 	return READ_ONCE(conn->status) == KSMBD_SESS_EXITING;
 }

+static inline bool ksmbd_conn_releasing(struct ksmbd_conn *conn)
+{
+	return READ_ONCE(conn->status) == KSMBD_SESS_RELEASING;
+}
+
 static inline void ksmbd_conn_set_new(struct ksmbd_conn *conn)
 {
 	WRITE_ONCE(conn->status, KSMBD_SESS_NEW);
@@ -207,4 +213,11 @@ static inline void ksmbd_conn_set_exiting(struct ksmbd_conn *conn)
 {
 	WRITE_ONCE(conn->status, KSMBD_SESS_EXITING);
 }
+
+static inline void ksmbd_conn_set_releasing(struct ksmbd_conn *conn)
+{
+	WRITE_ONCE(conn->status, KSMBD_SESS_RELEASING);
+}
+
+void ksmbd_all_conn_set_status(u64 sess_id, u32 status);
 #endif /* __CONNECTION_H__ */
diff --git a/fs/ksmbd/mgmt/tree_connect.c b/fs/ksmbd/mgmt/tree_connect.c
index e3f62df65d3a..1cdae978d1b0 100644
--- a/fs/ksmbd/mgmt/tree_connect.c
+++ b/fs/ksmbd/mgmt/tree_connect.c
@@ -123,6 +123,9 @@ int ksmbd_tree_conn_session_logoff(struct ksmbd_session *sess)
 	struct ksmbd_tree_connect *tc;
 	unsigned long id;

+	if (!sess)
+		return -EINVAL;
+
 	xa_for_each(&sess->tree_conns, id, tc)
 		ret |= ksmbd_tree_conn_disconnect(sess, tc);
 	xa_destroy(&sess->tree_conns);
diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c
index cf83d8c3689b..d3f8e9d93c3b 100644
--- a/fs/ksmbd/mgmt/user_session.c
+++ b/fs/ksmbd/mgmt/user_session.c
@@ -151,10 +151,6 @@ void ksmbd_session_destroy(struct ksmbd_session *sess)
 	if (!sess)
 		return;
-	down_write(&sessions_table_lock);
-	hash_del(&sess->hlist);
-	up_write(&sessions_table_lock);
-
 	if (sess->user)
 		ksmbd_free_user(sess->user);

@@ -185,15 +181,18 @@ static void ksmbd_expire_session(struct ksmbd_conn *conn)
 	unsigned long id;
 	struct ksmbd_session *sess;

+	down_write(&sessions_table_lock);
 	xa_for_each(&conn->sessions, id, sess) {
 		if (sess->state != SMB2_SESSION_VALID ||
 		    time_after(jiffies,
 			       sess->last_active + SMB2_SESSION_TIMEOUT)) {
 			xa_erase(&conn->sessions, sess->id);
+			hash_del(&sess->hlist);
 			ksmbd_session_destroy(sess);
 			continue;
 		}
 	}
+	up_write(&sessions_table_lock);
 }
 int ksmbd_session_register(struct ksmbd_conn *conn,
@@ -205,15 +204,16 @@ int ksmbd_session_register(struct ksmbd_conn *conn,
 	return xa_err(xa_store(&conn->sessions, sess->id, sess, GFP_KERNEL));
 }

-static void ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess)
+static int ksmbd_chann_del(struct ksmbd_conn *conn, struct ksmbd_session *sess)
 {
 	struct channel *chann;

 	chann = xa_erase(&sess->ksmbd_chann_list, (long)conn);
 	if (!chann)
-		return;
+		return -ENOENT;

 	kfree(chann);
+	return 0;
 }

 void ksmbd_sessions_deregister(struct ksmbd_conn *conn)
@@ -221,13 +221,37 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn)
 	struct ksmbd_session *sess;
 	unsigned long id;

+	down_write(&sessions_table_lock);
+	if (conn->binding) {
+		int bkt;
+		struct hlist_node *tmp;
+
+		hash_for_each_safe(sessions_table, bkt, tmp, sess, hlist) {
+			if (!ksmbd_chann_del(conn, sess) &&
+			    xa_empty(&sess->ksmbd_chann_list)) {
+				hash_del(&sess->hlist);
+				ksmbd_session_destroy(sess);
+			}
+		}
+	}
+
 	xa_for_each(&conn->sessions, id, sess) {
+		unsigned long chann_id;
+		struct channel *chann;
+
+		xa_for_each(&sess->ksmbd_chann_list, chann_id, chann) {
+			if (chann->conn != conn)
+				ksmbd_conn_set_exiting(chann->conn);
+		}
+
 		ksmbd_chann_del(conn, sess);
 		if (xa_empty(&sess->ksmbd_chann_list)) {
 			xa_erase(&conn->sessions, sess->id);
+			hash_del(&sess->hlist);
 			ksmbd_session_destroy(sess);
 		}
 	}
+	up_write(&sessions_table_lock);
 }
 struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn,
diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
index d044a0faf9b6..c9868f12efd5 100644
--- a/fs/ksmbd/smb2pdu.c
+++ b/fs/ksmbd/smb2pdu.c
@@ -2130,21 +2130,22 @@ int smb2_session_logoff(struct ksmbd_work *work)
 	struct smb2_logoff_rsp *rsp = smb2_get_msg(work->response_buf);
 	struct ksmbd_session *sess;
 	struct smb2_logoff_req *req = smb2_get_msg(work->request_buf);
+	u64 sess_id = le64_to_cpu(req->hdr.SessionId);

 	rsp->StructureSize = cpu_to_le16(4);
 	inc_rfc1001_len(work->response_buf, 4);
ksmbd_debug(SMB, "request\n");
-	ksmbd_conn_set_need_reconnect(conn);
+	ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_RECONNECT);
 	ksmbd_close_session_fds(work);
-	ksmbd_conn_wait_idle(conn);
+	ksmbd_conn_wait_idle(conn, sess_id);

 	/*
 	 * Re-lookup session to validate if session is deleted
 	 * while waiting request complete
 	 */
-	sess = ksmbd_session_lookup(conn, le64_to_cpu(req->hdr.SessionId));
+	sess = ksmbd_session_lookup_all(conn, sess_id);
 	if (ksmbd_tree_conn_session_logoff(sess)) {
 		ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId);
 		rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED;
@@ -2157,7 +2158,7 @@ int smb2_session_logoff(struct ksmbd_work *work)

 	ksmbd_free_user(sess->user);
 	sess->user = NULL;
-	ksmbd_conn_set_need_negotiate(conn);
+	ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_NEGOTIATE);
 	return 0;
 }
@@ -6940,7 +6941,7 @@ int smb2_lock(struct ksmbd_work *work)

 		nolock = 1;
 		/* check locks in connection list */
-		read_lock(&conn_list_lock);
+		down_read(&conn_list_lock);
 		list_for_each_entry(conn, &conn_list, conns_list) {
 			spin_lock(&conn->llist_lock);
 			list_for_each_entry_safe(cmp_lock, tmp2, &conn->lock_list, clist) {
@@ -6957,7 +6958,7 @@ int smb2_lock(struct ksmbd_work *work)
 					list_del(&cmp_lock->flist);
 					list_del(&cmp_lock->clist);
 					spin_unlock(&conn->llist_lock);
-					read_unlock(&conn_list_lock);
+					up_read(&conn_list_lock);

 					locks_free_lock(cmp_lock->fl);
 					kfree(cmp_lock);
@@ -6979,7 +6980,7 @@ int smb2_lock(struct ksmbd_work *work)
 				    cmp_lock->start > smb_lock->start &&
 				    cmp_lock->start < smb_lock->end) {
 					spin_unlock(&conn->llist_lock);
-					read_unlock(&conn_list_lock);
+					up_read(&conn_list_lock);
 					pr_err("previous lock conflict with zero byte lock range\n");
 					goto out;
 				}
@@ -6988,7 +6989,7 @@ int smb2_lock(struct ksmbd_work *work)
 				    smb_lock->start > cmp_lock->start &&
 				    smb_lock->start < cmp_lock->end) {
 					spin_unlock(&conn->llist_lock);
-					read_unlock(&conn_list_lock);
+					up_read(&conn_list_lock);
 					pr_err("current lock conflict with zero byte lock range\n");
 					goto out;
 				}
@@ -6999,14 +7000,14 @@ int smb2_lock(struct ksmbd_work *work)
 				      cmp_lock->end >= smb_lock->end)) &&
 				    !cmp_lock->zero_len && !smb_lock->zero_len) {
 					spin_unlock(&conn->llist_lock);
-					read_unlock(&conn_list_lock);
+					up_read(&conn_list_lock);
 					pr_err("Not allow lock operation on exclusive lock range\n");
 					goto out;
 				}
 			}
 			spin_unlock(&conn->llist_lock);
 		}
-		read_unlock(&conn_list_lock);
+		up_read(&conn_list_lock);
 out_check_cl:
 		if (smb_lock->fl->fl_type == F_UNLCK && nolock) {
 			pr_err("Try to unlock nolocked range\n");
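
To make the ordering enforced above concrete, here is a small user-space sketch of what the reworked smb2_session_logoff() path does: first fence off every channel belonging to the session (KSMBD_SESS_NEED_RECONNECT), then wait until in-flight requests on all bound connections drain before the session and its tree connects are torn down. Pthreads stand in for kernel waitqueues, and struct chan, session_logoff() and request_done() are made-up names, not ksmbd API.

	/*
	 * User-space sketch of the multichannel logoff ordering:
	 * mark all channels first, then wait for every channel's
	 * in-flight request counter to reach zero.
	 */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define NCHAN 2

	struct chan {
		bool need_reconnect;	/* set first: no new requests are admitted */
		int req_running;	/* requests currently being processed       */
	};

	static struct chan channels[NCHAN];
	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t drained = PTHREAD_COND_INITIALIZER;

	/* Called when a worker finishes a request on channel i. */
	static void request_done(int i)
	{
		pthread_mutex_lock(&lock);
		if (--channels[i].req_running == 0)
			pthread_cond_broadcast(&drained);
		pthread_mutex_unlock(&lock);
	}

	static void session_logoff(void)
	{
		pthread_mutex_lock(&lock);
		/* Step 1: fence off *every* channel of the session, not only
		 * the connection the LOGOFF arrived on. */
		for (int i = 0; i < NCHAN; i++)
			channels[i].need_reconnect = true;

		/* Step 2: wait until each channel's in-flight count drains, so
		 * no other channel can still dereference the session/tcon. */
		for (int i = 0; i < NCHAN; i++)
			while (channels[i].req_running > 0)
				pthread_cond_wait(&drained, &lock);
		pthread_mutex_unlock(&lock);

		printf("session safe to destroy\n");
	}

	static void *worker(void *arg)
	{
		(void)arg;
		request_done(1);	/* an in-flight close on channel 1 completes */
		return NULL;
	}

	int main(void)
	{
		pthread_t tid;

		channels[1].req_running = 1;	/* one request already running */
		pthread_create(&tid, NULL, worker, NULL);
		session_logoff();		/* blocks until that request drains */
		pthread_join(&tid, NULL);
		return 0;
	}

Here session_logoff() blocks until the simulated in-flight close on the second channel completes, mirroring how ksmbd_conn_wait_idle() now waits on every bound connection's req_running counter rather than only the current connection's.
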
FeedBack:
The patch(es) you sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/2438
Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/B...