From: Rodrigo Branco <bsdaemon@google.com>
stable inclusion
from stable-v4.19.270
commit 940ede60d74d2fc7291b96cb38072d705333c8e0
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I6CU98
CVE: CVE-2023-0045
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=l...
--------------------------------
commit a664ec9158eeddd75121d39c9a0758016097fa96 upstream.
We missed the window between the TIF flag update and the next reschedule: until the task is actually rescheduled, no indirect branch prediction barrier is issued, so the task keeps running on predictor state that the new setting was meant to protect it from. Issue the barrier immediately when a task changes its own indirect branch speculation control.
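Not part of the patch, just for illustration: a minimal userspace sketch of the prctl() call that ends up in ib_prctl_set(). The constants are the standard PR_SET_SPECULATION_CTRL ones from <linux/prctl.h>, guarded here in case older installed headers lack them.

/* Illustration only: request IBRS/STIBP protection for this task via prctl(). */
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_SPECULATION_CTRL
#define PR_SET_SPECULATION_CTRL 53
#endif
#ifndef PR_SPEC_INDIRECT_BRANCH
#define PR_SPEC_INDIRECT_BRANCH 1
#endif
#ifndef PR_SPEC_DISABLE
#define PR_SPEC_DISABLE (1UL << 2)
#endif

int main(void)
{
        /* Ask the kernel to disable indirect branch speculation for this task. */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                  PR_SPEC_DISABLE, 0, 0))
                perror("prctl(PR_SET_SPECULATION_CTRL)");
        /*
         * With this fix, ib_prctl_set() issues an IBPB for the current task
         * right away instead of waiting for the next context switch.
         */
        return 0;
}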
Signed-off-by: Rodrigo Branco <bsdaemon@google.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yuyao Lin <linyuyao1@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
---
 arch/x86/kernel/cpu/bugs.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3ba948bbd77d..d4d114d62e2e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1547,6 +1547,8 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
 		if (ctrl == PR_SPEC_FORCE_DISABLE)
 			task_set_spec_ib_force_disable(task);
 		task_update_spec_tif(task);
+		if (task == current)
+			indirect_branch_prediction_barrier();
 		break;
 	default:
 		return -ERANGE;
From: Yu Kuai <yukuai3@huawei.com>
mainline inclusion
from mainline-v6.2-rc5
commit 216f764716f34fe68cedc7296ae2043a7727e640
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I6DZIB
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
The updating of 'bfqg->ref' should be protected by 'bfqd->lock'; however, during code review we found that bfq_pd_free() updates 'bfqg->ref' without holding the lock, which is problematic:

1) bfq_pd_free(), triggered by removing the cgroup, is called asynchronously;
2) bfqq grabs a reference to bfqg, and exiting the bfqq drops that reference, which can run concurrently with 1).

Unfortunately, 'bfqd->lock' can't be held here because 'bfqd' might already have been freed by the time bfq_pd_free() runs. Fix the problem by using the atomic refcount APIs.
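Not part of the patch, just for illustration: a minimal userspace analogue of the refcount_inc()/refcount_dec_and_test() pattern using C11 atomics (the 'demo_group' names are made up). It shows why folding the decrement and the zero test into one atomic operation lets the last reference free the object without any external lock.

#include <stdatomic.h>
#include <stdlib.h>

struct demo_group {
        atomic_int ref;         /* plays the role of bfqg->ref */
};

static void demo_get(struct demo_group *g)
{
        /* analogue of refcount_inc(&bfqg->ref) */
        atomic_fetch_add_explicit(&g->ref, 1, memory_order_relaxed);
}

static void demo_put(struct demo_group *g)
{
        /*
         * Analogue of refcount_dec_and_test(&bfqg->ref): the decrement and
         * the "did it hit zero?" test happen as one atomic operation, so two
         * concurrent putters can never both observe zero and double-free.
         */
        if (atomic_fetch_sub_explicit(&g->ref, 1, memory_order_acq_rel) == 1)
                free(g);
}

int main(void)
{
        struct demo_group *g = malloc(sizeof(*g));

        if (!g)
                return 1;
        atomic_init(&g->ref, 1);        /* analogue of refcount_set(&bfqg->ref, 1) */
        demo_get(g);
        demo_put(g);
        demo_put(g);                    /* last reference frees the object */
        return 0;
}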
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20230103084755.1256479-1-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
---
 block/bfq-cgroup.c  | 8 +++-----
 block/bfq-iosched.h | 2 +-
 2 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index cd92f417cc80..b663cd8b9e46 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -254,14 +254,12 @@ struct bfq_group *bfqq_group(struct bfq_queue *bfqq)
 
 static void bfqg_get(struct bfq_group *bfqg)
 {
-	bfqg->ref++;
+	refcount_inc(&bfqg->ref);
 }
 
 static void bfqg_put(struct bfq_group *bfqg)
 {
-	bfqg->ref--;
-
-	if (bfqg->ref == 0)
+	if (refcount_dec_and_test(&bfqg->ref))
 		kfree(bfqg);
 }
 
@@ -448,7 +446,7 @@ static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
 	}
 
 	/* see comments in bfq_bic_update_cgroup for why refcounting */
-	bfqg_get(bfqg);
+	refcount_set(&bfqg->ref, 1);
 	return &bfqg->pd;
 }
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index bb2b9c71048e..9bbd8ea906dd 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -862,7 +862,7 @@ struct bfq_group {
 	char blkg_path[128];
 
 	/* reference counter (see comments in bfq_bic_update_cgroup) */
-	int ref;
+	refcount_t ref;
 
 	struct bfq_entity entity;
 	struct bfq_sched_data sched_data;