From: "Jason A. Donenfeld" Jason@zx2c4.com
stable inclusion
from stable-v5.10.119
commit 732872aa2c412457eae31589681d4eba96263e2f
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5L6BB
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit c30c575db4858f0bbe5e315ff2e529c782f33a1f upstream.
During crng_init == 0, we never credit entropy in add_interrupt_randomness(), but instead dump it directly into the primary_crng. That's fine, except for the fact that we then wind up throwing away that entropy later when we switch to extracting from the input pool and xoring into (and later in this series overwriting) the primary_crng key. The two other early init sites -- add_hwgenerator_randomness()'s use of crng_fast_load() and add_device_randomness()'s use of crng_slow_load() -- always additionally give their inputs to the input pool. But not add_interrupt_randomness().
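For context, the other two early-init paths roughly follow the pattern sketched below. This is a simplified illustration rather than the actual random.c source; the exact signatures of add_device_randomness(), crng_slow_load() and _mix_pool_bytes() vary across versions of this series, and the timestamp mixing is omitted:

	/* Simplified sketch of the add_device_randomness() early-init path:
	 * the bytes go to the primary_crng *and* to the input pool, so they
	 * survive the later switch to extracting from the input pool. */
	void add_device_randomness(const void *buf, unsigned int size)
	{
		unsigned long flags;

		if (!crng_ready() && size)
			crng_slow_load(buf, size);

		spin_lock_irqsave(&input_pool.lock, flags);
		_mix_pool_bytes(buf, size);
		spin_unlock_irqrestore(&input_pool.lock, flags);
	}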
This commit fixes that shortcoming by calling mix_pool_bytes() after crng_fast_load() in add_interrupt_randomness(). That's partially verboten on PREEMPT_RT, where it implies taking spinlock_t from an IRQ handler. But this also only happens during early boot and then never again after that. Plus it's a trylock so it has the same considerations as calling crng_fast_load(), which we're already using.
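To make the locking trade-off concrete, the early-init branch after this change looks roughly like the sketch below. It is simplified from the hunk further down, with the surrounding add_interrupt_randomness() code elided and the comment added here:

	if (unlikely(crng_init == 0)) {
		if ((fast_pool->count >= 64) &&
		    crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
			fast_pool->count = 0;
			fast_pool->last = now;
			/* Best effort from hard-IRQ context: a trylock never
			 * spins, so if the input pool is contended we simply
			 * skip mirroring this batch -- the same trade-off
			 * crng_fast_load() above already makes. */
			if (spin_trylock(&input_pool.lock)) {
				_mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool));
				spin_unlock(&input_pool.lock);
			}
		}
		return;
	}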
Cc: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Suggested-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
---
 drivers/char/random.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 3f6510e2db92..d3ad07bb8990 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -850,6 +850,10 @@ void add_interrupt_randomness(int irq)
 		    crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
 			fast_pool->count = 0;
 			fast_pool->last = now;
+			if (spin_trylock(&input_pool.lock)) {
+				_mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool));
+				spin_unlock(&input_pool.lock);
+			}
 		}
 		return;
 	}