DENG Qingfang (1):
      net: dsa: tag_mtk: fix 802.1ad VLAN egress

Florian Fainelli (1):
      net: dsa: b53: Support setting learning on port

Greg Kroah-Hartman (1):
      Linux 4.19.182

Piotr Krysiuk (4):
      bpf: Prohibit alu ops for pointer types not defining ptr_limit
      bpf: Fix off-by-one for area size in creating mask to left
      bpf: Simplify alu_limit masking for pointer arithmetic
      bpf: Add sanity check for upper ptr_limit

Suzuki K Poulose (1):
      KVM: arm64: nvhe: Save the SPE context early
 Makefile                         |  2 +-
 arch/arm64/include/asm/kvm_hyp.h |  3 +++
 arch/arm64/kvm/hyp/debug-sr.c    | 24 ++++++++++++++---------
 arch/arm64/kvm/hyp/switch.c      |  4 +++-
 drivers/net/dsa/b53/b53_common.c | 19 ++++++++++++++++++
 drivers/net/dsa/b53/b53_regs.h   |  1 +
 drivers/net/dsa/bcm_sf2.c        |  5 -----
 kernel/bpf/verifier.c            | 33 ++++++++++++++++++++------------
 net/dsa/tag_mtk.c                | 19 ++++++++++++------
 9 files changed, 76 insertions(+), 34 deletions(-)
From: Suzuki K Poulose <suzuki.poulose@arm.com>
commit b96b0c5de685df82019e16826a282d53d86d112c upstream
The nVHE KVM hyp drains and disables the SPE buffer, before entering the guest, as the EL1&0 translation regime is going to be loaded with that of the guest.
But this operation is performed way too late, because:

  - The owning translation regime of the SPE buffer is transferred to EL2
    (MDCR_EL2_E2PB == 0).
  - The guest Stage1 is loaded.
Thus the flush could use the host EL1 virtual address, but resolve it through the EL2 translations instead of the host EL1 translations, when writing out any cached data.
Fix this by moving the SPE buffer handling early enough. The restore path is doing the right thing.
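A condensed view of the ordering that results from the hunks below (a reading aid distilled from the diff, not a verbatim excerpt of the kernel source; the host sysreg restore is implied by the comment kept in the last hunk):

    __kvm_vcpu_run_nvhe(vcpu)
        __sysreg_save_state_nvhe(host_ctxt);
        __debug_save_host_buffers_nvhe(vcpu);    /* drain + disable SPE while EL1&0 still maps the host */
        __activate_traps(vcpu);                  /* from here the EL1&0 regime belongs to the guest */
        /* ... run the guest, take an exit ... */
        __sysreg_restore_state_nvhe(host_ctxt);  /* host EL1 state is back */
        __debug_switch_to_host(vcpu);
        __debug_restore_host_buffers_nvhe(vcpu); /* only now re-enable SPE */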
Cc: stable@vger.kernel.org # v4.19
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/arm64/include/asm/kvm_hyp.h |  3 +++
 arch/arm64/kvm/hyp/debug-sr.c    | 24 +++++++++++++++---------
 arch/arm64/kvm/hyp/switch.c      |  4 +++-
 3 files changed, 21 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index c7113bfe1a0ef..671c36e583adc 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -144,6 +144,9 @@ void __sysreg32_restore_state(struct kvm_vcpu *vcpu);
 
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
 void __debug_switch_to_host(struct kvm_vcpu *vcpu);
+void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu);
+void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
+
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);

diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index 50009766e5e56..3c5414633bb7c 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -149,6 +149,21 @@ static void __hyp_text __debug_restore_state(struct kvm_vcpu *vcpu,
         write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1);
 }
 
+void __hyp_text __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
+{
+        /*
+         * Non-VHE: Disable and flush SPE data generation
+         * VHE: The vcpu can run, but it can't hide.
+         */
+        __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
+
+}
+
+void __hyp_text __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
+{
+        __debug_restore_spe_nvhe(vcpu->arch.host_debug_state.pmscr_el1);
+}
+
 void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 {
         struct kvm_cpu_context *host_ctxt;
@@ -156,13 +171,6 @@ void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
         struct kvm_guest_debug_arch *host_dbg;
         struct kvm_guest_debug_arch *guest_dbg;
 
-        /*
-         * Non-VHE: Disable and flush SPE data generation
-         * VHE: The vcpu can run, but it can't hide.
-         */
-        if (!has_vhe())
-                __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
-
         if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
                 return;
 
@@ -182,8 +190,6 @@ void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
         struct kvm_guest_debug_arch *host_dbg;
         struct kvm_guest_debug_arch *guest_dbg;
 
-        if (!has_vhe())
-                __debug_restore_spe_nvhe(vcpu->arch.host_debug_state.pmscr_el1);
 
         if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
                 return;
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 98004166d1e34..53a8125ad963b 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -616,6 +616,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
         pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
 
         __sysreg_save_state_nvhe(host_ctxt);
+        __debug_save_host_buffers_nvhe(vcpu);
 
         __activate_traps(vcpu);
         __activate_vm(kern_hyp_va(vcpu->kvm));
@@ -655,11 +656,12 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
         if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
                 __fpsimd_save_fpexc32(vcpu);
 
+        __debug_switch_to_host(vcpu);
         /*
          * This must come after restoring the host sysregs, since a non-VHE
          * system may enable SPE here and make use of the TTBRs.
          */
-        __debug_switch_to_host(vcpu);
+        __debug_restore_host_buffers_nvhe(vcpu);
 
         if (pmu_switch_needed)
                 __pmu_switch_to_host(host_ctxt);
From: Piotr Krysiuk <piotras@gmail.com>
commit f232326f6966cf2a1d1db7bc917a4ce5f9f55f76 upstream.
The purpose of this patch is to streamline error propagation and, in particular, to propagate retrieve_ptr_limit() errors for pointer types that do not define a ptr_limit, so that register-based alu ops against these types can be rejected.
The main rationale is that a gap has been identified by Piotr in the existing protection against speculative out-of-bounds loads: in the case of ctx pointers, for example, unprivileged programs can still perform pointer arithmetic. This can be abused to execute speculative out-of-bounds loads without restrictions and thus extract contents of kernel memory.
Fix this by rejecting unprivileged programs that attempt any pointer arithmetic on unprotected pointer types. The two affected ones are pointer to ctx as well as pointer to map. Field access to a modified ctx pointer is rejected at a later point in time in the verifier, and 7c6967326267 ("bpf: Permit map_ptr arithmetic with opcode add and offset 0") is only relevant for root-only use cases. Risk of unprivileged program breakage is considered very low.
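As a hypothetical illustration (not taken from the commit; insn macros as in the kernel's include/linux/filter.h), an unprivileged program that does register-based arithmetic on its ctx pointer should now be refused at the ALU instruction itself rather than only when the modified pointer is dereferenced later:

    BPF_MOV64_IMM(BPF_REG_2, 8),
    BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),   /* r1 (ctx) += r2: sanitize_ptr_alu() now
                                                       propagates the retrieve_ptr_limit() error */
    BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
    BPF_EXIT_INSN(),

With this change the verifier log would show the updated message, e.g. "R1 tried to add from different maps, paths, or prohibited types".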
Fixes: 7c6967326267 ("bpf: Permit map_ptr arithmetic with opcode add and offset 0")
Fixes: b2157399cc98 ("bpf: prevent out-of-bounds speculation")
Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/bpf/verifier.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f6b7920917f92..66d2f5a163f21 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2724,6 +2724,7 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
         u32 alu_state, alu_limit;
         struct bpf_reg_state tmp;
         bool ret;
+        int err;
 
         if (can_skip_alu_sanitation(env, insn))
                 return 0;
@@ -2739,10 +2740,13 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
         alu_state |= ptr_is_dst_reg ?
                      BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
 
-        if (retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg))
-                return 0;
-        if (update_alu_sanitation_state(aux, alu_state, alu_limit))
-                return -EACCES;
+        err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg);
+        if (err < 0)
+                return err;
+
+        err = update_alu_sanitation_state(aux, alu_state, alu_limit);
+        if (err < 0)
+                return err;
 do_sim:
         /* Simulate and find potential out-of-bounds access under
          * speculative execution from truncation as a result of
@@ -2840,7 +2844,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
         case BPF_ADD:
                 ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
                 if (ret < 0) {
-                        verbose(env, "R%d tried to add from different maps or paths\n", dst);
+                        verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst);
                         return ret;
                 }
                 /* We can take a fixed offset as long as it doesn't overflow
@@ -2895,7 +2899,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
         case BPF_SUB:
                 ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
                 if (ret < 0) {
-                        verbose(env, "R%d tried to sub from different maps or paths\n", dst);
+                        verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst);
                         return ret;
                 }
                 if (dst_reg == off_reg) {
From: Piotr Krysiuk <piotras@gmail.com>
commit 10d2bb2e6b1d8c4576c56a748f697dbeb8388899 upstream.
retrieve_ptr_limit() computes the ptr_limit for registers with stack and map_value type. ptr_limit is the size of the memory area that is still valid / in-bounds from the current position, given the direction of the operation (add / sub). This size will later be used for masking the operation such that attempting out-of-bounds access in the speculative domain is redirected to remain within the bounds of the current map value.
When masking to the right the size is correct; when masking to the left, however, the size is off by one, which would lead to an incorrect mask and thus an incorrect arithmetic operation in the non-speculative domain. Piotr found that if the resulting alu_limit value is zero, then the BPF_MOV32_IMM() from the fixup_bpf_calls() rewrite will end up loading 0xffffffff into AX instead of sign-extending to the full 64 bit range, and as a result, this allows abuse for executing speculatively out-of-bounds loads against a 4GB window of address space and thus extracting the contents of kernel memory via side-channel.
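The effect can be reproduced outside the kernel. The following standalone C sketch (ours, not kernel code) mirrors the masking sequence emitted by fixup_bpf_calls() -- mov32 of alu_limit - 1, then sub/or/neg/arsh/and; the trailing arithmetic-shift and AND steps are recalled from the surrounding verifier code and are not visible in the hunks quoted in this series -- and shows how an alu_limit of zero lets a large offset survive the mask:

    #include <stdio.h>
    #include <stdint.h>

    /* masked_off() is our name for the computed result, not a kernel function. */
    static uint64_t masked_off(uint32_t alu_limit, uint64_t off)
    {
            uint64_t ax, mask;

            ax = (uint32_t)(alu_limit - 1);      /* BPF_MOV32_IMM: 32-bit value, zero-extended */
            ax -= off;                           /* BPF_ALU64 BPF_SUB */
            ax |= off;                           /* BPF_ALU64 BPF_OR  */
            ax = -ax;                            /* BPF_ALU64 BPF_NEG */
            mask = (int64_t)ax < 0 ? ~0ULL : 0;  /* BPF_ALU64 BPF_ARSH by 63 */
            return off & mask;                   /* BPF_ALU64 BPF_AND */
    }

    int main(void)
    {
            /* alu_limit == 0 (the case described above): mov32 of -1 yields 0xffffffff,
             * so a 4096-byte out-of-bounds offset passes the mask unchanged. */
            printf("limit 0:  off 4096 -> %llu\n",
                   (unsigned long long)masked_off(0, 4096));
            /* A correct, non-zero limit forces the same offset down to 0. */
            printf("limit 64: off 4096 -> %llu\n",
                   (unsigned long long)masked_off(64, 4096));
            return 0;
    }

Running it prints 4096 for the zero limit and 0 for the sane limit, i.e. exactly the unrestricted speculative offset this patch closes off.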
Fixes: 979d63d50c0c ("bpf: prevent out of bounds speculation on pointer arithmetic")
Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/bpf/verifier.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 66d2f5a163f21..a7a97042d1ded 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2660,13 +2660,13 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
         case PTR_TO_STACK:
                 off = ptr_reg->off + ptr_reg->var_off.value;
                 if (mask_to_left)
-                        *ptr_limit = MAX_BPF_STACK + off;
+                        *ptr_limit = MAX_BPF_STACK + off + 1;
                 else
                         *ptr_limit = -off;
                 return 0;
         case PTR_TO_MAP_VALUE:
                 if (mask_to_left) {
-                        *ptr_limit = ptr_reg->umax_value + ptr_reg->off;
+                        *ptr_limit = ptr_reg->umax_value + ptr_reg->off + 1;
                 } else {
                         off = ptr_reg->smin_value + ptr_reg->off;
                         *ptr_limit = ptr_reg->map_ptr->value_size - off;
From: Piotr Krysiuk <piotras@gmail.com>
commit b5871dca250cd391885218b99cc015aca1a51aea upstream.
Instead of having the mov32 with the aux->alu_limit - 1 immediate, move this operation into retrieve_ptr_limit() to simplify the logic and to allow for subsequent sanity boundary checks inside retrieve_ptr_limit(). This also avoids a future situation where, at the time of the verifier masking rewrite, we would run into an underflow that would not sign extend due to the nature of the mov32 instruction.
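A worked example (numbers ours, not from the commit): for a PTR_TO_STACK register at offset -16 being increased by a non-negative scalar (rightward mask), the previous code returned *ptr_limit = 16 and the rewrite emitted BPF_MOV32_IMM(BPF_REG_AX, 16 - 1); after this patch retrieve_ptr_limit() returns 15 directly and the rewrite emits BPF_MOV32_IMM(BPF_REG_AX, 15). The resulting mask is identical, but a limit of 0 can now be caught inside retrieve_ptr_limit() before the 32-bit mov32 immediate ever underflows.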
Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/bpf/verifier.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a7a97042d1ded..5e2072a5c5f01 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2660,16 +2660,16 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
         case PTR_TO_STACK:
                 off = ptr_reg->off + ptr_reg->var_off.value;
                 if (mask_to_left)
-                        *ptr_limit = MAX_BPF_STACK + off + 1;
+                        *ptr_limit = MAX_BPF_STACK + off;
                 else
-                        *ptr_limit = -off;
+                        *ptr_limit = -off - 1;
                 return 0;
         case PTR_TO_MAP_VALUE:
                 if (mask_to_left) {
-                        *ptr_limit = ptr_reg->umax_value + ptr_reg->off + 1;
+                        *ptr_limit = ptr_reg->umax_value + ptr_reg->off;
                 } else {
                         off = ptr_reg->smin_value + ptr_reg->off;
-                        *ptr_limit = ptr_reg->map_ptr->value_size - off;
+                        *ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
                 }
                 return 0;
         default:
@@ -6030,7 +6030,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                         off_reg = issrc ? insn->src_reg : insn->dst_reg;
                         if (isneg)
                                 *patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
-                        *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit - 1);
+                        *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
                         *patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
                         *patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
                         *patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
From: Piotr Krysiuk <piotras@gmail.com>
commit 1b1597e64e1a610c7a96710fc4717158e98a08b3 upstream.
Given we know the max possible value of ptr_limit at the time of retrieving the latter, add basic assertions, so that the verifier can bail out if anything looks odd and reject the program. Nothing has triggered this so far, but it also does not hurt to have these.
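Concretely (MAX_BPF_STACK is 512 in this tree): for PTR_TO_STACK the bound is max = 512 + mask_to_left, i.e. 513 when masking to the left (offset 0, BPF_REG_FP itself, is an acceptable start) and 512 when masking to the right; for PTR_TO_MAP_VALUE the bound is the map's value_size. Any computed *ptr_limit at or above the bound now makes retrieve_ptr_limit() return -ERANGE, which sanitize_ptr_alu() propagates so the program is rejected.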
Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/bpf/verifier.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5e2072a5c5f01..f9d8fc953b70d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2654,24 +2654,29 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
 {
         bool mask_to_left = (opcode == BPF_ADD &&  off_is_neg) ||
                             (opcode == BPF_SUB && !off_is_neg);
-        u32 off;
+        u32 off, max;
 
         switch (ptr_reg->type) {
         case PTR_TO_STACK:
+                /* Offset 0 is out-of-bounds, but acceptable start for the
+                 * left direction, see BPF_REG_FP.
+                 */
+                max = MAX_BPF_STACK + mask_to_left;
                 off = ptr_reg->off + ptr_reg->var_off.value;
                 if (mask_to_left)
                         *ptr_limit = MAX_BPF_STACK + off;
                 else
                         *ptr_limit = -off - 1;
-                return 0;
+                return *ptr_limit >= max ? -ERANGE : 0;
         case PTR_TO_MAP_VALUE:
+                max = ptr_reg->map_ptr->value_size;
                 if (mask_to_left) {
                         *ptr_limit = ptr_reg->umax_value + ptr_reg->off;
                 } else {
                         off = ptr_reg->smin_value + ptr_reg->off;
                         *ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
                 }
-                return 0;
+                return *ptr_limit >= max ? -ERANGE : 0;
         default:
                 return -EINVAL;
         }
From: DENG Qingfang <dqfext@gmail.com>
commit 9200f515c41f4cbaeffd8fdd1d8b6373a18b1b67 upstream.
A different TPID bit is used for 802.1ad VLAN frames.
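For reference, the egress frame layout implied by the hunks below (a reading aid distilled from the code, not vendor documentation):

    dst MAC (6) | src MAC (6) | MTK tag (4) | EtherType / payload ...

    byte 0    TPID indicator: 0 = untagged, 1 = frame carried 0x8100 (802.1Q),
              2 = frame carried 0x88a8 (802.1ad) -- the new case
    byte 1    destination port bitmap; bit 6 disables SA learning
    bytes 2-3 VLAN TCI, kept from the original tag, zeroed for untagged frames

Previously every VLAN-tagged frame was marked with the 0x8100 indicator, so 802.1ad frames egressed with the wrong TPID.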
Reported-by: Ilario Gelmetti <iochesonome@gmail.com>
Fixes: f0af34317f4b ("net: dsa: mediatek: combine MediaTek tag with VLAN tag")
Signed-off-by: DENG Qingfang <dqfext@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/dsa/tag_mtk.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/net/dsa/tag_mtk.c b/net/dsa/tag_mtk.c
index 6574425ad94c1..1f57dce969f70 100644
--- a/net/dsa/tag_mtk.c
+++ b/net/dsa/tag_mtk.c
@@ -20,6 +20,7 @@
 #define MTK_HDR_LEN                     4
 #define MTK_HDR_XMIT_UNTAGGED           0
 #define MTK_HDR_XMIT_TAGGED_TPID_8100   1
+#define MTK_HDR_XMIT_TAGGED_TPID_88A8   2
 #define MTK_HDR_RECV_SOURCE_PORT_MASK   GENMASK(2, 0)
 #define MTK_HDR_XMIT_DP_BIT_MASK        GENMASK(5, 0)
 #define MTK_HDR_XMIT_SA_DIS             BIT(6)
@@ -28,8 +29,8 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
                                     struct net_device *dev)
 {
         struct dsa_port *dp = dsa_slave_to_port(dev);
+        u8 xmit_tpid;
         u8 *mtk_tag;
-        bool is_vlan_skb = true;
         unsigned char *dest = eth_hdr(skb)->h_dest;
         bool is_multicast_skb = is_multicast_ether_addr(dest) &&
                                 !is_broadcast_ether_addr(dest);
@@ -40,13 +41,20 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
          * the both special and VLAN tag at the same time and then look up VLAN
          * table with VID.
          */
-        if (!skb_vlan_tagged(skb)) {
+        switch (skb->protocol) {
+        case htons(ETH_P_8021Q):
+                xmit_tpid = MTK_HDR_XMIT_TAGGED_TPID_8100;
+                break;
+        case htons(ETH_P_8021AD):
+                xmit_tpid = MTK_HDR_XMIT_TAGGED_TPID_88A8;
+                break;
+        default:
                 if (skb_cow_head(skb, MTK_HDR_LEN) < 0)
                         return NULL;
 
+                xmit_tpid = MTK_HDR_XMIT_UNTAGGED;
                 skb_push(skb, MTK_HDR_LEN);
                 memmove(skb->data, skb->data + MTK_HDR_LEN, 2 * ETH_ALEN);
-                is_vlan_skb = false;
         }
 
         mtk_tag = skb->data + 2 * ETH_ALEN;
@@ -54,8 +62,7 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
         /* Mark tag attribute on special tag insertion to notify hardware
          * whether that's a combined special tag with 802.1Q header.
          */
-        mtk_tag[0] = is_vlan_skb ? MTK_HDR_XMIT_TAGGED_TPID_8100 :
-                     MTK_HDR_XMIT_UNTAGGED;
+        mtk_tag[0] = xmit_tpid;
         mtk_tag[1] = (1 << dp->index) & MTK_HDR_XMIT_DP_BIT_MASK;
 
         /* Disable SA learning for multicast frames */
@@ -63,7 +70,7 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
                 mtk_tag[1] |= MTK_HDR_XMIT_SA_DIS;
 
         /* Tag control information is kept for 802.1Q */
-        if (!is_vlan_skb) {
+        if (xmit_tpid == MTK_HDR_XMIT_UNTAGGED) {
                 mtk_tag[2] = 0;
                 mtk_tag[3] = 0;
         }
From: Florian Fainelli <f.fainelli@gmail.com>
commit f9b3827ee66cfcf297d0acd6ecf33653a5f297ef upstream.
Add support for being able to set the learning attribute on a port, and make sure that the standalone ports start up with learning disabled.
We can remove the code in bcm_sf2 that configured the ports learning attribute because we want the standalone ports to have learning disabled by default and port 7 cannot be bridged, so its learning attribute will not change past its initial configuration.
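As a quick summary of the hunks below (ours, derived from the code), the learning lifecycle becomes:

  - b53_enable_port() / b53_enable_cpu_port(): learning disabled (standalone default)
  - b53_br_join(): learning enabled for the bridged port
  - b53_br_leave(): learning disabled again

B53_DIS_LEARNING is a per-port disable mask, which is why b53_port_set_learning(dev, port, true) clears the port's bit rather than setting it.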
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/dsa/b53/b53_common.c | 19 +++++++++++++++++++
 drivers/net/dsa/b53/b53_regs.h   |  1 +
 drivers/net/dsa/bcm_sf2.c        |  5 -----
 3 files changed, 20 insertions(+), 5 deletions(-)
diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
index 335ce1e849044..7eaeab65d39f5 100644
--- a/drivers/net/dsa/b53/b53_common.c
+++ b/drivers/net/dsa/b53/b53_common.c
@@ -507,12 +507,27 @@ void b53_imp_vlan_setup(struct dsa_switch *ds, int cpu_port)
 }
 EXPORT_SYMBOL(b53_imp_vlan_setup);
 
+static void b53_port_set_learning(struct b53_device *dev, int port,
+                                  bool learning)
+{
+        u16 reg;
+
+        b53_read16(dev, B53_CTRL_PAGE, B53_DIS_LEARNING, &reg);
+        if (learning)
+                reg &= ~BIT(port);
+        else
+                reg |= BIT(port);
+        b53_write16(dev, B53_CTRL_PAGE, B53_DIS_LEARNING, reg);
+}
+
 int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy)
 {
         struct b53_device *dev = ds->priv;
         unsigned int cpu_port = ds->ports[port].cpu_dp->index;
         u16 pvlan;
 
+        b53_port_set_learning(dev, port, false);
+
         /* Clear the Rx and Tx disable bits and set to no spanning tree */
         b53_write8(dev, B53_CTRL_PAGE, B53_PORT_CTRL(port), 0);
 
@@ -620,6 +635,7 @@ static void b53_enable_cpu_port(struct b53_device *dev, int port)
         b53_write8(dev, B53_CTRL_PAGE, B53_PORT_CTRL(port), port_ctrl);
 
         b53_brcm_hdr_setup(dev->ds, port);
+        b53_port_set_learning(dev, port, false);
 }
 
 static void b53_enable_mib(struct b53_device *dev)
@@ -1517,6 +1533,8 @@ int b53_br_join(struct dsa_switch *ds, int port, struct net_device *br)
         b53_write16(dev, B53_PVLAN_PAGE, B53_PVLAN_PORT_MASK(port), pvlan);
         dev->ports[port].vlan_ctl_mask = pvlan;
 
+        b53_port_set_learning(dev, port, true);
+
         return 0;
 }
 EXPORT_SYMBOL(b53_br_join);
@@ -1564,6 +1582,7 @@ void b53_br_leave(struct dsa_switch *ds, int port, struct net_device *br)
                 vl->untag |= BIT(port) | BIT(cpu_port);
                 b53_set_vlan_entry(dev, pvid, vl);
         }
+        b53_port_set_learning(dev, port, false);
 }
 EXPORT_SYMBOL(b53_br_leave);
 
diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
index c90985c294a2e..b2c539a421545 100644
--- a/drivers/net/dsa/b53/b53_regs.h
+++ b/drivers/net/dsa/b53/b53_regs.h
@@ -115,6 +115,7 @@
 #define B53_UC_FLOOD_MASK               0x32
 #define B53_MC_FLOOD_MASK               0x34
 #define B53_IPMC_FLOOD_MASK             0x36
+#define B53_DIS_LEARNING                0x3c
 
 /*
  * Override Ports 0-7 State on devices with xMII interfaces (8 bit)
diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
index 613f03f9d9eca..e9fe3897bd9cc 100644
--- a/drivers/net/dsa/bcm_sf2.c
+++ b/drivers/net/dsa/bcm_sf2.c
@@ -173,11 +173,6 @@ static int bcm_sf2_port_setup(struct dsa_switch *ds, int port,
         reg &= ~P_TXQ_PSM_VDD(port);
         core_writel(priv, reg, CORE_MEM_PSM_VDD_CTRL);
 
-        /* Enable learning */
-        reg = core_readl(priv, CORE_DIS_LEARN);
-        reg &= ~BIT(port);
-        core_writel(priv, reg, CORE_DIS_LEARN);
-
         /* Enable Broadcom tags for that port if requested */
         if (priv->brcm_tag_mask & BIT(port))
                 b53_brcm_hdr_setup(ds, port);
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge 8 patches from the 4.19.182 stable branch (9 total), besides 1 already merged patch: a75a5d163857c ("ext4: check journal inode extents more carefully").
Tested-by: Pavel Machek (CIP) <pavel@denx.de>
Tested-by: Jason Self <jason@bluehome.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Linux Kernel Functional Testing <lkft@linaro.org>
Link: https://lore.kernel.org/r/20210319121744.114946147@linuxfoundation.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 6511e496c965d..93141519b6fea 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 181
+SUBLEVEL = 182
 EXTRAVERSION =
 NAME = "People's Front"