Merge 45 patches from the 4.19.91 stable branch (48 total); the other 3 were already merged:
  248e65f PCI: pciehp: Avoid returning prematurely from sysfs requests
  6970c15 dm btree: increase rebalance threshold in __rebalance2()
  5dc3f40 scsi: qla2xxx: Change discovery state before PLOGI
Aaron Conole (1): openvswitch: support asymmetric conntrack
Alex Deucher (1): drm/radeon: fix r1xx/r2xx register checker for POT textures
Alexander Lobakin (1): net: dsa: fix flow dissection on Tx path
Arun Kumar Neelakantam (2):
      rpmsg: glink: Fix reuse intents memory leak issue
      rpmsg: glink: Fix use after free in open_ack TIMEOUT case
Bart Van Assche (1): scsi: iscsi: Fix a potential deadlock in the timeout handler
Bjorn Andersson (2):
      rpmsg: glink: Don't send pending rx_done during remove
      rpmsg: glink: Free pending deferred work on remove
Chaotian Jing (2):
      mmc: block: Make card_busy_detect() a bit more generic
      mmc: block: Add CMD13 polling for MMC IOCTLS with R1B response
Chris Lew (3):
      rpmsg: glink: Set tail pointer to 0 at end of FIFO
      rpmsg: glink: Put an extra reference during cleanup
      rpmsg: glink: Fix rpmsg_register_device err handling
Dexuan Cui (1): PCI/PM: Always return devices to D0 when thawing
Dmitry Osipenko (1): ARM: tegra: Fix FLOW_CTLR_HALT register clobbering by tegra_resume()
Dust Li (1): net: sched: fix dump qlen for sch_mq/sch_mqprio with NOLOCK subqueues
Eric Dumazet (2):
      inet: protect against too small mtu values.
      tcp: md5: fix potential overestimation of TCP option space
George Cherian (1): PCI: Apply Cavium ACS quirk to ThunderX2 and ThunderX3
Greg Kroah-Hartman (2):
      Revert "arm64: preempt: Fix big-endian when checking preempt count in assembly"
      Linux 4.19.91
Grygorii Strashko (1): net: ethernet: ti: cpsw: fix extra rx interrupt
Guillaume Nault (3):
      tcp: fix rejected syncookies due to stale timestamps
      tcp: tighten acceptance of ACKs not matching a child socket
      tcp: Protect accesses to .ts_recent_stamp with {READ, WRITE}_ONCE()
Huy Nguyen (1): net/mlx5e: Query global pause state before setting prio2buffer
Jian-Hong Pan (1): PCI/MSI: Fix incorrect MSI-X masking on resume
Jiang Yi (1): vfio/pci: call irq_bypass_unregister_producer() before freeing irq
Lihua Yao (1): ARM: dts: s3c64xx: Fix init order of clock providers
Long Li (4):
      cifs: smbd: Return -EAGAIN when transport is reconnecting
      cifs: smbd: Add messages on RDMA session destroy and reconnection
      cifs: smbd: Return -EINVAL when the number of iovs exceeds SMBDIRECT_MAX_SGE
      cifs: Don't display RDMA transport on reconnect
Martin Blumenstingl (1): drm: meson: venc: cvbs: fix CVBS mode matching
Mathias Nyman (1): xhci: fix USB3 device initiated resume race with roothub autosuspend
Max Filippov (1): xtensa: fix TLB sanity checker
Mian Yousaf Kaukab (1): net: thunderx: start phy before starting autonegotiation
Mike Snitzer (1): dm mpath: remove harmful bio-based optimization
Navid Emamdoost (1): dma-buf: Fix memory leak in sync_file_merge()
Nikolay Aleksandrov (1): net: bridge: deny dev_set_mac_address() when unregistering
Pavel Shilovsky (2):
      CIFS: Respect O_SYNC and O_DIRECT flags during reconnect
      CIFS: Close open handle after interrupted close
Steffen Liebergeld (1): PCI: Fix Intel ACS quirk UPDCR register address
Taehee Yoo (1): tipc: fix ordering of tipc module init and exit routine
Vladyslav Tarasiuk (1): mqprio: Fix out-of-bounds access in mqprio_dump
Makefile | 2 +- arch/arm/boot/dts/s3c6410-mini6410.dts | 4 + arch/arm/boot/dts/s3c6410-smdk6410.dts | 4 + arch/arm/mach-tegra/reset-handler.S | 6 +- arch/arm64/include/asm/assembler.h | 8 +- arch/arm64/kernel/entry.S | 8 +- arch/xtensa/mm/tlb.c | 4 +- drivers/dma-buf/sync_file.c | 2 +- drivers/gpu/drm/meson/meson_venc_cvbs.c | 48 ++++--- drivers/gpu/drm/radeon/r100.c | 4 +- drivers/gpu/drm/radeon/r200.c | 4 +- drivers/md/dm-mpath.c | 37 +---- drivers/mmc/core/block.c | 151 ++++++++------------- drivers/net/ethernet/cavium/thunder/thunder_bgx.c | 2 +- .../ethernet/mellanox/mlx5/core/en/port_buffer.c | 27 +++- drivers/net/ethernet/ti/cpsw.c | 2 +- drivers/pci/msi.c | 2 +- drivers/pci/pci-driver.c | 17 ++- drivers/pci/quirks.c | 22 +-- drivers/rpmsg/qcom_glink_native.c | 53 ++++++-- drivers/rpmsg/qcom_glink_smem.c | 2 +- drivers/scsi/libiscsi.c | 4 +- drivers/usb/host/xhci-hub.c | 8 ++ drivers/usb/host/xhci-ring.c | 3 +- drivers/vfio/pci/vfio_pci_intrs.c | 2 +- fs/cifs/cifs_debug.c | 5 + fs/cifs/file.c | 7 + fs/cifs/smb2misc.c | 59 ++++++-- fs/cifs/smb2pdu.c | 16 ++- fs/cifs/smb2proto.h | 3 + fs/cifs/smbdirect.c | 8 +- fs/cifs/transport.c | 7 +- include/linux/netdevice.h | 5 + include/linux/time.h | 13 ++ include/net/ip.h | 5 + include/net/tcp.h | 27 ++-- net/bridge/br_device.c | 6 + net/core/dev.c | 3 +- net/core/flow_dissector.c | 5 +- net/ipv4/devinet.c | 5 - net/ipv4/ip_output.c | 13 +- net/ipv4/tcp_output.c | 5 +- net/openvswitch/conntrack.c | 11 ++ net/sched/sch_mq.c | 1 + net/sched/sch_mqprio.c | 3 +- net/tipc/core.c | 29 ++-- 46 files changed, 400 insertions(+), 262 deletions(-)
From: Eric Dumazet edumazet@google.com
[ Upstream commit 501a90c945103e8627406763dac418f20f3837b2 ]
syzbot was once again able to crash a host by setting a very small mtu on a loopback device.
Let's make inetdev_valid_mtu() available in include/net/ip.h and use it in ip_setup_cork(), so that we protect both ip_append_page() and __ip_append_data().
Also add a READ_ONCE() when the device mtu is read.
Pair this lockless read with a WRITE_ONCE() in __dev_set_mtu(), even if other code paths might still write to this field.
Add a big comment in include/linux/netdevice.h about dev->mtu needing READ_ONCE()/WRITE_ONCE() annotations.
Hopefully we will add the missing ones in follow-up patches.
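For reference, a minimal standalone sketch of the mtu check the patch adds to ip_setup_cork() (IPV4_MIN_MTU is 68 in the kernel, per RFC 791; the device value below is a made-up example, not part of the patch):

#include <stdbool.h>
#include <stdio.h>

#define IPV4_MIN_MTU 68         /* value copied from include/net/ip.h */

static bool inetdev_valid_mtu(unsigned int mtu)
{
        return mtu >= IPV4_MIN_MTU;
}

int main(void)
{
        unsigned int fragsize = 48;     /* hypothetical too-small dev->mtu */

        /* ip_setup_cork() now bails out before any skb is cooked. */
        if (!inetdev_valid_mtu(fragsize))
                printf("mtu %u rejected (-ENETUNREACH)\n", fragsize);
        return 0;
}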
[1]
refcount_t: saturated; leaking memory. WARNING: CPU: 0 PID: 9464 at lib/refcount.c:22 refcount_warn_saturate+0x138/0x1f0 lib/refcount.c:22 Kernel panic - not syncing: panic_on_warn set ... CPU: 0 PID: 9464 Comm: syz-executor850 Not tainted 5.4.0-syzkaller #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0x197/0x210 lib/dump_stack.c:118 panic+0x2e3/0x75c kernel/panic.c:221 __warn.cold+0x2f/0x3e kernel/panic.c:582 report_bug+0x289/0x300 lib/bug.c:195 fixup_bug arch/x86/kernel/traps.c:174 [inline] fixup_bug arch/x86/kernel/traps.c:169 [inline] do_error_trap+0x11b/0x200 arch/x86/kernel/traps.c:267 do_invalid_op+0x37/0x50 arch/x86/kernel/traps.c:286 invalid_op+0x23/0x30 arch/x86/entry/entry_64.S:1027 RIP: 0010:refcount_warn_saturate+0x138/0x1f0 lib/refcount.c:22 Code: 06 31 ff 89 de e8 c8 f5 e6 fd 84 db 0f 85 6f ff ff ff e8 7b f4 e6 fd 48 c7 c7 e0 71 4f 88 c6 05 56 a6 a4 06 01 e8 c7 a8 b7 fd <0f> 0b e9 50 ff ff ff e8 5c f4 e6 fd 0f b6 1d 3d a6 a4 06 31 ff 89 RSP: 0018:ffff88809689f550 EFLAGS: 00010286 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffffffff815e4336 RDI: ffffed1012d13e9c RBP: ffff88809689f560 R08: ffff88809c50a3c0 R09: fffffbfff15d31b1 R10: fffffbfff15d31b0 R11: ffffffff8ae98d87 R12: 0000000000000001 R13: 0000000000040100 R14: ffff888099041104 R15: ffff888218d96e40 refcount_add include/linux/refcount.h:193 [inline] skb_set_owner_w+0x2b6/0x410 net/core/sock.c:1999 sock_wmalloc+0xf1/0x120 net/core/sock.c:2096 ip_append_page+0x7ef/0x1190 net/ipv4/ip_output.c:1383 udp_sendpage+0x1c7/0x480 net/ipv4/udp.c:1276 inet_sendpage+0xdb/0x150 net/ipv4/af_inet.c:821 kernel_sendpage+0x92/0xf0 net/socket.c:3794 sock_sendpage+0x8b/0xc0 net/socket.c:936 pipe_to_sendpage+0x2da/0x3c0 fs/splice.c:458 splice_from_pipe_feed fs/splice.c:512 [inline] __splice_from_pipe+0x3ee/0x7c0 fs/splice.c:636 splice_from_pipe+0x108/0x170 fs/splice.c:671 generic_splice_sendpage+0x3c/0x50 fs/splice.c:842 do_splice_from fs/splice.c:861 [inline] direct_splice_actor+0x123/0x190 fs/splice.c:1035 splice_direct_to_actor+0x3b4/0xa30 fs/splice.c:990 do_splice_direct+0x1da/0x2a0 fs/splice.c:1078 do_sendfile+0x597/0xd00 fs/read_write.c:1464 __do_sys_sendfile64 fs/read_write.c:1525 [inline] __se_sys_sendfile64 fs/read_write.c:1511 [inline] __x64_sys_sendfile64+0x1dd/0x220 fs/read_write.c:1511 do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294 entry_SYSCALL_64_after_hwframe+0x49/0xbe RIP: 0033:0x441409 Code: e8 ac e8 ff ff 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 eb 08 fc ff c3 66 2e 0f 1f 84 00 00 00 00 RSP: 002b:00007fffb64c4f78 EFLAGS: 00000246 ORIG_RAX: 0000000000000028 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000000000441409 RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000000000005 RBP: 0000000000073b8a R08: 0000000000000010 R09: 0000000000000010 R10: 0000000000010001 R11: 0000000000000246 R12: 0000000000402180 R13: 0000000000402210 R14: 0000000000000000 R15: 0000000000000000 Kernel Offset: disabled Rebooting in 86400 seconds..
Fixes: 1470ddf7f8ce ("inet: Remove explicit write references to sk/inet in ip_append_data") Signed-off-by: Eric Dumazet edumazet@google.com Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- include/linux/netdevice.h | 5 +++++ include/net/ip.h | 5 +++++ net/core/dev.c | 3 ++- net/ipv4/devinet.c | 5 ----- net/ipv4/ip_output.c | 13 ++++++++----- 5 files changed, 20 insertions(+), 11 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 3afa421..42ac916 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -1929,6 +1929,11 @@ struct net_device { unsigned char if_port; unsigned char dma;
+ /* Note : dev->mtu is often read without holding a lock. + * Writers usually hold RTNL. + * It is recommended to use READ_ONCE() to annotate the reads, + * and to use WRITE_ONCE() to annotate the writes. + */ unsigned int mtu; unsigned int min_mtu; unsigned int max_mtu; diff --git a/include/net/ip.h b/include/net/ip.h index 677ee70..7d9e63b 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -694,4 +694,9 @@ static inline void ip_cmsg_recv(struct msghdr *msg, struct sk_buff *skb) int rtm_getroute_parse_ip_proto(struct nlattr *attr, u8 *ip_proto, u8 family, struct netlink_ext_ack *extack);
+static inline bool inetdev_valid_mtu(unsigned int mtu) +{ + return likely(mtu >= IPV4_MIN_MTU); +} + #endif /* _IP_H */ diff --git a/net/core/dev.c b/net/core/dev.c index 91179fe..8ff21d4 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -7595,7 +7595,8 @@ int __dev_set_mtu(struct net_device *dev, int new_mtu) if (ops->ndo_change_mtu) return ops->ndo_change_mtu(dev, new_mtu);
- dev->mtu = new_mtu; + /* Pairs with all the lockless reads of dev->mtu in the stack */ + WRITE_ONCE(dev->mtu, new_mtu); return 0; } EXPORT_SYMBOL(__dev_set_mtu); diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c index d237461..fabece4 100644 --- a/net/ipv4/devinet.c +++ b/net/ipv4/devinet.c @@ -1441,11 +1441,6 @@ static void inetdev_changename(struct net_device *dev, struct in_device *in_dev) } }
-static bool inetdev_valid_mtu(unsigned int mtu) -{ - return mtu >= IPV4_MIN_MTU; -} - static void inetdev_send_gratuitous_arp(struct net_device *dev, struct in_device *in_dev)
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 58bc731..fbf3012 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -1143,15 +1143,18 @@ static int ip_setup_cork(struct sock *sk, struct inet_cork *cork, cork->addr = ipc->addr; }
- /* - * We steal reference to this route, caller should not release it - */ - *rtp = NULL; cork->fragsize = ip_sk_use_pmtu(sk) ? - dst_mtu(&rt->dst) : rt->dst.dev->mtu; + dst_mtu(&rt->dst) : READ_ONCE(rt->dst.dev->mtu); + + if (!inetdev_valid_mtu(cork->fragsize)) + return -ENETUNREACH;
cork->gso_size = ipc->gso_size; + cork->dst = &rt->dst; + /* We stole this route, caller should not release it. */ + *rtp = NULL; + cork->length = 0; cork->ttl = ipc->ttl; cork->tos = ipc->tos;
From: Vladyslav Tarasiuk vladyslavt@mellanox.com
[ Upstream commit 9f104c7736904ac72385bbb48669e0c923ca879b ]
When a user runs a command like 'tc qdisc add dev eth1 root mqprio', a KASAN stack-out-of-bounds warning is emitted. Currently, the NLA_ALIGN macro used in mqprio_dump() provides too large a buffer size as the argument for nla_put() and for the memcpy() down the call stack. The flow looks like this:
 1. nla_put() expects the exact object size as an argument;
 2. later it passes this size to memcpy();
 3. to calculate the correct padding for the SKB, nla_put() applies the NLA_ALIGN macro itself.
Therefore, NLA_ALIGN should not be applied to the nla_put() length parameter; otherwise it leads to an out-of-bounds memory access in memcpy().
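To make the over-read concrete, here is a standalone sketch of the alignment arithmetic (NLA_ALIGNTO/NLA_ALIGN are copied from the netlink uapi header; the struct below is a simplified stand-in, not the real struct tc_mqprio_qopt):

#include <stdio.h>

/* Copied from include/uapi/linux/netlink.h */
#define NLA_ALIGNTO     4
#define NLA_ALIGN(len)  (((len) + NLA_ALIGNTO - 1) & ~(NLA_ALIGNTO - 1))

/* Simplified stand-in; all that matters is that its size is not a
 * multiple of NLA_ALIGNTO.
 */
struct opt_sketch {
        unsigned char num_tc;
        unsigned short count[8];
        unsigned short offset[8];
};

int main(void)
{
        size_t len = sizeof(struct opt_sketch);

        /* nla_put(skb, TCA_OPTIONS, payload_len, &opt) memcpy()s exactly
         * payload_len bytes out of &opt and adds the attribute padding by
         * itself, so rounding the length up before the call makes memcpy()
         * read past the end of the on-stack object.
         */
        printf("sizeof(opt) = %zu, NLA_ALIGN(sizeof(opt)) = %zu -> %zu byte(s) over-read\n",
               len, (size_t)NLA_ALIGN(len), NLA_ALIGN(len) - len);
        return 0;
}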
Fixes: 4e8b86c06269 ("mqprio: Introduce new hardware offload mode and shaper in mqprio") Signed-off-by: Vladyslav Tarasiuk vladyslavt@mellanox.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- net/sched/sch_mqprio.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c index 68ce86c..7f280a5 100644 --- a/net/sched/sch_mqprio.c +++ b/net/sched/sch_mqprio.c @@ -435,7 +435,7 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb) opt.offset[tc] = dev->tc_to_txq[tc].offset; }
- if (nla_put(skb, TCA_OPTIONS, NLA_ALIGN(sizeof(opt)), &opt)) + if (nla_put(skb, TCA_OPTIONS, sizeof(opt), &opt)) goto nla_put_failure;
if ((priv->flags & TC_MQPRIO_F_MODE) &&
From: Nikolay Aleksandrov nikolay@cumulusnetworks.com
[ Upstream commit c4b4c421857dc7b1cf0dccbd738472360ff2cd70 ]
We have an interesting memory leak in the bridge when it is being unregistered while it is a slave to a master device which changes the mac of its slaves on unregister (e.g. bond, team). This is a very unusual setup, but we do end up leaking one fdb entry, because dev_set_mac_address() causes the bridge to insert the new mac address into its table after all fdbs have been flushed: once dellink() on the bridge has finished and NETDEV_UNREGISTER is delivered, the bond/team releases the bridge and calls dev_set_mac_address() to restore its original address, which in turn adds an fdb in the bridge. One fix is to check the bridge dev's reg_state in its ndo_set_mac_address callback and return an error if the bridge is not in NETREG_REGISTERED.
Easy steps to reproduce:
 1. add bond in mode != A/B
 2. add any slave to the bond
 3. add bridge dev as a slave to the bond
 4. destroy the bridge device
Trace: unreferenced object 0xffff888035c4d080 (size 128): comm "ip", pid 4068, jiffies 4296209429 (age 1413.753s) hex dump (first 32 bytes): 41 1d c9 36 80 88 ff ff 00 00 00 00 00 00 00 00 A..6............ d2 19 c9 5e 3f d7 00 00 00 00 00 00 00 00 00 00 ...^?........... backtrace: [<00000000ddb525dc>] kmem_cache_alloc+0x155/0x26f [<00000000633ff1e0>] fdb_create+0x21/0x486 [bridge] [<0000000092b17e9c>] fdb_insert+0x91/0xdc [bridge] [<00000000f2a0f0ff>] br_fdb_change_mac_address+0xb3/0x175 [bridge] [<000000001de02dbd>] br_stp_change_bridge_id+0xf/0xff [bridge] [<00000000ac0e32b1>] br_set_mac_address+0x76/0x99 [bridge] [<000000006846a77f>] dev_set_mac_address+0x63/0x9b [<00000000d30738fc>] __bond_release_one+0x3f6/0x455 [bonding] [<00000000fc7ec01d>] bond_netdev_event+0x2f2/0x400 [bonding] [<00000000305d7795>] notifier_call_chain+0x38/0x56 [<0000000028885d4a>] call_netdevice_notifiers+0x1e/0x23 [<000000008279477b>] rollback_registered_many+0x353/0x6a4 [<0000000018ef753a>] unregister_netdevice_many+0x17/0x6f [<00000000ba854b7a>] rtnl_delete_link+0x3c/0x43 [<00000000adf8618d>] rtnl_dellink+0x1dc/0x20a [<000000009b6395fd>] rtnetlink_rcv_msg+0x23d/0x268
Fixes: 43598813386f ("bridge: add local MAC address to forwarding table (v2)") Reported-by: syzbot+2add91c08eb181fea1bf@syzkaller.appspotmail.com Signed-off-by: Nikolay Aleksandrov nikolay@cumulusnetworks.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- net/bridge/br_device.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c index e682a66..9ce661e 100644 --- a/net/bridge/br_device.c +++ b/net/bridge/br_device.c @@ -246,6 +246,12 @@ static int br_set_mac_address(struct net_device *dev, void *p) if (!is_valid_ether_addr(addr->sa_data)) return -EADDRNOTAVAIL;
+ /* dev_set_mac_addr() can be called by a master device on bridge's + * NETDEV_UNREGISTER, but since it's being destroyed do nothing + */ + if (dev->reg_state != NETREG_REGISTERED) + return -EBUSY; + spin_lock_bh(&br->lock); if (!ether_addr_equal(dev->dev_addr, addr->sa_data)) { /* Mac address will be changed in br_stp_change_bridge_id(). */
From: Alexander Lobakin alobakin@dlink.ru
[ Upstream commit 8bef0af09a5415df761b04fa487a6c34acae74bc ]
Commit 43e665287f93 ("net-next: dsa: fix flow dissection") added an ability to override protocol and network offset during flow dissection for DSA-enabled devices (i.e. controllers shipped as switch CPU ports) in order to fix skb hashing for RPS on Rx path.
However, skb_hash() and the added part of the code can be invoked not only on the Rx path, but also on the Tx path if we have a multi-queued device and:
 - the kernel is running on a UP system, or
 - XPS is not configured.
The call stack in these two cases will be like: dev_queue_xmit() -> __dev_queue_xmit() -> netdev_core_pick_tx() -> netdev_pick_tx() -> skb_tx_hash() -> skb_get_hash().
The problem is that skbs queued for Tx have both network offset and correct protocol already set up even after inserting a CPU tag by DSA tagger, so calling tag_ops->flow_dissect() on this path actually only breaks flow dissection and hashing.
This can be observed by adding debug prints just before and right after tag_ops->flow_dissect() call to the related block of code:
Before the patch:
Rx path (RPS):
[ 19.240001] Rx: proto: 0x00f8, nhoff: 0 /* ETH_P_XDSA */ [ 19.244271] tag_ops->flow_dissect() [ 19.247811] Rx: proto: 0x0800, nhoff: 8 /* ETH_P_IP */
[ 19.215435] Rx: proto: 0x00f8, nhoff: 0 /* ETH_P_XDSA */ [ 19.219746] tag_ops->flow_dissect() [ 19.223241] Rx: proto: 0x0806, nhoff: 8 /* ETH_P_ARP */
[ 18.654057] Rx: proto: 0x00f8, nhoff: 0 /* ETH_P_XDSA */ [ 18.658332] tag_ops->flow_dissect() [ 18.661826] Rx: proto: 0x8100, nhoff: 8 /* ETH_P_8021Q */
Tx path (UP system):
[ 18.759560] Tx: proto: 0x0800, nhoff: 26 /* ETH_P_IP */ [ 18.763933] tag_ops->flow_dissect() [ 18.767485] Tx: proto: 0x920b, nhoff: 34 /* junk */
[ 22.800020] Tx: proto: 0x0806, nhoff: 26 /* ETH_P_ARP */ [ 22.804392] tag_ops->flow_dissect() [ 22.807921] Tx: proto: 0x920b, nhoff: 34 /* junk */
[ 16.898342] Tx: proto: 0x86dd, nhoff: 26 /* ETH_P_IPV6 */ [ 16.902705] tag_ops->flow_dissect() [ 16.906227] Tx: proto: 0x920b, nhoff: 34 /* junk */
After:
Rx path (RPS):
[ 16.520993] Rx: proto: 0x00f8, nhoff: 0 /* ETH_P_XDSA */ [ 16.525260] tag_ops->flow_dissect() [ 16.528808] Rx: proto: 0x0800, nhoff: 8 /* ETH_P_IP */
[ 15.484807] Rx: proto: 0x00f8, nhoff: 0 /* ETH_P_XDSA */ [ 15.490417] tag_ops->flow_dissect() [ 15.495223] Rx: proto: 0x0806, nhoff: 8 /* ETH_P_ARP */
[ 17.134621] Rx: proto: 0x00f8, nhoff: 0 /* ETH_P_XDSA */ [ 17.138895] tag_ops->flow_dissect() [ 17.142388] Rx: proto: 0x8100, nhoff: 8 /* ETH_P_8021Q */
Tx path (UP system):
[ 15.499558] Tx: proto: 0x0800, nhoff: 26 /* ETH_P_IP */
[ 20.664689] Tx: proto: 0x0806, nhoff: 26 /* ETH_P_ARP */
[ 18.565782] Tx: proto: 0x86dd, nhoff: 26 /* ETH_P_IPV6 */
In order to fix that, we can add the check 'proto == htons(ETH_P_XDSA)' to prevent the code from calling tag_ops->flow_dissect() on Tx. I also decided to initialize the 'offset' variable so tagger callbacks can now safely leave it untouched without provoking chaos.
Fixes: 43e665287f93 ("net-next: dsa: fix flow dissection") Signed-off-by: Alexander Lobakin alobakin@dlink.ru Reviewed-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- net/core/flow_dissector.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c index 4362ffe9..994dd15 100644 --- a/net/core/flow_dissector.c +++ b/net/core/flow_dissector.c @@ -630,9 +630,10 @@ bool __skb_flow_dissect(const struct sk_buff *skb, nhoff = skb_network_offset(skb); hlen = skb_headlen(skb); #if IS_ENABLED(CONFIG_NET_DSA) - if (unlikely(skb->dev && netdev_uses_dsa(skb->dev))) { + if (unlikely(skb->dev && netdev_uses_dsa(skb->dev) && + proto == htons(ETH_P_XDSA))) { const struct dsa_device_ops *ops; - int offset; + int offset = 0;
ops = skb->dev->dsa_ptr->tag_ops; if (ops->flow_dissect &&
From: Grygorii Strashko grygorii.strashko@ti.com
[ Upstream commit 51302f77bedab8768b761ed1899c08f89af9e4e2 ]
Currently the RX interrupt is triggered twice every time, because in cpsw_rx_interrupt() it is acknowledged (EOI) first and only then disabled. So there will always be a pending interrupt when the RX interrupt is enabled again in the NAPI handler.
Fix it by first disabling the IRQ and then doing the EOI.
Fixes: 870915feabdc ("drivers: net: cpsw: remove disable_irq/enable_irq as irq can be masked from cpsw itself") Signed-off-by: Grygorii Strashko grygorii.strashko@ti.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/net/ethernet/ti/cpsw.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index 8417d4c..0b7a3eb 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -954,8 +954,8 @@ static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id) { struct cpsw_common *cpsw = dev_id;
- cpdma_ctlr_eoi(cpsw->dma, CPDMA_EOI_RX); writel(0, &cpsw->wr_regs->rx_en); + cpdma_ctlr_eoi(cpsw->dma, CPDMA_EOI_RX);
if (cpsw->quirk_irq) { disable_irq_nosync(cpsw->irqs_table[0]);
From: Dust Li dust.li@linux.alibaba.com
[ Upstream commit 2f23cd42e19c22c24ff0e221089b7b6123b117c5 ]
sch->q.qlen hasn't been set if the subqueue is a NOLOCK qdisc in mq_dump() and mqprio_dump().
Fixes: ce679e8df7ed ("net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mqprio") Signed-off-by: Dust Li dust.li@linux.alibaba.com Signed-off-by: Tony Lu tonylu@linux.alibaba.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- net/sched/sch_mq.c | 1 + net/sched/sch_mqprio.c | 1 + 2 files changed, 2 insertions(+)
diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c index 15c6eb5..c008a31 100644 --- a/net/sched/sch_mq.c +++ b/net/sched/sch_mq.c @@ -158,6 +158,7 @@ static int mq_dump(struct Qdisc *sch, struct sk_buff *skb) __gnet_stats_copy_queue(&sch->qstats, qdisc->cpu_qstats, &qdisc->qstats, qlen); + sch->q.qlen += qlen; } else { sch->q.qlen += qdisc->q.qlen; sch->bstats.bytes += qdisc->bstats.bytes; diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c index 7f280a5..008db8d 100644 --- a/net/sched/sch_mqprio.c +++ b/net/sched/sch_mqprio.c @@ -413,6 +413,7 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb) __gnet_stats_copy_queue(&sch->qstats, qdisc->cpu_qstats, &qdisc->qstats, qlen); + sch->q.qlen += qlen; } else { sch->q.qlen += qdisc->q.qlen; sch->bstats.bytes += qdisc->bstats.bytes;
From: Mian Yousaf Kaukab ykaukab@suse.de
[ Upstream commit a350d2e7adbb57181d33e3aa6f0565632747feaa ]
Since commit 2b3e88ea6528 ("net: phy: improve phy state checking") phy_start_aneg() expects phy state to be >= PHY_UP. Call phy_start() before calling phy_start_aneg() during probe so that autonegotiation is initiated.
As phy_start() takes care of calling phy_start_aneg(), drop the explicit call to phy_start_aneg().
Network fails without this patch on Octeon TX.
Fixes: 2b3e88ea6528 ("net: phy: improve phy state checking") Signed-off-by: Mian Yousaf Kaukab ykaukab@suse.de Reviewed-by: Andrew Lunn andrew@lunn.ch Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/net/ethernet/cavium/thunder/thunder_bgx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c index e337da6..8ae28f8 100644 --- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c +++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c @@ -1118,7 +1118,7 @@ static int bgx_lmac_enable(struct bgx *bgx, u8 lmacid) phy_interface_mode(lmac->lmac_type))) return -ENODEV;
- phy_start_aneg(lmac->phydev); + phy_start(lmac->phydev); return 0; }
From: Aaron Conole aconole@redhat.com
[ Upstream commit 5d50aa83e2c8e91ced2cca77c198b468ca9210f4 ]
The openvswitch module shares a common conntrack and NAT infrastructure exposed via netfilter. It's possible that a packet needs both SNAT and DNAT manipulation, due to e.g. tuple collision. Netfilter can support this because it runs through the NAT table twice - once on ingress and again after egress. The openvswitch module doesn't have such capability.
Like the netfilter hook infrastructure, we should run through NAT twice to keep the symmetry.
Fixes: 05752523e565 ("openvswitch: Interface with NAT.") Signed-off-by: Aaron Conole aconole@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- net/openvswitch/conntrack.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c index 781a04a..5a59bdd 100644 --- a/net/openvswitch/conntrack.c +++ b/net/openvswitch/conntrack.c @@ -897,6 +897,17 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key, } err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype);
+ if (err == NF_ACCEPT && + ct->status & IPS_SRC_NAT && ct->status & IPS_DST_NAT) { + if (maniptype == NF_NAT_MANIP_SRC) + maniptype = NF_NAT_MANIP_DST; + else + maniptype = NF_NAT_MANIP_SRC; + + err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, + maniptype); + } + /* Mark NAT done if successful and update the flow key. */ if (err == NF_ACCEPT) ovs_nat_update_key(key, skb, maniptype);
From: Eric Dumazet edumazet@google.com
[ Upstream commit 9424e2e7ad93ffffa88f882c9bc5023570904b55 ]
Back in 2008, Adam Langley fixed the corner case of packets for flows having all of the following options: MD5, TS, and SACK.
Since MD5 needs 20 bytes, and TS needs 12 bytes, no sack block can be cooked from the remaining 8 bytes.
tcp_established_options() correctly sets opts->num_sack_blocks to zero, but returns 36 instead of 32.
This means TCP cooks packets with 4 extra bytes at the end of the options, containing uninitialized bytes.
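For illustration, the arithmetic of this corner case as a standalone sketch (option length constants copied from the kernel's TCP headers; not part of the patch):

#include <stdio.h>

#define MAX_TCP_OPTION_SPACE       40
#define TCPOLEN_MD5SIG_ALIGNED     20
#define TCPOLEN_TSTAMP_ALIGNED     12
#define TCPOLEN_SACK_BASE_ALIGNED   4
#define TCPOLEN_SACK_PERBLOCK       8

int main(void)
{
        unsigned int size = TCPOLEN_MD5SIG_ALIGNED + TCPOLEN_TSTAMP_ALIGNED;     /* 32 */
        unsigned int remaining = MAX_TCP_OPTION_SPACE - size;                    /* 8  */
        unsigned int num_sack_blocks =
                (remaining - TCPOLEN_SACK_BASE_ALIGNED) / TCPOLEN_SACK_PERBLOCK; /* 0  */

        /* Before the fix: the SACK base was added even with zero blocks. */
        unsigned int old_size = size + TCPOLEN_SACK_BASE_ALIGNED +
                                num_sack_blocks * TCPOLEN_SACK_PERBLOCK;         /* 36 */
        /* After the fix: the SACK option is skipped when no block fits. */
        unsigned int new_size = num_sack_blocks ?
                                size + TCPOLEN_SACK_BASE_ALIGNED +
                                num_sack_blocks * TCPOLEN_SACK_PERBLOCK : size;  /* 32 */

        printf("old=%u new=%u\n", old_size, new_size);
        return 0;
}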
Fixes: 33ad798c924b ("tcp: options clean up") Signed-off-by: Eric Dumazet edumazet@google.com Reported-by: syzbot syzkaller@googlegroups.com Acked-by: Neal Cardwell ncardwell@google.com Acked-by: Soheil Hassas Yeganeh soheil@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- net/ipv4/tcp_output.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 8971cc1..ad787e7 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -740,8 +740,9 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb min_t(unsigned int, eff_sacks, (remaining - TCPOLEN_SACK_BASE_ALIGNED) / TCPOLEN_SACK_PERBLOCK); - size += TCPOLEN_SACK_BASE_ALIGNED + - opts->num_sack_blocks * TCPOLEN_SACK_PERBLOCK; + if (likely(opts->num_sack_blocks)) + size += TCPOLEN_SACK_BASE_ALIGNED + + opts->num_sack_blocks * TCPOLEN_SACK_PERBLOCK; }
return size;
From: Taehee Yoo ap420073@gmail.com
[ Upstream commit 9cf1cd8ee3ee09ef2859017df2058e2f53c5347f ]
In order to set/get/dump, tipc uses the generic netlink infrastructure. So, when the tipc module is inserted, the init function calls genl_register_family(). After genl_register_family(), the set/get/dump commands are immediately allowed, and these callbacks internally use net_generic. net_generic is allocated by register_pernet_device(), but that is called after genl_register_family() in the __init function. So these callbacks could end up using an uninitialized net_generic.
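As a side note, a minimal standalone illustration of the ordering principle the fix applies, i.e. expose externally reachable interfaces only after everything they depend on is initialized, and tear down in reverse order (function names below are hypothetical stand-ins, not tipc symbols; the real change is in the diff further down):

#include <stdio.h>

/* Hypothetical stand-ins for the real setup/teardown steps. */
static int setup_internal_state(void)       { puts("pernet/sysctl ready"); return 0; }
static void teardown_internal_state(void)   { puts("pernet/sysctl gone"); }
static int register_external_iface(void)    { puts("netlink family visible"); return 0; }
static void unregister_external_iface(void) { puts("netlink family hidden"); }

static int example_init(void)
{
        int err = setup_internal_state();

        if (err)
                return err;

        /* Only now make the module reachable from userspace. */
        err = register_external_iface();
        if (err)
                teardown_internal_state();
        return err;
}

static void example_exit(void)
{
        /* Reverse order: stop new requests before freeing their state. */
        unregister_external_iface();
        teardown_internal_state();
}

int main(void)
{
        if (!example_init())
                example_exit();
        return 0;
}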
Test commands:
#SHELL1
while :
do
    modprobe tipc
    modprobe -rv tipc
done
#SHELL2
while :
do
    tipc link list
done
Splat looks like: [ 59.616322][ T2788] kasan: CONFIG_KASAN_INLINE enabled [ 59.617234][ T2788] kasan: GPF could be caused by NULL-ptr deref or user memory access [ 59.618398][ T2788] general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI [ 59.619389][ T2788] CPU: 3 PID: 2788 Comm: tipc Not tainted 5.4.0+ #194 [ 59.620231][ T2788] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006 [ 59.621428][ T2788] RIP: 0010:tipc_bcast_get_broadcast_mode+0x131/0x310 [tipc] [ 59.622379][ T2788] Code: c7 c6 ef 8b 38 c0 65 ff 0d 84 83 c9 3f e8 d7 a5 f2 e3 48 8d bb 38 11 00 00 48 b8 00 00 00 00 [ 59.622550][ T2780] NET: Registered protocol family 30 [ 59.624627][ T2788] RSP: 0018:ffff88804b09f578 EFLAGS: 00010202 [ 59.624630][ T2788] RAX: dffffc0000000000 RBX: 0000000000000011 RCX: 000000008bc66907 [ 59.624631][ T2788] RDX: 0000000000000229 RSI: 000000004b3cf4cc RDI: 0000000000001149 [ 59.624633][ T2788] RBP: ffff88804b09f588 R08: 0000000000000003 R09: fffffbfff4fb3df1 [ 59.624635][ T2788] R10: fffffbfff50318f8 R11: ffff888066cadc18 R12: ffffffffa6cc2f40 [ 59.624637][ T2788] R13: 1ffff11009613eba R14: ffff8880662e9328 R15: ffff8880662e9328 [ 59.624639][ T2788] FS: 00007f57d8f7b740(0000) GS:ffff88806cc00000(0000) knlGS:0000000000000000 [ 59.624645][ T2788] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 59.625875][ T2780] tipc: Started in single node mode [ 59.626128][ T2788] CR2: 00007f57d887a8c0 CR3: 000000004b140002 CR4: 00000000000606e0 [ 59.633991][ T2788] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 59.635195][ T2788] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 59.636478][ T2788] Call Trace: [ 59.637025][ T2788] tipc_nl_add_bc_link+0x179/0x1470 [tipc] [ 59.638219][ T2788] ? lock_downgrade+0x6e0/0x6e0 [ 59.638923][ T2788] ? __tipc_nl_add_link+0xf90/0xf90 [tipc] [ 59.639533][ T2788] ? tipc_nl_node_dump_link+0x318/0xa50 [tipc] [ 59.640160][ T2788] ? mutex_lock_io_nested+0x1380/0x1380 [ 59.640746][ T2788] tipc_nl_node_dump_link+0x4fd/0xa50 [tipc] [ 59.641356][ T2788] ? tipc_nl_node_reset_link_stats+0x340/0x340 [tipc] [ 59.642088][ T2788] ? __skb_ext_del+0x270/0x270 [ 59.642594][ T2788] genl_lock_dumpit+0x85/0xb0 [ 59.643050][ T2788] netlink_dump+0x49c/0xed0 [ 59.643529][ T2788] ? __netlink_sendskb+0xc0/0xc0 [ 59.644044][ T2788] ? __netlink_dump_start+0x190/0x800 [ 59.644617][ T2788] ? __mutex_unlock_slowpath+0xd0/0x670 [ 59.645177][ T2788] __netlink_dump_start+0x5a0/0x800 [ 59.645692][ T2788] genl_rcv_msg+0xa75/0xe90 [ 59.646144][ T2788] ? __lock_acquire+0xdfe/0x3de0 [ 59.646692][ T2788] ? genl_family_rcv_msg_attrs_parse+0x320/0x320 [ 59.647340][ T2788] ? genl_lock_dumpit+0xb0/0xb0 [ 59.647821][ T2788] ? genl_unlock+0x20/0x20 [ 59.648290][ T2788] ? genl_parallel_done+0xe0/0xe0 [ 59.648787][ T2788] ? find_held_lock+0x39/0x1d0 [ 59.649276][ T2788] ? genl_rcv+0x15/0x40 [ 59.649722][ T2788] ? lock_contended+0xcd0/0xcd0 [ 59.650296][ T2788] netlink_rcv_skb+0x121/0x350 [ 59.650828][ T2788] ? genl_family_rcv_msg_attrs_parse+0x320/0x320 [ 59.651491][ T2788] ? netlink_ack+0x940/0x940 [ 59.651953][ T2788] ? lock_acquire+0x164/0x3b0 [ 59.652449][ T2788] genl_rcv+0x24/0x40 [ 59.652841][ T2788] netlink_unicast+0x421/0x600 [ ... ]
Fixes: 7e4369057806 ("tipc: fix a slab object leak") Fixes: a62fbccecd62 ("tipc: make subscriber server support net namespace") Signed-off-by: Taehee Yoo ap420073@gmail.com Acked-by: Jon Maloy jon.maloy@ericsson.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- net/tipc/core.c | 29 +++++++++++++++-------------- 1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/net/tipc/core.c b/net/tipc/core.c index eb0f701..49bb9fc 100644 --- a/net/tipc/core.c +++ b/net/tipc/core.c @@ -120,14 +120,6 @@ static int __init tipc_init(void) sysctl_tipc_rmem[1] = RCVBUF_DEF; sysctl_tipc_rmem[2] = RCVBUF_MAX;
- err = tipc_netlink_start(); - if (err) - goto out_netlink; - - err = tipc_netlink_compat_start(); - if (err) - goto out_netlink_compat; - err = tipc_register_sysctl(); if (err) goto out_sysctl; @@ -148,8 +140,21 @@ static int __init tipc_init(void) if (err) goto out_bearer;
+ err = tipc_netlink_start(); + if (err) + goto out_netlink; + + err = tipc_netlink_compat_start(); + if (err) + goto out_netlink_compat; + pr_info("Started in single node mode\n"); return 0; + +out_netlink_compat: + tipc_netlink_stop(); +out_netlink: + tipc_bearer_cleanup(); out_bearer: unregister_pernet_device(&tipc_topsrv_net_ops); out_pernet_topsrv: @@ -159,22 +164,18 @@ static int __init tipc_init(void) out_pernet: tipc_unregister_sysctl(); out_sysctl: - tipc_netlink_compat_stop(); -out_netlink_compat: - tipc_netlink_stop(); -out_netlink: pr_err("Unable to start in single node mode\n"); return err; }
static void __exit tipc_exit(void) { + tipc_netlink_compat_stop(); + tipc_netlink_stop(); tipc_bearer_cleanup(); unregister_pernet_device(&tipc_topsrv_net_ops); tipc_socket_stop(); unregister_pernet_device(&tipc_net_ops); - tipc_netlink_stop(); - tipc_netlink_compat_stop(); tipc_unregister_sysctl();
pr_info("Deactivated\n");
From: Huy Nguyen huyn@mellanox.com
[ Upstream commit 73e6551699a32fac703ceea09214d6580edcf2d5 ]
When the user changes the prio2buffer mapping while global pause is enabled, the mlx5 driver incorrectly sets all active buffers (buffers that have at least one priority mapped) to lossy.
Solution: if global pause is enabled, set all the active buffers to lossless in the prio2buffer command. Also, add an error message for when the buffer size is not enough to meet the xoff threshold.
Fixes: 0696d60853d5 ("net/mlx5e: Receive buffer configuration") Signed-off-by: Huy Nguyen huyn@mellanox.com Signed-off-by: Saeed Mahameed saeedm@mellanox.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- .../ethernet/mellanox/mlx5/core/en/port_buffer.c | 27 ++++++++++++++++++++-- 1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c index 4ab0d03..28d56e4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c @@ -155,8 +155,11 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer, }
if (port_buffer->buffer[i].size < - (xoff + max_mtu + (1 << MLX5E_BUFFER_CELL_SHIFT))) + (xoff + max_mtu + (1 << MLX5E_BUFFER_CELL_SHIFT))) { + pr_err("buffer_size[%d]=%d is not enough for lossless buffer\n", + i, port_buffer->buffer[i].size); return -ENOMEM; + }
port_buffer->buffer[i].xoff = port_buffer->buffer[i].size - xoff; port_buffer->buffer[i].xon = @@ -232,6 +235,26 @@ static int update_buffer_lossy(unsigned int max_mtu, return 0; }
+static int fill_pfc_en(struct mlx5_core_dev *mdev, u8 *pfc_en) +{ + u32 g_rx_pause, g_tx_pause; + int err; + + err = mlx5_query_port_pause(mdev, &g_rx_pause, &g_tx_pause); + if (err) + return err; + + /* If global pause enabled, set all active buffers to lossless. + * Otherwise, check PFC setting. + */ + if (g_rx_pause || g_tx_pause) + *pfc_en = 0xff; + else + err = mlx5_query_port_pfc(mdev, pfc_en, NULL); + + return err; +} + #define MINIMUM_MAX_MTU 9216 int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv, u32 change, unsigned int mtu, @@ -277,7 +300,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
if (change & MLX5E_PORT_BUFFER_PRIO2BUFFER) { update_prio2buffer = true; - err = mlx5_query_port_pfc(priv->mdev, &curr_pfc_en, NULL); + err = fill_pfc_en(priv->mdev, &curr_pfc_en); if (err) return err;
From: Guillaume Nault gnault@redhat.com
[ Upstream commit 04d26e7b159a396372646a480f4caa166d1b6720 ]
If no synflood happens for a long enough period of time, then the synflood timestamp isn't refreshed and jiffies can advance so much that time_after32() can't accurately compare them any more.
Therefore, we can end up in a situation where time_after32(now, last_overflow + HZ) returns false, just because these two values are too far apart. In that case, the synflood timestamp isn't updated as it should be, which can trick tcp_synq_no_recent_overflow() into rejecting valid syncookies.
For example, let's consider the following scenario on a system with HZ=1000:
* The synflood timestamp is 0, either because that's the timestamp of the last synflood or, more commonly, because we're working with a freshly created socket.
* We receive a new SYN, which triggers synflood protection. Let's say that this happens when jiffies == 2147484649 (that is, 'synflood timestamp' + HZ + 2^31 + 1).
* Then tcp_synq_overflow() doesn't update the synflood timestamp, because time_after32(2147484649, 1000) returns false. With:
  - 2147484649: the value of jiffies, aka. 'now'.
  - 1000: the value of 'last_overflow' + HZ.
* A bit later, we receive the ACK completing the 3WHS. But cookie_v[46]_check() rejects it because tcp_synq_no_recent_overflow() says that we're not under synflood. That's because time_after32(2147484649, 120000) returns false. With:
  - 2147484649: the value of jiffies, aka. 'now'.
  - 120000: the value of 'last_overflow' + TCP_SYNCOOKIE_VALID.
Of course, in reality jiffies would have increased a bit, but this condition will last for the next 119 seconds, which is far enough to accommodate jiffies' growth.
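The numbers above can be checked with a standalone sketch using the time_after32()/time_between32() definitions from include/linux/time.h (copied below); the actual change is described next and shown in the diff further down:

#include <stdio.h>
#include <stdint.h>

typedef uint32_t u32;
typedef int32_t s32;

/* Copied from include/linux/time.h */
#define time_after32(a, b)      ((s32)((u32)(b) - (u32)(a)) < 0)
#define time_between32(t, l, h) ((u32)(h) - (u32)(l) >= (u32)(t) - (u32)(l))

int main(void)
{
        u32 HZ = 1000;
        u32 last_overflow = 0;          /* stale synflood timestamp */
        u32 now = 2147484649u;          /* last_overflow + HZ + 2^31 + 1 */

        /* Old check: false, so the stale timestamp is (wrongly) kept. */
        printf("time_after32(now, last + HZ)         = %d\n",
               time_after32(now, last_overflow + HZ));

        /* New check: 'now' is outside [last, last + HZ], so the negated
         * condition in the fix refreshes the timestamp.
         */
        printf("time_between32(now, last, last + HZ) = %d\n",
               time_between32(now, last_overflow, last_overflow + HZ));
        return 0;
}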
Fix this by updating the overflow timestamp whenever jiffies isn't within the [last_overflow, last_overflow + HZ] range. That shouldn't have any performance impact since the update still happens at most once per second.
Now we're guaranteed to have fresh timestamps while under synflood, so tcp_synq_no_recent_overflow() can safely use it with time_after32() in such situations.
Stale timestamps can still make tcp_synq_no_recent_overflow() return the wrong verdict when not under synflood. This will be handled in the next patch.
For 64-bit architectures, the problem was introduced with the conversion of ->tw_ts_recent_stamp to a 32-bit integer by commit cca9bab1b72c ("tcp: use monotonic timestamps for PAWS"). The problem has always been there on 32-bit architectures.
Fixes: cca9bab1b72c ("tcp: use monotonic timestamps for PAWS") Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Guillaume Nault gnault@redhat.com Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- include/linux/time.h | 13 +++++++++++++ include/net/tcp.h | 5 +++-- 2 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/include/linux/time.h b/include/linux/time.h index 27d83fd..5f3e499 100644 --- a/include/linux/time.h +++ b/include/linux/time.h @@ -96,4 +96,17 @@ static inline bool itimerspec64_valid(const struct itimerspec64 *its) */ #define time_after32(a, b) ((s32)((u32)(b) - (u32)(a)) < 0) #define time_before32(b, a) time_after32(a, b) + +/** + * time_between32 - check if a 32-bit timestamp is within a given time range + * @t: the time which may be within [l,h] + * @l: the lower bound of the range + * @h: the higher bound of the range + * + * time_before32(t, l, h) returns true if @l <= @t <= @h. All operands are + * treated as 32-bit integers. + * + * Equivalent to !(time_before32(@t, @l) || time_after32(@t, @h)). + */ +#define time_between32(t, l, h) ((u32)(h) - (u32)(l) >= (u32)(t) - (u32)(l)) #endif diff --git a/include/net/tcp.h b/include/net/tcp.h index 2bcccfa..52a3945 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -485,14 +485,15 @@ static inline void tcp_synq_overflow(const struct sock *sk) reuse = rcu_dereference(sk->sk_reuseport_cb); if (likely(reuse)) { last_overflow = READ_ONCE(reuse->synq_overflow_ts); - if (time_after32(now, last_overflow + HZ)) + if (!time_between32(now, last_overflow, + last_overflow + HZ)) WRITE_ONCE(reuse->synq_overflow_ts, now); return; } }
last_overflow = tcp_sk(sk)->rx_opt.ts_recent_stamp; - if (time_after32(now, last_overflow + HZ)) + if (!time_between32(now, last_overflow, last_overflow + HZ)) tcp_sk(sk)->rx_opt.ts_recent_stamp = now; }
From: Guillaume Nault gnault@redhat.com
[ Upstream commit cb44a08f8647fd2e8db5cc9ac27cd8355fa392d8 ]
When no synflood occurs, the synflood timestamp isn't updated. Therefore it can be so old that time_after32() can consider it to be in the future.
That's a problem for tcp_synq_no_recent_overflow() as it may report that a recent overflow occurred while, in fact, it's just that jiffies has grown past 'last_overflow' + TCP_SYNCOOKIE_VALID + 2^31.
Spurious detection of recent overflows leads to extra syncookie verification in cookie_v[46]_check(). At that point, the verification should fail and the packet be dropped. But we should have dropped the packet earlier as we didn't even send a syncookie.
Let's refine tcp_synq_no_recent_overflow() to report a recent overflow only if jiffies is within the [last_overflow, last_overflow + TCP_SYNCOOKIE_VALID] interval. This way, no spurious recent overflow is reported when jiffies wraps and 'last_overflow' becomes in the future from the point of view of time_after32().
However, if jiffies wraps and enters the [last_overflow, last_overflow + TCP_SYNCOOKIE_VALID] interval (with 'last_overflow' being a stale synflood timestamp), then tcp_synq_no_recent_overflow() still erroneously reports an overflow. In such cases, we have to rely on syncookie verification to drop the packet. We unfortunately have no way to differentiate between a fresh and a stale syncookie timestamp.
In practice, using last_overflow as lower bound is problematic. If the synflood timestamp is concurrently updated between the time we read jiffies and the moment we store the timestamp in 'last_overflow', then 'now' becomes smaller than 'last_overflow' and tcp_synq_no_recent_overflow() returns true, potentially dropping a valid syncookie.
Reading jiffies after loading the timestamp could fix the problem, but that'd require a memory barrier. Let's just accommodate potential timestamp growth instead and extend the interval using 'last_overflow - HZ' as lower bound.
Signed-off-by: Guillaume Nault gnault@redhat.com Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- include/net/tcp.h | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h index 52a3945..7be3d7f 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -509,13 +509,23 @@ static inline bool tcp_synq_no_recent_overflow(const struct sock *sk) reuse = rcu_dereference(sk->sk_reuseport_cb); if (likely(reuse)) { last_overflow = READ_ONCE(reuse->synq_overflow_ts); - return time_after32(now, last_overflow + - TCP_SYNCOOKIE_VALID); + return !time_between32(now, last_overflow - HZ, + last_overflow + + TCP_SYNCOOKIE_VALID); } }
last_overflow = tcp_sk(sk)->rx_opt.ts_recent_stamp; - return time_after32(now, last_overflow + TCP_SYNCOOKIE_VALID); + + /* If last_overflow <= jiffies <= last_overflow + TCP_SYNCOOKIE_VALID, + * then we're under synflood. However, we have to use + * 'last_overflow - HZ' as lower bound. That's because a concurrent + * tcp_synq_overflow() could update .ts_recent_stamp after we read + * jiffies but before we store .ts_recent_stamp into last_overflow, + * which could lead to rejecting a valid syncookie. + */ + return !time_between32(now, last_overflow - HZ, + last_overflow + TCP_SYNCOOKIE_VALID); }
static inline u32 tcp_cookie_time(void)
From: Guillaume Nault gnault@redhat.com
[ Upstream commit 721c8dafad26ccfa90ff659ee19755e3377b829d ]
Syncookies borrow the ->rx_opt.ts_recent_stamp field to store the timestamp of the last synflood. Protect them with READ_ONCE() and WRITE_ONCE() since reads and writes aren't serialised.
Use of .rx_opt.ts_recent_stamp for storing the synflood timestamp was introduced by a0f82f64e269 ("syncookies: remove last_synq_overflow from struct tcp_sock"). But unprotected accesses were already there when timestamp was stored in .last_synq_overflow.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Guillaume Nault gnault@redhat.com Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- include/net/tcp.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h index 7be3d7f..2ff638f 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -492,9 +492,9 @@ static inline void tcp_synq_overflow(const struct sock *sk) } }
- last_overflow = tcp_sk(sk)->rx_opt.ts_recent_stamp; + last_overflow = READ_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp); if (!time_between32(now, last_overflow, last_overflow + HZ)) - tcp_sk(sk)->rx_opt.ts_recent_stamp = now; + WRITE_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp, now); }
/* syncookies: no recent synqueue overflow on this listening socket? */ @@ -515,7 +515,7 @@ static inline bool tcp_synq_no_recent_overflow(const struct sock *sk) } }
- last_overflow = tcp_sk(sk)->rx_opt.ts_recent_stamp; + last_overflow = READ_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp);
/* If last_overflow <= jiffies <= last_overflow + TCP_SYNCOOKIE_VALID, * then we're under synflood. However, we have to use
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
This reverts commit 64694b276d74c653051637caa4bfa5e8c27b30ad which is commit 7faa313f05cad184e8b17750f0cbe5216ac6debb upstream.
Turns out one of the prerequisite patches wasn't in 4.19.y, so this patch didn't make sense. So let's revert it.
Reported-by: Steven Rostedt rostedt@goodmis.org Reported-by: Will Deacon will@kernel.org Cc: Ard Biesheuvel ard.biesheuvel@linaro.org Cc: Kevin Hilman khilman@baylibre.com Cc: Sasha Levin sashal@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Conflicts: arch/arm64/kernel/entry.S [yyl: adjust context]
Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/include/asm/assembler.h | 8 +++++--- arch/arm64/kernel/entry.S | 8 +++++--- 2 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index c9a7e2e..becb545 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -675,9 +675,11 @@ .macro if_will_cond_yield_neon #ifdef CONFIG_PREEMPT get_thread_info x0 - ldr x0, [x0, #TSK_TI_PREEMPT] - sub x0, x0, #PREEMPT_DISABLE_OFFSET - cbz x0, .Lyield_@ + ldr w1, [x0, #TSK_TI_PREEMPT] + ldr x0, [x0, #TSK_TI_FLAGS] + cmp w1, #PREEMPT_DISABLE_OFFSET + csel x0, x0, xzr, eq + tbnz x0, #TIF_NEED_RESCHED, .Lyield_@ // needs rescheduling? /* fall through to endif_yield_neon */ .subsection 1 .Lyield_@ : diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 8e25e0e..e577599 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -681,16 +681,18 @@ el1_irq: irq_handler
#ifdef CONFIG_PREEMPT - ldr x24, [tsk, #TSK_TI_PREEMPT] // get preempt count + ldr w24, [tsk, #TSK_TI_PREEMPT] // get preempt count alternative_if ARM64_HAS_IRQ_PRIO_MASKING /* * DA_F were cleared at start of handling. If anything is set in DAIF, * we come back from an NMI, so skip preemption */ mrs x0, daif - orr x24, x24, x0 + orr w24, w24, w0 alternative_else_nop_endif - cbnz x24, 1f // preempt count != 0 || NMI return path + cbnz w24, 1f // preempt count != 0 || NMI return path + ldr x0, [tsk, #TSK_TI_FLAGS] // get flags + tbz x0, #TIF_NEED_RESCHED, 1f // needs rescheduling? bl el1_preempt 1: #endif
From: Chaotian Jing chaotian.jing@mediatek.com
commit 3869468e0c4800af52bfe1e0b72b338dcdae2cfc upstream.
To prepare for more users of card_busy_detect(), let's drop the struct request * in-parameter and convert to logging the error message via dev_err() instead of pr_err().
Signed-off-by: Chaotian Jing chaotian.jing@mediatek.com Reviewed-by: Avri Altman avri.altman@wdc.com Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/mmc/core/block.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 527ab15..2263121 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -982,7 +982,7 @@ static inline bool mmc_blk_in_tran_state(u32 status) }
static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms, - struct request *req, u32 *resp_errs) + u32 *resp_errs) { unsigned long timeout = jiffies + msecs_to_jiffies(timeout_ms); int err = 0; @@ -993,8 +993,8 @@ static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms,
err = __mmc_send_status(card, &status, 5); if (err) { - pr_err("%s: error %d requesting status\n", - req->rq_disk->disk_name, err); + dev_err(mmc_dev(card->host), + "error %d requesting status\n", err); return err; }
@@ -1007,9 +1007,9 @@ static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms, * leaves the program state. */ if (done) { - pr_err("%s: Card stuck in wrong state! %s %s status: %#x\n", - mmc_hostname(card->host), - req->rq_disk->disk_name, __func__, status); + dev_err(mmc_dev(card->host), + "Card stuck in wrong state! %s status: %#x\n", + __func__, status); return -ETIMEDOUT; }
@@ -1678,7 +1678,7 @@ static int mmc_blk_fix_state(struct mmc_card *card, struct request *req)
mmc_blk_send_stop(card, timeout);
- err = card_busy_detect(card, timeout, req, NULL); + err = card_busy_detect(card, timeout, NULL);
mmc_retune_release(card->host);
@@ -1902,7 +1902,7 @@ static int mmc_blk_card_busy(struct mmc_card *card, struct request *req) if (mmc_host_is_spi(card->host) || rq_data_dir(req) == READ) return 0;
- err = card_busy_detect(card, MMC_BLK_TIMEOUT_MS, req, &status); + err = card_busy_detect(card, MMC_BLK_TIMEOUT_MS, &status);
/* * Do not assume data transferred correctly if there are any error bits
From: Chaotian Jing chaotian.jing@mediatek.com
commit a0d4c7eb71dd08a89ad631177bb0cbbabd598f84 upstream.
MMC IOCTLS with R1B responses may cause the card to enter the busy state, which means it's not ready to receive a new request. To prevent new requests from being sent to the card, use a CMD13 polling loop to verify that the card returns to the transfer state, before completing the request.
Signed-off-by: Chaotian Jing chaotian.jing@mediatek.com Reviewed-by: Avri Altman avri.altman@wdc.com Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/mmc/core/block.c | 147 ++++++++++++++++++----------------------------- 1 file changed, 55 insertions(+), 92 deletions(-)
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 2263121..60eac66 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -409,38 +409,6 @@ static int mmc_blk_ioctl_copy_to_user(struct mmc_ioc_cmd __user *ic_ptr, return 0; }
-static int ioctl_rpmb_card_status_poll(struct mmc_card *card, u32 *status, - u32 retries_max) -{ - int err; - u32 retry_count = 0; - - if (!status || !retries_max) - return -EINVAL; - - do { - err = __mmc_send_status(card, status, 5); - if (err) - break; - - if (!R1_STATUS(*status) && - (R1_CURRENT_STATE(*status) != R1_STATE_PRG)) - break; /* RPMB programming operation complete */ - - /* - * Rechedule to give the MMC device a chance to continue - * processing the previous command without being polled too - * frequently. - */ - usleep_range(1000, 5000); - } while (++retry_count < retries_max); - - if (retry_count == retries_max) - err = -EPERM; - - return err; -} - static int ioctl_do_sanitize(struct mmc_card *card) { int err; @@ -469,6 +437,58 @@ static int ioctl_do_sanitize(struct mmc_card *card) return err; }
+static inline bool mmc_blk_in_tran_state(u32 status) +{ + /* + * Some cards mishandle the status bits, so make sure to check both the + * busy indication and the card state. + */ + return status & R1_READY_FOR_DATA && + (R1_CURRENT_STATE(status) == R1_STATE_TRAN); +} + +static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms, + u32 *resp_errs) +{ + unsigned long timeout = jiffies + msecs_to_jiffies(timeout_ms); + int err = 0; + u32 status; + + do { + bool done = time_after(jiffies, timeout); + + err = __mmc_send_status(card, &status, 5); + if (err) { + dev_err(mmc_dev(card->host), + "error %d requesting status\n", err); + return err; + } + + /* Accumulate any response error bits seen */ + if (resp_errs) + *resp_errs |= status; + + /* + * Timeout if the device never becomes ready for data and never + * leaves the program state. + */ + if (done) { + dev_err(mmc_dev(card->host), + "Card stuck in wrong state! %s status: %#x\n", + __func__, status); + return -ETIMEDOUT; + } + + /* + * Some cards mishandle the status bits, + * so make sure to check both the busy + * indication and the card state. + */ + } while (!mmc_blk_in_tran_state(status)); + + return err; +} + static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md, struct mmc_blk_ioc_data *idata) { @@ -478,7 +498,6 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md, struct scatterlist sg; int err; unsigned int target_part; - u32 status = 0;
if (!card || !md || !idata) return -EINVAL; @@ -612,16 +631,12 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
memcpy(&(idata->ic.response), cmd.resp, sizeof(cmd.resp));
- if (idata->rpmb) { + if (idata->rpmb || (cmd.flags & MMC_RSP_R1B)) { /* - * Ensure RPMB command has completed by polling CMD13 + * Ensure RPMB/R1B command has completed by polling CMD13 * "Send Status". */ - err = ioctl_rpmb_card_status_poll(card, &status, 5); - if (err) - dev_err(mmc_dev(card->host), - "%s: Card Status=0x%08X, error %d\n", - __func__, status, err); + err = card_busy_detect(card, MMC_BLK_TIMEOUT_MS, NULL); }
return err; @@ -971,58 +986,6 @@ static unsigned int mmc_blk_data_timeout_ms(struct mmc_host *host, return ms; }
-static inline bool mmc_blk_in_tran_state(u32 status) -{ - /* - * Some cards mishandle the status bits, so make sure to check both the - * busy indication and the card state. - */ - return status & R1_READY_FOR_DATA && - (R1_CURRENT_STATE(status) == R1_STATE_TRAN); -} - -static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms, - u32 *resp_errs) -{ - unsigned long timeout = jiffies + msecs_to_jiffies(timeout_ms); - int err = 0; - u32 status; - - do { - bool done = time_after(jiffies, timeout); - - err = __mmc_send_status(card, &status, 5); - if (err) { - dev_err(mmc_dev(card->host), - "error %d requesting status\n", err); - return err; - } - - /* Accumulate any response error bits seen */ - if (resp_errs) - *resp_errs |= status; - - /* - * Timeout if the device never becomes ready for data and never - * leaves the program state. - */ - if (done) { - dev_err(mmc_dev(card->host), - "Card stuck in wrong state! %s status: %#x\n", - __func__, status); - return -ETIMEDOUT; - } - - /* - * Some cards mishandle the status bits, - * so make sure to check both the busy - * indication and the card state. - */ - } while (!mmc_blk_in_tran_state(status)); - - return err; -} - static int mmc_blk_reset(struct mmc_blk_data *md, struct mmc_host *host, int type) {
From: Dexuan Cui decui@microsoft.com
commit f2c33ccacb2d4bbeae2a255a7ca0cbfd03017b7c upstream.
pci_pm_thaw_noirq() is supposed to return the device to D0 and restore its configuration registers, but previously it only did that for devices whose drivers implemented the new power management ops.
Hibernation, e.g., via "echo disk > /sys/power/state", involves freezing devices, creating a hibernation image, thawing devices, writing the image, and powering off. The fact that thawing did not return devices with legacy power management to D0 caused errors, e.g., in this path:
  pci_pm_thaw_noirq
    if (pci_has_legacy_pm_support(pci_dev))   # true for Mellanox VF driver
      return pci_legacy_resume_early(dev)     # ... legacy PM skips the rest
    pci_set_power_state(pci_dev, PCI_D0)
    pci_restore_state(pci_dev)

  pci_pm_thaw
    if (pci_has_legacy_pm_support(pci_dev))
      pci_legacy_resume
        drv->resume
          mlx4_resume
            ...
            pci_enable_msix_range
              ...
              if (dev->current_state != PCI_D0)   # <---
                return -EINVAL;
which caused these warnings:
  mlx4_core a6d1:00:02.0: INTx is not supported in multi-function mode, aborting
  PM: dpm_run_callback(): pci_pm_thaw+0x0/0xd7 returns -95
  PM: Device a6d1:00:02.0 failed to thaw: error -95
Return devices to D0 and restore config registers for all devices, not just those whose drivers support new power management.
[bhelgaas: also call pci_restore_state() before pci_legacy_resume_early(), update comment, add stable tag, commit log] Link: https://lore.kernel.org/r/KU1P153MB016637CAEAD346F0AA8E3801BFAD0@KU1P153MB01... Signed-off-by: Dexuan Cui decui@microsoft.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Reviewed-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Cc: stable@vger.kernel.org # v4.13+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/pci/pci-driver.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c index cf14795..ccb455c 100644 --- a/drivers/pci/pci-driver.c +++ b/drivers/pci/pci-driver.c @@ -1079,17 +1079,22 @@ static int pci_pm_thaw_noirq(struct device *dev) return error; }
- if (pci_has_legacy_pm_support(pci_dev)) - return pci_legacy_resume_early(dev); - /* - * pci_restore_state() requires the device to be in D0 (because of MSI - * restoration among other things), so force it into D0 in case the - * driver's "freeze" callbacks put it into a low-power state directly. + * Both the legacy ->resume_early() and the new pm->thaw_noirq() + * callbacks assume the device has been returned to D0 and its + * config state has been restored. + * + * In addition, pci_restore_state() restores MSI-X state in MMIO + * space, which requires the device to be in D0, so return it to D0 + * in case the driver's "freeze" callbacks put it into a low-power + * state. */ pci_set_power_state(pci_dev, PCI_D0); pci_restore_state(pci_dev);
+ if (pci_has_legacy_pm_support(pci_dev)) + return pci_legacy_resume_early(dev); + if (drv && drv->pm && drv->pm->thaw_noirq) error = drv->pm->thaw_noirq(dev);
From: Steffen Liebergeld steffen.liebergeld@kernkonzept.com
commit d8558ac8c93d429d65d7490b512a3a67e559d0d4 upstream.
According to documentation [0] the correct offset for the Upstream Peer Decode Configuration Register (UPDCR) is 0x1014. It was previously defined as 0x1114.
d99321b63b1f ("PCI: Enable quirks for PCIe ACS on Intel PCH root ports") intended to enforce isolation between PCI devices allowing them to be put into separate IOMMU groups. Due to the wrong register offset the intended isolation was not fully enforced. This is fixed with this patch.
Please note that I did not test this patch because I have no hardware that implements this register.
[0] https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/4th-... (page 325) Fixes: d99321b63b1f ("PCI: Enable quirks for PCIe ACS on Intel PCH root ports") Link: https://lore.kernel.org/r/7a3505df-79ba-8a28-464c-88b83eefffa6@kernkonzept.c... Signed-off-by: Steffen Liebergeld steffen.liebergeld@kernkonzept.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Reviewed-by: Andrew Murray andrew.murray@arm.com Acked-by: Ashok Raj ashok.raj@intel.com Cc: stable@vger.kernel.org # v3.15+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/pci/quirks.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index e415fe9..e6921aa 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -4578,7 +4578,7 @@ int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags) #define INTEL_BSPR_REG_BPPD (1 << 9)
/* Upstream Peer Decode Configuration Register */ -#define INTEL_UPDCR_REG 0x1114 +#define INTEL_UPDCR_REG 0x1014 /* 5:0 Peer Decode Enable bits */ #define INTEL_UPDCR_REG_MASK 0x3f
From: Jian-Hong Pan jian-hong@endlessm.com
commit e045fa29e89383c717e308609edd19d2fd29e1be upstream.
When a driver enables MSI-X, msix_program_entries() reads the MSI-X Vector Control register for each vector and saves it in desc->masked. Each register is 32 bits and bit 0 is the actual Mask bit.
When we restored these registers during resume, we previously set the Mask bit if *any* bit in desc->masked was set instead of when the Mask bit itself was set:
  pci_restore_state
    pci_restore_msi_state
      __pci_restore_msix_state
        for_each_pci_msi_entry
          msix_mask_irq(entry, entry->masked)   <-- entire u32 word
            __pci_msix_desc_mask_irq(desc, flag)
              mask_bits = desc->masked & ~PCI_MSIX_ENTRY_CTRL_MASKBIT
              if (flag)                         <-- testing entire u32, not just bit 0
                mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT
              writel(mask_bits, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL)
This means that after resume, MSI-X vectors were masked when they shouldn't be, which leads to timeouts like this:
nvme nvme0: I/O 978 QID 3 timeout, completion polled
On resume, set the Mask bit only when the saved Mask bit from suspend was set.
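To make the changed test concrete, here is a minimal userspace sketch, not the kernel code; the MSIX_CTRL_MASKBIT constant and the 0x4 sample value are stand-ins for PCI_MSIX_ENTRY_CTRL_MASKBIT and a saved desc->masked word:

#include <stdio.h>

#define MSIX_CTRL_MASKBIT 0x1u   /* bit 0 of the Vector Control register */

/* Old behaviour: any non-zero saved word re-masks the vector. */
static unsigned int restore_mask_old(unsigned int saved)
{
    unsigned int ctrl = saved & ~MSIX_CTRL_MASKBIT;

    if (saved)                       /* tests the whole 32-bit word */
        ctrl |= MSIX_CTRL_MASKBIT;
    return ctrl;
}

/* Fixed behaviour: only the saved Mask bit re-masks the vector. */
static unsigned int restore_mask_new(unsigned int saved)
{
    unsigned int ctrl = saved & ~MSIX_CTRL_MASKBIT;

    if (saved & MSIX_CTRL_MASKBIT)   /* tests bit 0 only */
        ctrl |= MSIX_CTRL_MASKBIT;
    return ctrl;
}

int main(void)
{
    /* Saved value with some other bit set but the Mask bit clear. */
    unsigned int saved = 0x4;

    printf("old: %#x (vector wrongly masked)\n", restore_mask_old(saved));
    printf("new: %#x (vector stays unmasked)\n", restore_mask_new(saved));
    return 0;
}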
This should remove the need for 19ea025e1d28 ("nvme: Add quirk for Kingston NVME SSD running FW E8FK11.T").
[bhelgaas: commit log, move fix to __pci_msix_desc_mask_irq()] Link: https://bugzilla.kernel.org/show_bug.cgi?id=204887 Link: https://lore.kernel.org/r/20191008034238.2503-1-jian-hong@endlessm.com Fixes: f2440d9acbe8 ("PCI MSI: Refactor interrupt masking code") Signed-off-by: Jian-Hong Pan jian-hong@endlessm.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/pci/msi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c index 709f5bf..873848b 100644 --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -211,7 +211,7 @@ u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag) return 0;
mask_bits &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT; - if (flag) + if (flag & PCI_MSIX_ENTRY_CTRL_MASKBIT) mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT; writel(mask_bits, pci_msix_desc_addr(desc) + PCI_MSIX_ENTRY_VECTOR_CTRL);
From: George Cherian george.cherian@marvell.com
commit f338bb9f0179cb959977b74e8331b312264d720b upstream.
Enhance the ACS quirk for Cavium Processors. Add the Root Port device IDs for the ThunderX2 and ThunderX3 series of processors.
[bhelgaas: add Fixes: and stable tag] Fixes: f2ddaf8dfd4a ("PCI: Apply Cavium ThunderX ACS quirk to more Root Ports") Link: https://lore.kernel.org/r/20191111024243.GA11408@dc5-eodlnx05.marvell.com Signed-off-by: George Cherian george.cherian@marvell.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Reviewed-by: Robert Richter rrichter@marvell.com Cc: stable@vger.kernel.org # v4.12+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/pci/quirks.c | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index e6921aa..1cab79b 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -4219,15 +4219,21 @@ static int pci_quirk_amd_sb_acs(struct pci_dev *dev, u16 acs_flags)
static bool pci_quirk_cavium_acs_match(struct pci_dev *dev) { + if (!pci_is_pcie(dev) || pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) + return false; + + switch (dev->device) { /* - * Effectively selects all downstream ports for whole ThunderX 1 - * family by 0xf800 mask (which represents 8 SoCs), while the lower - * bits of device ID are used to indicate which subdevice is used - * within the SoC. + * Effectively selects all downstream ports for whole ThunderX1 + * (which represents 8 SoCs). */ - return (pci_is_pcie(dev) && - (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) && - ((dev->device & 0xf800) == 0xa000)); + case 0xa000 ... 0xa7ff: /* ThunderX1 */ + case 0xaf84: /* ThunderX2 */ + case 0xb884: /* ThunderX3 */ + return true; + default: + return false; + } }
static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags)
From: Max Filippov jcmvbkbc@gmail.com
commit 36de10c4788efc6efe6ff9aa10d38cb7eea4c818 upstream.
Virtual and translated addresses retrieved by the xtensa TLB sanity checker must be consistent, i.e. correspond to the same state of the checked TLB entry. KASAN shadow memory is mapped dynamically using auto-refill TLB entries and thus may change TLB state between the virtual and translated address retrieval, resulting in a false TLB insanity report. Move read_xtlb_translation close to read_xtlb_virtual to make sure that the read values are consistent.
Cc: stable@vger.kernel.org Fixes: a99e07ee5e88 ("xtensa: check TLB sanity on return to userspace") Signed-off-by: Max Filippov jcmvbkbc@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/xtensa/mm/tlb.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c index 59153d0..b43f036 100644 --- a/arch/xtensa/mm/tlb.c +++ b/arch/xtensa/mm/tlb.c @@ -216,6 +216,8 @@ static int check_tlb_entry(unsigned w, unsigned e, bool dtlb) unsigned tlbidx = w | (e << PAGE_SHIFT); unsigned r0 = dtlb ? read_dtlb_virtual(tlbidx) : read_itlb_virtual(tlbidx); + unsigned r1 = dtlb ? + read_dtlb_translation(tlbidx) : read_itlb_translation(tlbidx); unsigned vpn = (r0 & PAGE_MASK) | (e << PAGE_SHIFT); unsigned pte = get_pte_for_vaddr(vpn); unsigned mm_asid = (get_rasid_register() >> 8) & ASID_MASK; @@ -231,8 +233,6 @@ static int check_tlb_entry(unsigned w, unsigned e, bool dtlb) }
if (tlb_asid == mm_asid) { - unsigned r1 = dtlb ? read_dtlb_translation(tlbidx) : - read_itlb_translation(tlbidx); if ((pte ^ r1) & PAGE_MASK) { pr_err("%cTLB: way: %u, entry: %u, mapping: %08x->%08x, PTE: %08x\n", dtlb ? 'D' : 'I', w, e, r0, r1, pte);
From: Chris Lew clew@codeaurora.org
commit 4623e8bf1de0b86e23a56cdb39a72f054e89c3bd upstream.
When wrapping around the FIFO, the remote expects the tail pointer to be reset to 0 on the edge case where the tail equals the FIFO length.
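A minimal userspace sketch of the wrap arithmetic this one-character change fixes; the FIFO_LEN value and helper names are invented for illustration and are not taken from the driver:

#include <stdio.h>

#define FIFO_LEN 1024u

/* Old wrap test: tail == FIFO_LEN slips through and is sent as-is. */
static unsigned int advance_old(unsigned int tail, unsigned int count)
{
    tail += count;
    if (tail > FIFO_LEN)
        tail -= FIFO_LEN;
    return tail;
}

/* Fixed wrap test: tail folds back to 0 exactly at the end of the FIFO. */
static unsigned int advance_new(unsigned int tail, unsigned int count)
{
    tail += count;
    if (tail >= FIFO_LEN)
        tail -= FIFO_LEN;
    return tail;
}

int main(void)
{
    /* Consume the last 16 bytes of the FIFO: 1008 + 16 == FIFO_LEN. */
    printf("old tail: %u\n", advance_old(1008, 16)); /* 1024, not what the remote expects */
    printf("new tail: %u\n", advance_new(1008, 16)); /* 0 */
    return 0;
}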
Fixes: caf989c350e8 ("rpmsg: glink: Introduce glink smem based transport") Cc: stable@vger.kernel.org Signed-off-by: Chris Lew clew@codeaurora.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/rpmsg/qcom_glink_smem.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/rpmsg/qcom_glink_smem.c b/drivers/rpmsg/qcom_glink_smem.c index 7b65443..bbd5759 100644 --- a/drivers/rpmsg/qcom_glink_smem.c +++ b/drivers/rpmsg/qcom_glink_smem.c @@ -105,7 +105,7 @@ static void glink_smem_rx_advance(struct qcom_glink_pipe *np, tail = le32_to_cpu(*pipe->tail);
tail += count; - if (tail > pipe->native.length) + if (tail >= pipe->native.length) tail -= pipe->native.length;
*pipe->tail = cpu_to_le32(tail);
From: Arun Kumar Neelakantam aneela@codeaurora.org
commit b85f6b601407347f5425c4c058d1b7871f5bf4f0 upstream.
Memory allocated for re-usable intents is not freed during channel cleanup, which causes a memory leak in the system.

Check and free all re-usable memory to avoid the memory leak.
Fixes: 933b45da5d1d ("rpmsg: glink: Add support for TX intents") Cc: stable@vger.kernel.org Acked-By: Chris Lew clew@codeaurora.org Tested-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Signed-off-by: Arun Kumar Neelakantam aneela@codeaurora.org Reported-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/rpmsg/qcom_glink_native.c | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c index e2ce4e6..335b47c 100644 --- a/drivers/rpmsg/qcom_glink_native.c +++ b/drivers/rpmsg/qcom_glink_native.c @@ -241,10 +241,19 @@ static void qcom_glink_channel_release(struct kref *ref) { struct glink_channel *channel = container_of(ref, struct glink_channel, refcount); + struct glink_core_rx_intent *tmp; unsigned long flags; + int iid;
spin_lock_irqsave(&channel->intent_lock, flags); + idr_for_each_entry(&channel->liids, tmp, iid) { + kfree(tmp->data); + kfree(tmp); + } idr_destroy(&channel->liids); + + idr_for_each_entry(&channel->riids, tmp, iid) + kfree(tmp); idr_destroy(&channel->riids); spin_unlock_irqrestore(&channel->intent_lock, flags);
From: Arun Kumar Neelakantam aneela@codeaurora.org
commit ac74ea01860170699fb3b6ea80c0476774c8e94f upstream.
An extra channel reference put when the remote sends OPEN_ACK after the timeout causes a use-after-free while handling the next remote CLOSE command.

Remove the extra reference put in the timeout case to avoid the use-after-free.
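As an illustration only (the structure and helpers below are invented, kref-style stand-ins rather than the glink_channel code), this userspace sketch shows how one unbalanced put drops the count early, so a later, otherwise-correct put would land on freed memory:

#include <stdio.h>
#include <stdlib.h>

struct channel {
    int refcount;
    char name[16];
};

static struct channel *chan_get(struct channel *c) { c->refcount++; return c; }

static void chan_put(struct channel *c)
{
    if (--c->refcount == 0) {
        printf("releasing %s\n", c->name);
        free(c);
    }
}

int main(void)
{
    struct channel *c = calloc(1, sizeof(*c));

    if (!c)
        return 1;
    snprintf(c->name, sizeof(c->name), "glink-chan");
    c->refcount = 1;          /* creator's reference */
    chan_get(c);              /* reference taken for the open request */

    chan_put(c);              /* legitimate release of the open reference */
    chan_put(c);              /* BUG: extra put, the object is freed here... */

    /* ...so a later "remote close" handler doing its own put would now
     * touch freed memory (use-after-free if the line below is enabled). */
    /* chan_put(c); */
    return 0;
}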
Fixes: b4f8e52b89f6 ("rpmsg: Introduce Qualcomm RPM glink driver") Cc: stable@vger.kernel.org Tested-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Signed-off-by: Arun Kumar Neelakantam aneela@codeaurora.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/rpmsg/qcom_glink_native.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c index 335b47c..539a6b9 100644 --- a/drivers/rpmsg/qcom_glink_native.c +++ b/drivers/rpmsg/qcom_glink_native.c @@ -1106,13 +1106,12 @@ static int qcom_glink_create_remote(struct qcom_glink *glink, close_link: /* * Send a close request to "undo" our open-ack. The close-ack will - * release the last reference. + * release qcom_glink_send_open_req() reference and the last reference + * will be relesed after receiving remote_close or transport unregister + * by calling qcom_glink_native_remove(). */ qcom_glink_send_close_req(glink, channel);
- /* Release qcom_glink_send_open_req() reference */ - kref_put(&channel->refcount, qcom_glink_channel_release); - return ret; }
From: Chris Lew clew@codeaurora.org
commit b646293e272816dd0719529dcebbd659de0722f7 upstream.
In a remote processor crash scenario, there is no guarantee the remote processor sent close requests before it went into a bad state. Remove the reference that is normally handled by the close command in this case so the channel resources can be released.
Fixes: b4f8e52b89f6 ("rpmsg: Introduce Qualcomm RPM glink driver") Cc: stable@vger.kernel.org Tested-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Signed-off-by: Chris Lew clew@codeaurora.org Reported-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/rpmsg/qcom_glink_native.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c index 539a6b9..7b1a603 100644 --- a/drivers/rpmsg/qcom_glink_native.c +++ b/drivers/rpmsg/qcom_glink_native.c @@ -1644,6 +1644,10 @@ void qcom_glink_native_remove(struct qcom_glink *glink) idr_for_each_entry(&glink->lcids, channel, cid) kref_put(&channel->refcount, qcom_glink_channel_release);
+ /* Release any defunct local channels, waiting for close-req */ + idr_for_each_entry(&glink->rcids, channel, cid) + kref_put(&channel->refcount, qcom_glink_channel_release); + idr_destroy(&glink->lcids); idr_destroy(&glink->rcids); spin_unlock_irqrestore(&glink->idr_lock, flags);
From: Chris Lew clew@codeaurora.org
commit f7e714988edaffe6ac578318e99501149b067ba0 upstream.
The device release function is set before registering with rpmsg. If rpmsg registration fails, the framework will call device_put(), which invokes the release function. The channel create logic does not need to free rpdev if rpmsg_register_device() fails and release is called.
Fixes: b4f8e52b89f6 ("rpmsg: Introduce Qualcomm RPM glink driver") Cc: stable@vger.kernel.org Tested-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Signed-off-by: Chris Lew clew@codeaurora.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/rpmsg/qcom_glink_native.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c index 7b1a603..df6e3a9 100644 --- a/drivers/rpmsg/qcom_glink_native.c +++ b/drivers/rpmsg/qcom_glink_native.c @@ -1426,15 +1426,13 @@ static int qcom_glink_rx_open(struct qcom_glink *glink, unsigned int rcid,
ret = rpmsg_register_device(rpdev); if (ret) - goto free_rpdev; + goto rcid_remove;
channel->rpdev = rpdev; }
return 0;
-free_rpdev: - kfree(rpdev); rcid_remove: spin_lock_irqsave(&glink->idr_lock, flags); idr_remove(&glink->rcids, channel->rcid);
From: Bjorn Andersson bjorn.andersson@linaro.org
commit c3dadc19b7564c732598b30d637c6f275c3b77b6 upstream.
Attempting to transmit rx_done messages after the GLINK instance is being torn down will cause use after free and memory leaks. So cancel the intent_work and free up the pending intents.
With this, there are no concurrent accessors of the channel left during qcom_glink_native_remove(), and there is therefore no need to hold the spinlock during this operation - which would prohibit the use of cancel_work_sync() in the release function. So remove it.
Fixes: 1d2ea36eead9 ("rpmsg: glink: Add rx done command") Cc: stable@vger.kernel.org Acked-by: Chris Lew clew@codeaurora.org Tested-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/rpmsg/qcom_glink_native.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c index df6e3a9..fedccdc 100644 --- a/drivers/rpmsg/qcom_glink_native.c +++ b/drivers/rpmsg/qcom_glink_native.c @@ -241,11 +241,23 @@ static void qcom_glink_channel_release(struct kref *ref) { struct glink_channel *channel = container_of(ref, struct glink_channel, refcount); + struct glink_core_rx_intent *intent; struct glink_core_rx_intent *tmp; unsigned long flags; int iid;
+ /* cancel pending rx_done work */ + cancel_work_sync(&channel->intent_work); + spin_lock_irqsave(&channel->intent_lock, flags); + /* Free all non-reuse intents pending rx_done work */ + list_for_each_entry_safe(intent, tmp, &channel->done_intents, node) { + if (!intent->reuse) { + kfree(intent->data); + kfree(intent); + } + } + idr_for_each_entry(&channel->liids, tmp, iid) { kfree(tmp->data); kfree(tmp); @@ -1628,7 +1640,6 @@ void qcom_glink_native_remove(struct qcom_glink *glink) struct glink_channel *channel; int cid; int ret; - unsigned long flags;
disable_irq(glink->irq); cancel_work_sync(&glink->rx_work); @@ -1637,7 +1648,6 @@ void qcom_glink_native_remove(struct qcom_glink *glink) if (ret) dev_warn(glink->dev, "Can't remove GLINK devices: %d\n", ret);
- spin_lock_irqsave(&glink->idr_lock, flags); /* Release any defunct local channels, waiting for close-ack */ idr_for_each_entry(&glink->lcids, channel, cid) kref_put(&channel->refcount, qcom_glink_channel_release); @@ -1648,7 +1658,6 @@ void qcom_glink_native_remove(struct qcom_glink *glink)
idr_destroy(&glink->lcids); idr_destroy(&glink->rcids); - spin_unlock_irqrestore(&glink->idr_lock, flags); mbox_free_channel(glink->mbox_chan); } EXPORT_SYMBOL_GPL(qcom_glink_native_remove);
From: Bjorn Andersson bjorn.andersson@linaro.org
commit 278bcb7300f61785dba63840bd2a8cf79f14554c upstream.
By just cancelling the deferred rx worker during GLINK instance teardown any pending deferred commands are leaked, so free them.
Fixes: b4f8e52b89f6 ("rpmsg: Introduce Qualcomm RPM glink driver") Cc: stable@vger.kernel.org Acked-by: Chris Lew clew@codeaurora.org Tested-by: Srinivas Kandagatla srinivas.kandagatla@linaro.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/rpmsg/qcom_glink_native.c | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c index fedccdc..25c394a 100644 --- a/drivers/rpmsg/qcom_glink_native.c +++ b/drivers/rpmsg/qcom_glink_native.c @@ -1565,6 +1565,18 @@ static void qcom_glink_work(struct work_struct *work) } }
+static void qcom_glink_cancel_rx_work(struct qcom_glink *glink) +{ + struct glink_defer_cmd *dcmd; + struct glink_defer_cmd *tmp; + + /* cancel any pending deferred rx_work */ + cancel_work_sync(&glink->rx_work); + + list_for_each_entry_safe(dcmd, tmp, &glink->rx_queue, node) + kfree(dcmd); +} + struct qcom_glink *qcom_glink_native_probe(struct device *dev, unsigned long features, struct qcom_glink_pipe *rx, @@ -1642,7 +1654,7 @@ void qcom_glink_native_remove(struct qcom_glink *glink) int ret;
disable_irq(glink->irq); - cancel_work_sync(&glink->rx_work); + qcom_glink_cancel_rx_work(glink);
ret = device_for_each_child(glink->dev, NULL, qcom_glink_remove_device); if (ret)
From: Long Li longli@microsoft.com
commit 4357d45f50e58672e1d17648d792f27df01dfccd upstream.
During reconnect, the transport may have already been destroyed and be in the process of being reconnected. In this case, return -EAGAIN so this I/O is not failed and can be retried.
Signed-off-by: Long Li longli@microsoft.com Cc: stable@vger.kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- fs/cifs/transport.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c index fe77f41..0c4df56 100644 --- a/fs/cifs/transport.c +++ b/fs/cifs/transport.c @@ -286,8 +286,11 @@ void cifs_mid_q_entry_release(struct mid_q_entry *midEntry) int val = 1; __be32 rfc1002_marker;
- if (cifs_rdma_enabled(server) && server->smbd_conn) { - rc = smbd_send(server, num_rqst, rqst); + if (cifs_rdma_enabled(server)) { + /* return -EAGAIN when connecting or reconnecting */ + rc = -EAGAIN; + if (server->smbd_conn) + rc = smbd_send(server, num_rqst, rqst); goto smbd_done; } if (ssocket == NULL)
From: Long Li longli@microsoft.com
commit d63cdbae60ac6fbb2864bd3d8df7404f12b7407d upstream.
Log these activities to help production support.
Signed-off-by: Long Li longli@microsoft.com Cc: stable@vger.kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- fs/cifs/smbdirect.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index 1959931..e6ec5f6 100644 --- a/fs/cifs/smbdirect.c +++ b/fs/cifs/smbdirect.c @@ -1491,6 +1491,7 @@ void smbd_destroy(struct smbd_connection *info) info->transport_status == SMBD_DESTROYED);
destroy_workqueue(info->workqueue); + log_rdma_event(INFO, "rdma session destroyed\n"); kfree(info); }
@@ -1528,8 +1529,9 @@ int smbd_reconnect(struct TCP_Server_Info *server) log_rdma_event(INFO, "creating rdma session\n"); server->smbd_conn = smbd_get_connection( server, (struct sockaddr *) &server->dstaddr); - log_rdma_event(INFO, "created rdma session info=%p\n", - server->smbd_conn); + + if (server->smbd_conn) + cifs_dbg(VFS, "RDMA transport re-established\n");
return server->smbd_conn ? 0 : -ENOENT; }
From: Long Li longli@microsoft.com
commit 37941ea17d3f8eb2f5ac2f59346fab9e8439271a upstream.
While it's not friendly to fail user processes that issue more iovs than we support, at least we should return the correct error code so the user process gets a chance to retry with a smaller number of iovs.
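A hedged userspace sketch of why the error code matters to the caller; MAX_SGE and the retry loop are illustrative stand-ins, not cifs code:

#include <errno.h>
#include <stdio.h>

#define MAX_SGE 16   /* stand-in for SMBDIRECT_MAX_SGE */

/* Reject an oversized request with -EINVAL so the caller knows the request
 * itself is the problem (and can split it), rather than -ENOMEM, which
 * suggests a transient allocation failure. */
static int post_send(int n_vec)
{
    if (n_vec > MAX_SGE)
        return -EINVAL;
    return 0;
}

int main(void)
{
    int n_vec = 40;
    int rc = post_send(n_vec);

    while (rc == -EINVAL && n_vec > 1) {
        n_vec /= 2;              /* caller retries with fewer iovs */
        rc = post_send(n_vec);
    }
    printf("sent with n_vec=%d, rc=%d\n", n_vec, rc);
    return rc ? 1 : 0;
}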
Signed-off-by: Long Li longli@microsoft.com Cc: stable@vger.kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- fs/cifs/smbdirect.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index e6ec5f6..784628e 100644 --- a/fs/cifs/smbdirect.c +++ b/fs/cifs/smbdirect.c @@ -1164,7 +1164,7 @@ static int smbd_post_send_data(
if (n_vec > SMBDIRECT_MAX_SGE) { cifs_dbg(VFS, "Can't fit data to SGL, n_vec=%d\n", n_vec); - return -ENOMEM; + return -EINVAL; }
sg_init_table(sgl, n_vec);
From: Long Li longli@microsoft.com
commit 14cc639c17ab0b6671526a7459087352507609e4 upstream.
On reconnect, the transport data structure is NULL and its information is not available.
Signed-off-by: Long Li longli@microsoft.com Cc: stable@vger.kernel.org Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- fs/cifs/cifs_debug.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c index 0657679..a85aeff 100644 --- a/fs/cifs/cifs_debug.c +++ b/fs/cifs/cifs_debug.c @@ -210,6 +210,11 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v) if (!server->rdma) goto skip_rdma;
+ if (!server->smbd_conn) { + seq_printf(m, "\nSMBDirect transport not available"); + goto skip_rdma; + } + seq_printf(m, "\nSMBDirect (in hex) protocol version: %x " "transport status: %x", server->smbd_conn->protocol,
From: Pavel Shilovsky pshilov@microsoft.com
commit 44805b0e62f15e90d233485420e1847133716bdc upstream.
Currently the client translates O_SYNC and O_DIRECT flags into corresponding SMB create options when opening a file. The problem is that on reconnect, when the file is being re-opened, the client doesn't set those flags, which causes the server to reject re-open requests because the create options don't match. The latter means that any subsequent system call against that open file fails until the share is re-mounted.

Fix this by properly setting SMB create options when re-opening files after reconnects.
Fixes: 1013e760d10e6: ("SMB3: Don't ignore O_SYNC/O_DSYNC and O_DIRECT flags") Cc: Stable stable@vger.kernel.org Signed-off-by: Pavel Shilovsky pshilov@microsoft.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- fs/cifs/file.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c index 79b6ef1..e92513e 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -726,6 +726,13 @@ int cifs_open(struct inode *inode, struct file *file) if (backup_cred(cifs_sb)) create_options |= CREATE_OPEN_BACKUP_INTENT;
+ /* O_SYNC also has bit for O_DSYNC so following check picks up either */ + if (cfile->f_flags & O_SYNC) + create_options |= CREATE_WRITE_THROUGH; + + if (cfile->f_flags & O_DIRECT) + create_options |= CREATE_NO_BUFFER; + if (server->ops->get_lease_key) server->ops->get_lease_key(inode, &cfile->fid);
From: Pavel Shilovsky pshilov@microsoft.com
commit 9150c3adbf24d77cfba37f03639d4a908ca4ac25 upstream.
If the Close command is interrupted before sending a request to the server, the client ends up leaking an open file handle. This wastes server resources and can potentially block applications that try to remove the file or any directory containing it.
Fix this by putting the close command into a worker queue, so another thread retries it later.
Cc: Stable stable@vger.kernel.org Tested-by: Frank Sorenson sorenson@redhat.com Reviewed-by: Ronnie Sahlberg lsahlber@redhat.com Signed-off-by: Pavel Shilovsky pshilov@microsoft.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- fs/cifs/smb2misc.c | 59 ++++++++++++++++++++++++++++++++++++++++------------- fs/cifs/smb2pdu.c | 16 ++++++++++++++- fs/cifs/smb2proto.h | 3 +++ 3 files changed, 63 insertions(+), 15 deletions(-)
diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c index 449d1584..766974f 100644 --- a/fs/cifs/smb2misc.c +++ b/fs/cifs/smb2misc.c @@ -743,36 +743,67 @@ struct smb2_lease_break_work { kfree(cancelled); }
+/* Caller should already has an extra reference to @tcon */ +static int +__smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid, + __u64 volatile_fid) +{ + struct close_cancelled_open *cancelled; + + cancelled = kzalloc(sizeof(*cancelled), GFP_KERNEL); + if (!cancelled) + return -ENOMEM; + + cancelled->fid.persistent_fid = persistent_fid; + cancelled->fid.volatile_fid = volatile_fid; + cancelled->tcon = tcon; + INIT_WORK(&cancelled->work, smb2_cancelled_close_fid); + WARN_ON(queue_work(cifsiod_wq, &cancelled->work) == false); + + return 0; +} + +int +smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid, + __u64 volatile_fid) +{ + int rc; + + cifs_dbg(FYI, "%s: tc_count=%d\n", __func__, tcon->tc_count); + spin_lock(&cifs_tcp_ses_lock); + tcon->tc_count++; + spin_unlock(&cifs_tcp_ses_lock); + + rc = __smb2_handle_cancelled_close(tcon, persistent_fid, volatile_fid); + if (rc) + cifs_put_tcon(tcon); + + return rc; +} + int smb2_handle_cancelled_mid(char *buffer, struct TCP_Server_Info *server) { struct smb2_sync_hdr *sync_hdr = (struct smb2_sync_hdr *)buffer; struct smb2_create_rsp *rsp = (struct smb2_create_rsp *)buffer; struct cifs_tcon *tcon; - struct close_cancelled_open *cancelled; + int rc;
if (sync_hdr->Command != SMB2_CREATE || sync_hdr->Status != STATUS_SUCCESS) return 0;
- cancelled = kzalloc(sizeof(*cancelled), GFP_KERNEL); - if (!cancelled) - return -ENOMEM; - tcon = smb2_find_smb_tcon(server, sync_hdr->SessionId, sync_hdr->TreeId); - if (!tcon) { - kfree(cancelled); + if (!tcon) return -ENOENT; - }
- cancelled->fid.persistent_fid = rsp->PersistentFileId; - cancelled->fid.volatile_fid = rsp->VolatileFileId; - cancelled->tcon = tcon; - INIT_WORK(&cancelled->work, smb2_cancelled_close_fid); - queue_work(cifsiod_wq, &cancelled->work); + rc = __smb2_handle_cancelled_close(tcon, rsp->PersistentFileId, + rsp->VolatileFileId); + if (rc) + cifs_put_tcon(tcon);
- return 0; + return rc; }
/** diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c index 64cca51..30417d4 100644 --- a/fs/cifs/smb2pdu.c +++ b/fs/cifs/smb2pdu.c @@ -2629,7 +2629,21 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode, SMB2_close(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid, u64 volatile_fid) { - return SMB2_close_flags(xid, tcon, persistent_fid, volatile_fid, 0); + int rc; + int tmp_rc; + + rc = SMB2_close_flags(xid, tcon, persistent_fid, volatile_fid, 0); + + /* retry close in a worker thread if this one is interrupted */ + if (rc == -EINTR) { + tmp_rc = smb2_handle_cancelled_close(tcon, persistent_fid, + volatile_fid); + if (tmp_rc) + cifs_dbg(VFS, "handle cancelled close fid 0x%llx returned error %d\n", + persistent_fid, tmp_rc); + } + + return rc; }
int diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h index b407657..c66d7da 100644 --- a/fs/cifs/smb2proto.h +++ b/fs/cifs/smb2proto.h @@ -204,6 +204,9 @@ extern int SMB2_set_compression(const unsigned int xid, struct cifs_tcon *tcon, extern int SMB2_oplock_break(const unsigned int xid, struct cifs_tcon *tcon, const u64 persistent_fid, const u64 volatile_fid, const __u8 oplock_level); +extern int smb2_handle_cancelled_close(struct cifs_tcon *tcon, + __u64 persistent_fid, + __u64 volatile_fid); extern int smb2_handle_cancelled_mid(char *buffer, struct TCP_Server_Info *server); void smb2_cancelled_close_fid(struct work_struct *work);
From: Lihua Yao ylhuajnu@outlook.com
commit d60d0cff4ab01255b25375425745c3cff69558ad upstream.
fin_pll is the parent of clock-controller@7e00f000; specify the dependency to ensure the proper initialization order of clock providers.
without this patch:
[ 0.000000] S3C6410 clocks: apll = 0, mpll = 0
[ 0.000000]  epll = 0, arm_clk = 0

with this patch:
[ 0.000000] S3C6410 clocks: apll = 532000000, mpll = 532000000
[ 0.000000]  epll = 24000000, arm_clk = 532000000
Cc: stable@vger.kernel.org Fixes: 3f6d439f2022 ("clk: reverse default clk provider initialization order in of_clk_init()") Signed-off-by: Lihua Yao ylhuajnu@outlook.com Reviewed-by: Sylwester Nawrocki s.nawrocki@samsung.com Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm/boot/dts/s3c6410-mini6410.dts | 4 ++++ arch/arm/boot/dts/s3c6410-smdk6410.dts | 4 ++++ 2 files changed, 8 insertions(+)
diff --git a/arch/arm/boot/dts/s3c6410-mini6410.dts b/arch/arm/boot/dts/s3c6410-mini6410.dts index 0e159c8..1aeac33 100644 --- a/arch/arm/boot/dts/s3c6410-mini6410.dts +++ b/arch/arm/boot/dts/s3c6410-mini6410.dts @@ -165,6 +165,10 @@ }; };
+&clocks { + clocks = <&fin_pll>; +}; + &sdhci0 { pinctrl-names = "default"; pinctrl-0 = <&sd0_clk>, <&sd0_cmd>, <&sd0_cd>, <&sd0_bus4>; diff --git a/arch/arm/boot/dts/s3c6410-smdk6410.dts b/arch/arm/boot/dts/s3c6410-smdk6410.dts index a9a5689..3bf6c45 100644 --- a/arch/arm/boot/dts/s3c6410-smdk6410.dts +++ b/arch/arm/boot/dts/s3c6410-smdk6410.dts @@ -69,6 +69,10 @@ }; };
+&clocks { + clocks = <&fin_pll>; +}; + &sdhci0 { pinctrl-names = "default"; pinctrl-0 = <&sd0_clk>, <&sd0_cmd>, <&sd0_cd>, <&sd0_bus4>;
From: Dmitry Osipenko digetx@gmail.com
commit d70f7d31a9e2088e8a507194354d41ea10062994 upstream.
There is an unfortunate typo in the code that results in writing to FLOW_CTLR_HALT instead of FLOW_CTLR_CSR.
Cc: stable@vger.kernel.org Acked-by: Peter De Schrijver pdeschrijver@nvidia.com Signed-off-by: Dmitry Osipenko digetx@gmail.com Signed-off-by: Thierry Reding treding@nvidia.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm/mach-tegra/reset-handler.S | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm/mach-tegra/reset-handler.S b/arch/arm/mach-tegra/reset-handler.S index 805f306..e31f167 100644 --- a/arch/arm/mach-tegra/reset-handler.S +++ b/arch/arm/mach-tegra/reset-handler.S @@ -56,16 +56,16 @@ ENTRY(tegra_resume) cmp r6, #TEGRA20 beq 1f @ Yes /* Clear the flow controller flags for this CPU. */ - cpu_to_csr_reg r1, r0 + cpu_to_csr_reg r3, r0 mov32 r2, TEGRA_FLOW_CTRL_BASE - ldr r1, [r2, r1] + ldr r1, [r2, r3] /* Clear event & intr flag */ orr r1, r1, \ #FLOW_CTRL_CSR_INTR_FLAG | FLOW_CTRL_CSR_EVENT_FLAG movw r0, #0x3FFD @ enable, cluster_switch, immed, bitmaps @ & ext flags for CPU power mgnt bic r1, r1, r0 - str r1, [r2] + str r1, [r2, r3] 1:
mov32 r9, 0xc09
From: Jiang Yi giangyi@amazon.com
commit d567fb8819162099035e546b11a736e29c2af0ea upstream.
Since irq_bypass_register_producer() is called after request_irq(), we should do tear-down in reverse order: irq_bypass_unregister_producer() then free_irq().
Specifically, free_irq() may release resources required by the irqbypass del_producer() callback. Notably, Marc Zyngier provided an example on arm64 with GICv4 that he indicates has the potential to wedge the hardware:
 free_irq(irq)
   __free_irq(irq)
     irq_domain_deactivate_irq(irq)
       its_irq_domain_deactivate()
         [unmap the VLPI from the ITS]

 kvm_arch_irq_bypass_del_producer(cons, prod)
   kvm_vgic_v4_unset_forwarding(kvm, irq, ...)
     its_unmap_vlpi(irq)
       [Unmap the VLPI from the ITS (again), remap the original LPI]
Signed-off-by: Jiang Yi giangyi@amazon.com Cc: stable@vger.kernel.org # v4.4+ Fixes: 6d7425f109d26 ("vfio: Register/unregister irq_bypass_producer") Link: https://lore.kernel.org/kvm/20191127164910.15888-1-giangyi@amazon.com Reviewed-by: Marc Zyngier maz@kernel.org Reviewed-by: Eric Auger eric.auger@redhat.com [aw: commit log] Signed-off-by: Alex Williamson alex.williamson@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/vfio/pci/vfio_pci_intrs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 1c46045..94594dc 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -297,8 +297,8 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev, irq = pci_irq_vector(pdev, vector);
if (vdev->ctx[vector].trigger) { - free_irq(irq, vdev->ctx[vector].trigger); irq_bypass_unregister_producer(&vdev->ctx[vector].producer); + free_irq(irq, vdev->ctx[vector].trigger); kfree(vdev->ctx[vector].name); eventfd_ctx_put(vdev->ctx[vector].trigger); vdev->ctx[vector].trigger = NULL;
From: Navid Emamdoost navid.emamdoost@gmail.com
commit 6645d42d79d33e8a9fe262660a75d5f4556bbea9 upstream.
In the implementation of sync_file_merge(), the allocated sync_file is leaked if the number of fences overflows. Release the sync_file by jumping to the err label instead of returning NULL directly.
Fixes: a02b9dc90d84 ("dma-buf/sync_file: refactor fence storage in struct sync_file") Signed-off-by: Navid Emamdoost navid.emamdoost@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter daniel.vetter@ffwll.ch Link: https://patchwork.freedesktop.org/patch/msgid/20191122220957.30427-1-navid.e... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/dma-buf/sync_file.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c index 35dd064..91d6209 100644 --- a/drivers/dma-buf/sync_file.c +++ b/drivers/dma-buf/sync_file.c @@ -230,7 +230,7 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a, a_fences = get_fences(a, &a_num_fences); b_fences = get_fences(b, &b_num_fences); if (a_num_fences > INT_MAX - b_num_fences) - return NULL; + goto err;
num_fences = a_num_fences + b_num_fences;
From: Martin Blumenstingl martin.blumenstingl@googlemail.com
commit 43cb86799ff03e9819c07f37f72f80f8246ad7ed upstream.
With commit 222ec1618c3ace ("drm: Add aspect ratio parsing in DRM layer") the drm core started honoring the picture_aspect_ratio field when comparing two drm_display_modes. Prior to that it was ignored. When the CVBS encoder driver was initially submitted there was no aspect ratio check.
Switch from drm_mode_equal() to drm_mode_match() without DRM_MODE_MATCH_ASPECT_RATIO to fix "kmscube" and X.org output using the CVBS connector. When (for example) kmscube sets the output mode when using the CVBS connector it passes HDMI_PICTURE_ASPECT_NONE, making the drm_mode_equal() check fail as it includes the aspect ratio.
Prior to this patch kmscube reported: failed to set mode: Invalid argument
The CVBS mode checking in the sun4i (drivers/gpu/drm/sun4i/sun4i_tv.c sun4i_tv_mode_to_drm_mode) and ZTE (drivers/gpu/drm/zte/zx_tvenc.c tvenc_mode_{pal,ntsc}) drivers doesn't set the "picture_aspect_ratio" at all. The Meson VPU driver does not rely on the aspect ratio for the CVBS output, so we can safely decouple it from the hdmi_picture_aspect setting.
Cc: stable@vger.kernel.org Fixes: 222ec1618c3ace ("drm: Add aspect ratio parsing in DRM layer") Fixes: bbbe775ec5b5da ("drm: Add support for Amlogic Meson Graphic Controller") Signed-off-by: Martin Blumenstingl martin.blumenstingl@googlemail.com Acked-by: Neil Armstrong narmstrong@baylibre.com [narmstrong: squashed with drm: meson: venc: cvbs: deduplicate the meson_cvbs_mode lookup code] Signed-off-by: Neil Armstrong narmstrong@baylibre.com Link: https://patchwork.freedesktop.org/patch/msgid/20191208171832.1064772-3-marti... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/gpu/drm/meson/meson_venc_cvbs.c | 48 ++++++++++++++++++--------------- 1 file changed, 27 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/meson/meson_venc_cvbs.c b/drivers/gpu/drm/meson/meson_venc_cvbs.c index f7945ba..e176014 100644 --- a/drivers/gpu/drm/meson/meson_venc_cvbs.c +++ b/drivers/gpu/drm/meson/meson_venc_cvbs.c @@ -75,6 +75,25 @@ struct meson_cvbs_mode meson_cvbs_modes[MESON_CVBS_MODES_COUNT] = { }, };
+static const struct meson_cvbs_mode * +meson_cvbs_get_mode(const struct drm_display_mode *req_mode) +{ + int i; + + for (i = 0; i < MESON_CVBS_MODES_COUNT; ++i) { + struct meson_cvbs_mode *meson_mode = &meson_cvbs_modes[i]; + + if (drm_mode_match(req_mode, &meson_mode->mode, + DRM_MODE_MATCH_TIMINGS | + DRM_MODE_MATCH_CLOCK | + DRM_MODE_MATCH_FLAGS | + DRM_MODE_MATCH_3D_FLAGS)) + return meson_mode; + } + + return NULL; +} + /* Connector */
static void meson_cvbs_connector_destroy(struct drm_connector *connector) @@ -147,14 +166,8 @@ static int meson_venc_cvbs_encoder_atomic_check(struct drm_encoder *encoder, struct drm_crtc_state *crtc_state, struct drm_connector_state *conn_state) { - int i; - - for (i = 0; i < MESON_CVBS_MODES_COUNT; ++i) { - struct meson_cvbs_mode *meson_mode = &meson_cvbs_modes[i]; - - if (drm_mode_equal(&crtc_state->mode, &meson_mode->mode)) - return 0; - } + if (meson_cvbs_get_mode(&crtc_state->mode)) + return 0;
return -EINVAL; } @@ -192,24 +205,17 @@ static void meson_venc_cvbs_encoder_mode_set(struct drm_encoder *encoder, struct drm_display_mode *mode, struct drm_display_mode *adjusted_mode) { + const struct meson_cvbs_mode *meson_mode = meson_cvbs_get_mode(mode); struct meson_venc_cvbs *meson_venc_cvbs = encoder_to_meson_venc_cvbs(encoder); struct meson_drm *priv = meson_venc_cvbs->priv; - int i;
- for (i = 0; i < MESON_CVBS_MODES_COUNT; ++i) { - struct meson_cvbs_mode *meson_mode = &meson_cvbs_modes[i]; + if (meson_mode) { + meson_venci_cvbs_mode_set(priv, meson_mode->enci);
- if (drm_mode_equal(mode, &meson_mode->mode)) { - meson_venci_cvbs_mode_set(priv, - meson_mode->enci); - - /* Setup 27MHz vclk2 for ENCI and VDAC */ - meson_vclk_setup(priv, MESON_VCLK_TARGET_CVBS, - MESON_VCLK_CVBS, MESON_VCLK_CVBS, - MESON_VCLK_CVBS, true); - break; - } + /* Setup 27MHz vclk2 for ENCI and VDAC */ + meson_vclk_setup(priv, MESON_VCLK_TARGET_CVBS, MESON_VCLK_CVBS, + MESON_VCLK_CVBS, MESON_VCLK_CVBS, true); } }
From: Mike Snitzer snitzer@redhat.com
commit dbaf971c9cdf10843071a60dcafc1aaab3162354 upstream.
Remove the branching for the edge case where no SCSI device handler exists. The __map_bio_fast() method was far too limited: it only selected a new pathgroup or path if there was a path failure. Fix this by eliminating it in favor of __map_bio(). __map_bio()'s extra SCSI device handler specific MPATHF_PG_INIT_REQUIRED test is not in the fast path anyway.
This change restores full path selector functionality for bio-based configurations that don't have a SCSI device handler. But it should be noted that the path selectors do have an impact on performance for certain networks that are extremely fast (and don't require frequent switching).
Fixes: 8d47e65948dd ("dm mpath: remove unnecessary NVMe branching in favor of scsi_dh checks") Cc: stable@vger.kernel.org Reported-by: Drew Hastings dhastings@crucialwebhost.com Suggested-by: Martin Wilck mwilck@suse.de Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/md/dm-mpath.c | 37 +------------------------------------ 1 file changed, 1 insertion(+), 36 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c index 481e54d..def4f6e 100644 --- a/drivers/md/dm-mpath.c +++ b/drivers/md/dm-mpath.c @@ -609,45 +609,10 @@ static struct pgpath *__map_bio(struct multipath *m, struct bio *bio) return pgpath; }
-static struct pgpath *__map_bio_fast(struct multipath *m, struct bio *bio) -{ - struct pgpath *pgpath; - unsigned long flags; - - /* Do we need to select a new pgpath? */ - /* - * FIXME: currently only switching path if no path (due to failure, etc) - * - which negates the point of using a path selector - */ - pgpath = READ_ONCE(m->current_pgpath); - if (!pgpath) - pgpath = choose_pgpath(m, bio->bi_iter.bi_size); - - if (!pgpath) { - if (test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) { - /* Queue for the daemon to resubmit */ - spin_lock_irqsave(&m->lock, flags); - bio_list_add(&m->queued_bios, bio); - spin_unlock_irqrestore(&m->lock, flags); - queue_work(kmultipathd, &m->process_queued_bios); - - return ERR_PTR(-EAGAIN); - } - return NULL; - } - - return pgpath; -} - static int __multipath_map_bio(struct multipath *m, struct bio *bio, struct dm_mpath_io *mpio) { - struct pgpath *pgpath; - - if (!m->hw_handler_name) - pgpath = __map_bio_fast(m, bio); - else - pgpath = __map_bio(m, bio); + struct pgpath *pgpath = __map_bio(m, bio);
if (IS_ERR(pgpath)) return DM_MAPIO_SUBMITTED;
From: Bart Van Assche bvanassche@acm.org
commit 5480e299b5ae57956af01d4839c9fc88a465eeab upstream.
Some time ago the block layer was modified such that timeout handlers are called from thread context instead of interrupt context. Make it safe to run the iSCSI timeout handler in thread context. This patch fixes the following lockdep complaint:
================================
WARNING: inconsistent lock state
5.5.1-dbg+ #11 Not tainted
--------------------------------
inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
kworker/7:1H/206 [HC0[0]:SC0[0]:HE1:SE1] takes:
ffff88802d9827e8 (&(&session->frwd_lock)->rlock){+.?.}, at: iscsi_eh_cmd_timed_out+0xa6/0x6d0 [libiscsi]
{IN-SOFTIRQ-W} state was registered at:
 lock_acquire+0x106/0x240
 _raw_spin_lock+0x38/0x50
 iscsi_check_transport_timeouts+0x3e/0x210 [libiscsi]
 call_timer_fn+0x132/0x470
 __run_timers.part.0+0x39f/0x5b0
 run_timer_softirq+0x63/0xc0
 __do_softirq+0x12d/0x5fd
 irq_exit+0xb3/0x110
 smp_apic_timer_interrupt+0x131/0x3d0
 apic_timer_interrupt+0xf/0x20
 default_idle+0x31/0x230
 arch_cpu_idle+0x13/0x20
 default_idle_call+0x53/0x60
 do_idle+0x38a/0x3f0
 cpu_startup_entry+0x24/0x30
 start_secondary+0x222/0x290
 secondary_startup_64+0xa4/0xb0
irq event stamp: 1383705
hardirqs last enabled at (1383705): [<ffffffff81aace5c>] _raw_spin_unlock_irq+0x2c/0x50
hardirqs last disabled at (1383704): [<ffffffff81aacb98>] _raw_spin_lock_irq+0x18/0x50
softirqs last enabled at (1383690): [<ffffffffa0e2efea>] iscsi_queuecommand+0x76a/0xa20 [libiscsi]
softirqs last disabled at (1383682): [<ffffffffa0e2e998>] iscsi_queuecommand+0x118/0xa20 [libiscsi]

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&session->frwd_lock)->rlock);
  <Interrupt>
    lock(&(&session->frwd_lock)->rlock);

 *** DEADLOCK ***

2 locks held by kworker/7:1H/206:
 #0: ffff8880d57bf928 ((wq_completion)kblockd){+.+.}, at: process_one_work+0x472/0xab0
 #1: ffff88802b9c7de8 ((work_completion)(&q->timeout_work)){+.+.}, at: process_one_work+0x476/0xab0

stack backtrace:
CPU: 7 PID: 206 Comm: kworker/7:1H Not tainted 5.5.1-dbg+ #11
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
Workqueue: kblockd blk_mq_timeout_work
Call Trace:
 dump_stack+0xa5/0xe6
 print_usage_bug.cold+0x232/0x23b
 mark_lock+0x8dc/0xa70
 __lock_acquire+0xcea/0x2af0
 lock_acquire+0x106/0x240
 _raw_spin_lock+0x38/0x50
 iscsi_eh_cmd_timed_out+0xa6/0x6d0 [libiscsi]
 scsi_times_out+0xf4/0x440 [scsi_mod]
 scsi_timeout+0x1d/0x20 [scsi_mod]
 blk_mq_check_expired+0x365/0x3a0
 bt_iter+0xd6/0xf0
 blk_mq_queue_tag_busy_iter+0x3de/0x650
 blk_mq_timeout_work+0x1af/0x380
 process_one_work+0x56d/0xab0
 worker_thread+0x7a/0x5d0
 kthread+0x1bc/0x210
 ret_from_fork+0x24/0x30
Fixes: 287922eb0b18 ("block: defer timeouts to a workqueue") Cc: Christoph Hellwig hch@lst.de Cc: Keith Busch keith.busch@intel.com Cc: Lee Duncan lduncan@suse.com Cc: Chris Leech cleech@redhat.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20191209173457.187370-1-bvanassche@acm.org Signed-off-by: Bart Van Assche bvanassche@acm.org Reviewed-by: Lee Duncan lduncan@suse.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/scsi/libiscsi.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c index 896f5e9..214fe77 100644 --- a/drivers/scsi/libiscsi.c +++ b/drivers/scsi/libiscsi.c @@ -2000,7 +2000,7 @@ enum blk_eh_timer_return iscsi_eh_cmd_timed_out(struct scsi_cmnd *sc)
ISCSI_DBG_EH(session, "scsi cmd %p timedout\n", sc);
- spin_lock(&session->frwd_lock); + spin_lock_bh(&session->frwd_lock); task = (struct iscsi_task *)sc->SCp.ptr; if (!task) { /* @@ -2127,7 +2127,7 @@ enum blk_eh_timer_return iscsi_eh_cmd_timed_out(struct scsi_cmnd *sc) done: if (task) task->last_timeout = jiffies; - spin_unlock(&session->frwd_lock); + spin_unlock_bh(&session->frwd_lock); ISCSI_DBG_EH(session, "return %s\n", rc == BLK_EH_RESET_TIMER ? "timer reset" : "shutdown or nh"); return rc;
From: Alex Deucher alexander.deucher@amd.com
commit 008037d4d972c9c47b273e40e52ae34f9d9e33e7 upstream.
Shift and mask were reversed. Noticed by chance.
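For illustration, a small standalone C example of why the order matters when the mask is defined in place (already shifted to the field position), which is what the corrected expression implies for the RADEON_TXFORMAT_*_MASK values; the WIDTH_SHIFT/WIDTH_MASK numbers and register value here are made up:

#include <stdio.h>

/* Hypothetical field occupying bits 11:8, mask defined in place. */
#define WIDTH_SHIFT 8
#define WIDTH_MASK  (0xfu << WIDTH_SHIFT)

int main(void)
{
    unsigned int reg = 0x00000a00;   /* field value 0xa in bits 11:8 */

    /* Reversed order: shifting first leaves the in-place mask aimed at
     * the wrong bits, so the extracted value is bogus. */
    unsigned int wrong = (reg >> WIDTH_SHIFT) & WIDTH_MASK;

    /* Correct order for an in-place mask: mask first, then shift down. */
    unsigned int right = (reg & WIDTH_MASK) >> WIDTH_SHIFT;

    printf("wrong: %#x, right: %#x\n", wrong, right);   /* 0 vs 0xa */
    return 0;
}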
Tested-by: Meelis Roos mroos@linux.ee Reviewed-by: Michel Dänzer mdaenzer@redhat.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/gpu/drm/radeon/r100.c | 4 ++-- drivers/gpu/drm/radeon/r200.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c index 7d39ed6..b24401f 100644 --- a/drivers/gpu/drm/radeon/r100.c +++ b/drivers/gpu/drm/radeon/r100.c @@ -1820,8 +1820,8 @@ static int r100_packet0_check(struct radeon_cs_parser *p, track->textures[i].use_pitch = 1; } else { track->textures[i].use_pitch = 0; - track->textures[i].width = 1 << ((idx_value >> RADEON_TXFORMAT_WIDTH_SHIFT) & RADEON_TXFORMAT_WIDTH_MASK); - track->textures[i].height = 1 << ((idx_value >> RADEON_TXFORMAT_HEIGHT_SHIFT) & RADEON_TXFORMAT_HEIGHT_MASK); + track->textures[i].width = 1 << ((idx_value & RADEON_TXFORMAT_WIDTH_MASK) >> RADEON_TXFORMAT_WIDTH_SHIFT); + track->textures[i].height = 1 << ((idx_value & RADEON_TXFORMAT_HEIGHT_MASK) >> RADEON_TXFORMAT_HEIGHT_SHIFT); } if (idx_value & RADEON_TXFORMAT_CUBIC_MAP_ENABLE) track->textures[i].tex_coord_type = 2; diff --git a/drivers/gpu/drm/radeon/r200.c b/drivers/gpu/drm/radeon/r200.c index c22321c..c2b506c 100644 --- a/drivers/gpu/drm/radeon/r200.c +++ b/drivers/gpu/drm/radeon/r200.c @@ -476,8 +476,8 @@ int r200_packet0_check(struct radeon_cs_parser *p, track->textures[i].use_pitch = 1; } else { track->textures[i].use_pitch = 0; - track->textures[i].width = 1 << ((idx_value >> RADEON_TXFORMAT_WIDTH_SHIFT) & RADEON_TXFORMAT_WIDTH_MASK); - track->textures[i].height = 1 << ((idx_value >> RADEON_TXFORMAT_HEIGHT_SHIFT) & RADEON_TXFORMAT_HEIGHT_MASK); + track->textures[i].width = 1 << ((idx_value & RADEON_TXFORMAT_WIDTH_MASK) >> RADEON_TXFORMAT_WIDTH_SHIFT); + track->textures[i].height = 1 << ((idx_value & RADEON_TXFORMAT_HEIGHT_MASK) >> RADEON_TXFORMAT_HEIGHT_SHIFT); } if (idx_value & R200_TXFORMAT_LOOKUP_DISABLE) track->textures[i].lookup_disable = true;
From: Mathias Nyman mathias.nyman@linux.intel.com
commit 057d476fff778f1d3b9f861fdb5437ea1a3cfc99 upstream.
A race in xhci USB3 remote wake handling may force a device back to suspend after it initiated resume signaling, causing a missed resume event or a warm reset of the device.

When a USB3 link completes resume signaling and goes to the enabled (U0) state, an interrupt is issued and the interrupt handler will clear the bus_state->port_remote_wakeup resume flag, allowing bus suspend.
If the USB3 roothub thread just finished reading port status before the interrupt, finding ports still in suspended (U3) state, but hasn't yet started suspending the hub, then the xhci interrupt handler will clear the flag that prevented roothub suspend and allow bus to suspend, forcing all port links back to suspended (U3) state.
Example case:
 usb_runtime_suspend()        # because all ports still show suspended U3
  usb_suspend_both()
   hub_suspend();             # successful as hub->wakeup_bits not set yet
==> INTERRUPT
    xhci_irq()
     handle_port_status()
      clear bus_state->port_remote_wakeup
     usb_wakeup_notification()
      sets hub->wakeup_bits;
      kick_hub_wq()
<== END INTERRUPT
   hcd_bus_suspend()
    xhci_bus_suspend()        # success as port_remote_wakeup bits cleared
Fix this by increasing the roothub usage count during port resume to prevent roothub autosuspend, and by making sure the bus_state->port_remote_wakeup flag is only cleared after resume completion is visible, i.e. after the xhci roothub has returned U0 or another non-U3 link state on a get port status request.
Issue root-caused by Chiasheng Lee
Cc: stable@vger.kernel.org Cc: Lee, Hou-hsun hou-hsun.lee@intel.com Reported-by: Lee, Chiasheng chiasheng.lee@intel.com Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20191211142007.8847-3-mathias.nyman@linux.intel.co... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/usb/host/xhci-hub.c | 8 ++++++++ drivers/usb/host/xhci-ring.c | 3 +-- 2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c index 843b774..9772c0d 100644 --- a/drivers/usb/host/xhci-hub.c +++ b/drivers/usb/host/xhci-hub.c @@ -868,6 +868,14 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd, status |= USB_PORT_STAT_C_BH_RESET << 16; if ((raw_port_status & PORT_CEC)) status |= USB_PORT_STAT_C_CONFIG_ERROR << 16; + + /* USB3 remote wake resume signaling completed */ + if (bus_state->port_remote_wakeup & (1 << wIndex) && + (raw_port_status & PORT_PLS_MASK) != XDEV_RESUME && + (raw_port_status & PORT_PLS_MASK) != XDEV_RECOVERY) { + bus_state->port_remote_wakeup &= ~(1 << wIndex); + usb_hcd_end_port_resume(&hcd->self, wIndex); + } }
if (hcd->speed < HCD_USB3) { diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index b039749..98b6760 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -1609,7 +1609,6 @@ static void handle_port_status(struct xhci_hcd *xhci, slot_id = xhci_find_slot_id_by_port(hcd, xhci, hcd_portnum + 1); if (slot_id && xhci->devs[slot_id]) xhci->devs[slot_id]->flags |= VDEV_PORT_ERROR; - bus_state->port_remote_wakeup &= ~(1 << hcd_portnum); }
if ((portsc & PORT_PLC) && (portsc & PORT_PLS_MASK) == XDEV_RESUME) { @@ -1630,6 +1629,7 @@ static void handle_port_status(struct xhci_hcd *xhci, bus_state->port_remote_wakeup |= 1 << hcd_portnum; xhci_test_and_clear_bit(xhci, port, PORT_PLC); xhci_set_link_state(xhci, port, XDEV_U0); + usb_hcd_start_port_resume(&hcd->self, hcd_portnum); /* Need to wait until the next link state change * indicates the device is actually in U0. */ @@ -1669,7 +1669,6 @@ static void handle_port_status(struct xhci_hcd *xhci, if (slot_id && xhci->devs[slot_id]) xhci_ring_device(xhci, slot_id); if (bus_state->port_remote_wakeup & (1 << hcd_portnum)) { - bus_state->port_remote_wakeup &= ~(1 << hcd_portnum); xhci_test_and_clear_bit(xhci, port, PORT_PLC); usb_wakeup_notification(hcd->self.root_hub, hcd_portnum + 1);
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
Merge 45 patches from 4.19.91 stable branch (48 total) beside 3 already merged patches: 248e65f PCI: pciehp: Avoid returning prematurely from sysfs requests 6970c15 dm btree: increase rebalance threshold in __rebalance2() 5dc3f40 scsi: qla2xxx: Change discovery state before PLOGI
Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Makefile b/Makefile index 39bdbaf..4e07d78 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 4 PATCHLEVEL = 19 -SUBLEVEL = 90 +SUBLEVEL = 91 EXTRAVERSION = NAME = "People's Front"