Hi Jakub,
On 2021/3/18 9:28, Jakub Kicinski wrote:
On Thu, 18 Mar 2021 09:02:54 +0800 Huazhong Tan wrote:
On 2021/3/16 4:04, Jakub Kicinski wrote:
On Mon, 15 Mar 2021 20:23:50 +0800 Huazhong Tan wrote:
From: Jian Shen <shenjian15@huawei.com>
Device version V3 supports queue bonding, which can identify the tuple information of a TCP stream and create flow director rules automatically, in order to keep the tx and rx packets of the stream in the same queue pair. The driver sets the FD_ADD field of the TX BD for a TCP SYN packet, and sets the FD_DEL field for a TCP FIN or RST packet. The hardware creates or removes an FD rule according to the TX BD, and it also supports aging out a rule if it is not hit for a long time.
Queue bonding mode is disabled by default, and can be enabled/disabled with the ethtool priv-flags command.
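Assuming the mode is exposed as an ethtool private flag, toggling it would look like the following (the `queue_bonding` flag name here is illustrative, not taken from the patch; the real name comes from the driver's priv-flags strings):

```shell
# List the driver's private flags; the actual flag name for this
# feature appears in this output (queue_bonding is a placeholder)
ethtool --show-priv-flags eth0

# Enable or disable queue bonding mode
ethtool --set-priv-flags eth0 queue_bonding on
ethtool --set-priv-flags eth0 queue_bonding off
```
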
This seems like fairly well defined behavior, IMHO we should have a full device feature for it, rather than a private flag.
Should we add a NETIF_F_NTUPLE_HW feature for it?
It'd be better to keep the configuration close to the existing RFS config, no? Perhaps a new file under
/sys/class/net/$dev/queues/rx-$id/
to enable the feature would be more appropriate?
Otherwise I'd call it something like NETIF_F_RFS_AUTO ?
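If the sysfs route were taken, the knob could sit alongside the existing RPS files for each rx queue (the `auto_fd` file name below is purely hypothetical, chosen only to illustrate the placement):

```shell
# Existing RFS/RPS knobs live per rx queue, e.g. rps_cpus and
# rps_flow_cnt; a new boolean file (auto_fd is a made-up name)
# would follow the same pattern
echo 1 > /sys/class/net/eth0/queues/rx-0/auto_fd
cat /sys/class/net/eth0/queues/rx-0/auto_fd
```
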
I noticed that the enum NETIF_F_XXX_BIT has already used all 64 bits since NETIF_F_HW_HSR_DUP_BIT was added, while the underlying type of netdev_features_t is u64. So there is no usable bit left for a new feature, if I understand correctly.
Is there any solution or plan for it?
Alex, any thoughts? IIRC Intel HW had a similar feature?
Does the device need to be able to parse the frame fully for this mechanism to work? Will it work even if the TCP segment is encapsulated in a custom tunnel?
No, custom tunnels are not supported.
Hm, okay, it's just queue mapping; if the device gets it wrong, it's not the end of the world (provided security boundaries are preserved).