Update dpdk.spec due to backporting PMD and lib patches.
Signed-off-by: Huisong Li lihuisong@huawei.com --- dpdk.spec | 77 ++++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 76 insertions(+), 1 deletion(-)
diff --git a/dpdk.spec b/dpdk.spec
index d5522d1..971a988 100644
--- a/dpdk.spec
+++ b/dpdk.spec
@@ -1,6 +1,6 @@
 Name: dpdk
 Version: 21.11
-Release: 18
+Release: 19
 Packager: packaging@6wind.com
 URL: http://dpdk.org
 %global source_version 21.11
@@ -137,6 +137,72 @@
 Patch6003: backport-0001-CVE-2022-2132.patch
 Patch6004: backport-0002-CVE-2022-2132.patch
 Patch6005: backport-CVE-2022-28199.patch
+Patch9125: 0125-net-hns3-fix-link-status-capability-query-from-VF.patch
+Patch9126: 0126-net-hns3-support-backplane-media-type.patch
+Patch9127: 0127-net-hns3-cancel-heartbeat-alarm-when-VF-reset.patch
+Patch9128: 0128-net-hns3-fix-PTP-interrupt-logging.patch
+Patch9129: 0129-net-hns3-fix-statistics-locking.patch
+Patch9130: 0130-net-hns3-fix-descriptors-check-with-SVE.patch
+Patch9131: 0131-net-hns3-clean-some-functions.patch
+Patch9132: 0132-net-hns3-delete-unused-code.patch
+Patch9133: 0133-examples-dma-support-dequeue-when-no-packet-received.patch
+Patch9134: 0134-net-hns3-add-dump-of-VF-VLAN-filter-modify-capabilit.patch
+Patch9135: 0135-net-hns3-fix-Rx-with-PTP.patch
+Patch9136: 0136-net-hns3-fix-crash-in-SVE-Tx.patch
+Patch9137: 0137-net-hns3-fix-next-to-use-overflow-in-SVE-Tx.patch
+Patch9138: 0138-net-hns3-fix-next-to-use-overflow-in-simple-Tx.patch
+Patch9139: 0139-net-hns3-optimize-SVE-Tx-performance.patch
+Patch9140: 0140-net-hns3-fix-crash-when-secondary-process-access-FW.patch
+Patch9141: 0141-net-hns3-delete-unused-markup.patch
+Patch9142: 0142-net-hns3-fix-clearing-hardware-MAC-statistics.patch
+Patch9143: 0143-net-hns3-revert-Tx-performance-optimization.patch
+Patch9144: 0144-net-hns3-fix-RSS-rule-restore.patch
+Patch9145: 0145-net-hns3-fix-RSS-filter-restore.patch
+Patch9146: 0146-net-hns3-fix-lock-protection-of-RSS-flow-rule.patch
+Patch9147: 0147-net-hns3-fix-RSS-flow-rule-restore.patch
+Patch9148: 0148-net-hns3-move-flow-direction-rule-recovery.patch
+Patch9149: 0149-net-hns3-fix-restore-filter-function-input.patch
+Patch9150: 0150-net-hns3-fix-build-with-gcov.patch
+Patch9151: 0151-net-hns3-fix-packet-type-for-GENEVE.patch
+Patch9152: 0152-net-hns3-remove-magic-numbers-for-MAC-address.patch
+Patch9153: 0153-net-hns3-fix-code-check-warnings.patch
+Patch9154: 0154-net-hns3-fix-header-files-includes.patch
+Patch9155: 0155-net-hns3-remove-unused-structures.patch
+Patch9156: 0156-net-hns3-rename-header-guards.patch
+Patch9157: 0157-net-hns3-fix-IPv4-and-IPv6-RSS.patch
+Patch9158: 0158-net-hns3-fix-types-in-IPv6-SCTP-fields.patch
+Patch9159: 0159-net-hns3-fix-IPv4-RSS.patch
+Patch9160: 0160-net-hns3-add-check-for-L3-and-L4-type.patch
+Patch9161: 0161-net-hns3-revert-fix-mailbox-communication-with-HW.patch
+Patch9162: 0162-net-hns3-fix-VF-mailbox-message-handling.patch
+Patch9163: 0163-net-hns3-fix-minimum-Tx-frame-length.patch
+Patch9164: 0164-ethdev-introduce-Rx-Tx-descriptor-dump-API.patch
+Patch9165: 0165-net-hns3-support-Rx-Tx-descriptor-dump.patch
+Patch9166: 0166-remove-unnecessary-null-checks.patch
+Patch9167: 0167-ethdev-introduce-generic-dummy-packet-burst-function.patch
+Patch9168: 0168-fix-spelling-in-comments-and-strings.patch
+Patch9169: 0169-net-hns3-add-VLAN-filter-query-in-dump-file.patch
+Patch9170: 0170-net-bonding-fix-array-overflow-in-Rx-burst.patch
+Patch9171: 0171-net-bonding-fix-double-slave-link-status-query.patch
+Patch9172: 0172-app-testpmd-fix-supported-RSS-offload-display.patch
+Patch9173: 0173-app-testpmd-unify-name-of-L2-payload-offload.patch
+Patch9174: 0174-app-testpmd-refactor-config-all-RSS-command.patch
+Patch9175: 0175-app-testpmd-unify-RSS-types-display.patch
+Patch9176: 0176-app-testpmd-compact-RSS-types-output.patch
+Patch9177: 0177-app-testpmd-reorder-RSS-type-table.patch
+Patch9178: 0178-app-testpmd-fix-RSS-types-display.patch
+Patch9179: 0179-ethdev-support-telemetry-private-dump.patch
+Patch9180: 0180-dmadev-add-telemetry.patch
+Patch9181: 0181-dmadev-support-telemetry-dump-dmadev.patch
+Patch9182: 0182-telemetry-add-missing-C-guards.patch
+Patch9183: 0183-telemetry-add-missing-C-guards.patch
+Patch9184: 0184-telemetry-fix-escaping-of-invalid-json-characters.patch
+Patch9185: 0185-telemetry-add-escaping-of-strings-in-arrays.patch
+Patch9186: 0186-telemetry-add-escaping-of-strings-in-dicts.patch
+Patch9187: 0187-telemetry-limit-command-characters.patch
+Patch9188: 0188-telemetry-eliminate-duplicate-code-for-json-output.patch
+Patch9189: 0189-telemetry-make-help-command-more-helpful.patch
+
 Summary: Data Plane Development Kit core
 Group: System Environment/Libraries
 License: BSD and LGPLv2 and GPLv2
@@ -264,6 +330,15 @@
 strip -g $RPM_BUILD_ROOT/lib/modules/%{kern_devel_ver}/extra/dpdk/igb_uio.ko
 /usr/sbin/depmod

 %changelog
+* Thu Oct 21 2022 Huisong Li <lihuisong@huawei.com> - 21.11-19
+- add some bugfixes for hns3
+- revert Tx performance optimization for hns3
+- refactor some RSS commands for testpmd
+- add Rx/Tx descriptor dump feature for hns3
+- add ethdev telemetry private dump
+- add dmadev telemetry
+- sync telemetry lib
+
 * Thu Oct 6 2022 wuchangsheng <wuchangsheng2@huawei.com> - 21.11-18
 - reinit support return ok
Currently, the VF LSC capability is obtained from the PF driver in the mailbox interrupt thread, which is asynchronous. The VF driver waits up to 500 ms for this capability during the probe process.

When a device is attached in the secondary process, the primary process receives a message and performs the probe in the interrupt thread context. In this case, the VF driver never obtains the capability from the PF.

The root cause is that 'vf->pf_push_lsc_cap' is not updated by the mailbox handling thread until the probe finishes. The update cannot happen because the mailbox interrupt handler and the probe alarm both run in the same epoll thread, and the probe alarm runs before the mailbox interrupt handler.
Fixes: 9bc2289fe5ea ("net/hns3: refactor VF LSC event report") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev_vf.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 7323e47f15..b85f68cb1d 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -778,6 +778,14 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw)

 	while (remain_ms > 0) {
 		rte_delay_ms(HNS3_POLL_RESPONE_MS);
+		/*
+		 * The probe process may perform in interrupt thread context.
+		 * For example, users attach a device in the secondary process.
+		 * At the moment, the handling mailbox task will be blocked. So
+		 * driver has to actively handle the HNS3_MBX_LINK_STAT_CHANGE
+		 * mailbox from PF driver to get this capability.
+		 */
+		hns3_dev_handle_mbx_msg(hw);
 		if (__atomic_load_n(&vf->pf_push_lsc_cap, __ATOMIC_ACQUIRE) !=
 		    HNS3_PF_PUSH_LSC_CAP_UNKNOWN)
 			break;
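The deadlock described above can be modeled outside DPDK: a wait loop running in the same thread as the event handler never sees the update unless it services the mailbox itself. A minimal sketch under that assumption (all names are illustrative, not the driver's real symbols):

```c
#include <stdbool.h>

/* Hypothetical capability states, in the spirit of HNS3_PF_PUSH_LSC_CAP_*. */
enum cap_state { CAP_UNKNOWN, CAP_SUPPORTED };

static enum cap_state cap = CAP_UNKNOWN;
static int pending_msgs = 1;   /* one mailbox reply queued by the "PF" */

/* Simulated mailbox handler: normally run by the interrupt thread. */
static void handle_mbx_msg(void)
{
	if (pending_msgs > 0) {
		pending_msgs--;
		cap = CAP_SUPPORTED;
	}
}

/* Probe-time wait loop: it actively drains the mailbox itself, because
 * the interrupt thread that would normally do so is occupied by us. */
static bool wait_for_cap(int max_polls)
{
	int i;

	for (i = 0; i < max_polls; i++) {
		handle_mbx_msg();   /* the fix: poll instead of only waiting */
		if (cap != CAP_UNKNOWN)
			return true;
	}
	return false;
}
```

Without the handle_mbx_msg() call inside the loop, the flag would stay CAP_UNKNOWN forever in this single-threaded model, which mirrors the probe timeout the patch fixes.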
From: Chengwen Feng fengchengwen@huawei.com
The 802.3 physical PMA sub-layer defines three media: copper, fiber and backplane. From the PMD's perspective, backplane is similar to fiber; the main difference is that backplane has no optical module.

Because the firmware interface for fiber also applies to backplane, this patch supports the backplane media type with only a simple extension.
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev.c | 54 +++++++++++++++++++--------------- 1 file changed, 30 insertions(+), 24 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 8a8f3f1950..5632b82078 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -2788,11 +2788,8 @@ hns3_check_media_type(struct hns3_hw *hw, uint8_t media_type) } break; case HNS3_MEDIA_TYPE_FIBER: - ret = 0; - break; case HNS3_MEDIA_TYPE_BACKPLANE: - PMD_INIT_LOG(ERR, "Media type is Backplane, not supported."); - ret = -EOPNOTSUPP; + ret = 0; break; default: PMD_INIT_LOG(ERR, "Unknown media type = %u!", media_type); @@ -4245,14 +4242,11 @@ hns3_update_link_info(struct rte_eth_dev *eth_dev) { struct hns3_adapter *hns = eth_dev->data->dev_private; struct hns3_hw *hw = &hns->hw; - int ret = 0;
if (hw->mac.media_type == HNS3_MEDIA_TYPE_COPPER) - ret = hns3_update_copper_link_info(hw); - else if (hw->mac.media_type == HNS3_MEDIA_TYPE_FIBER) - ret = hns3_update_fiber_link_info(hw); + return hns3_update_copper_link_info(hw);
- return ret; + return hns3_update_fiber_link_info(hw); }
static int @@ -4545,11 +4539,13 @@ hns3_get_port_supported_speed(struct rte_eth_dev *eth_dev) if (ret) return ret;
- if (mac->media_type == HNS3_MEDIA_TYPE_FIBER) { + if (mac->media_type == HNS3_MEDIA_TYPE_FIBER || + mac->media_type == HNS3_MEDIA_TYPE_BACKPLANE) { /* * Some firmware does not support the report of supported_speed, - * and only report the effective speed of SFP. In this case, it - * is necessary to use the SFP's speed as the supported_speed. + * and only report the effective speed of SFP/backplane. In this + * case, it is necessary to use the SFP/backplane's speed as the + * supported_speed. */ if (mac->supported_speed == 0) mac->supported_speed = @@ -4811,7 +4807,7 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
if (mac->media_type == HNS3_MEDIA_TYPE_COPPER) speed_bit = hns3_convert_link_speeds2bitmap_copper(link_speeds); - else if (mac->media_type == HNS3_MEDIA_TYPE_FIBER) + else speed_bit = hns3_convert_link_speeds2bitmap_fiber(link_speeds);
if (!(speed_bit & supported_speed)) { @@ -4955,6 +4951,19 @@ hns3_set_fiber_port_link_speed(struct hns3_hw *hw, return hns3_cfg_mac_speed_dup(hw, cfg->speed, cfg->duplex); }
+static const char * +hns3_get_media_type_name(uint8_t media_type) +{ + if (media_type == HNS3_MEDIA_TYPE_FIBER) + return "fiber"; + else if (media_type == HNS3_MEDIA_TYPE_COPPER) + return "copper"; + else if (media_type == HNS3_MEDIA_TYPE_BACKPLANE) + return "backplane"; + else + return "unknown"; +} + static int hns3_set_port_link_speed(struct hns3_hw *hw, struct hns3_set_link_speed_cfg *cfg) @@ -4969,18 +4978,15 @@ hns3_set_port_link_speed(struct hns3_hw *hw, #endif
ret = hns3_set_copper_port_link_speed(hw, cfg); - if (ret) { - hns3_err(hw, "failed to set copper port link speed," - "ret = %d.", ret); - return ret; - } - } else if (hw->mac.media_type == HNS3_MEDIA_TYPE_FIBER) { + } else { ret = hns3_set_fiber_port_link_speed(hw, cfg); - if (ret) { - hns3_err(hw, "failed to set fiber port link speed," - "ret = %d.", ret); - return ret; - } + } + + if (ret) { + hns3_err(hw, "failed to set %s port link speed, ret = %d.", + hns3_get_media_type_name(hw->mac.media_type), + ret); + return ret; }
return 0;
The purpose of the heartbeat alarm is to keep the VF alive. The mailbox channel is disabled while the VF is being reset, so the heartbeat mailbox message fails to send, and as long as the reset is incomplete the heartbeat send-failure error is printed continuously. In fact, the VF is set alive again when it restores its configuration. So the heartbeat alarm can be canceled when preparing to start the reset, and restarted when the service starts.
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev_vf.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index b85f68cb1d..0dea63e8be 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -1977,6 +1977,8 @@ hns3vf_stop_service(struct hns3_adapter *hns) } else hw->reset.mbuf_deferred_free = false;
+ rte_eal_alarm_cancel(hns3vf_keep_alive_handler, eth_dev); + /* * It is cumbersome for hardware to pick-and-choose entries for deletion * from table space. Hence, for function reset software intervention is @@ -1998,6 +2000,10 @@ hns3vf_start_service(struct hns3_adapter *hns) eth_dev = &rte_eth_devices[hw->data->port_id]; hns3_set_rxtx_function(eth_dev); hns3_mp_req_start_rxtx(eth_dev); + + rte_eal_alarm_set(HNS3VF_KEEP_ALIVE_INTERVAL, hns3vf_keep_alive_handler, + eth_dev); + if (hw->adapter_state == HNS3_NIC_STARTED) { hns3vf_start_poll_job(eth_dev);
The PMD receives a PTP interrupt whenever a PTP packet is received, but the driver does not distinguish this event. As a result, many unknown-event messages are printed when many PTP packets arrive on the link. The PTP interrupt is normal, so this patch ignores it without logging.
Fixes: 38b539d96eb6 ("net/hns3: support IEEE 1588 PTP") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 5632b82078..7c9938b96e 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -318,7 +318,7 @@ hns3_interrupt_handler(void *param)
 		hns3_schedule_reset(hns);
 	} else if (event_cause == HNS3_VECTOR0_EVENT_MBX) {
 		hns3_dev_handle_mbx_msg(hw);
-	} else {
+	} else if (event_cause != HNS3_VECTOR0_EVENT_PTP) {
 		hns3_warn(hw, "received unknown event: vector0_int_stat:0x%x "
 			  "ras_int_stat:0x%x cmdq_int_stat:0x%x",
 			  vector0_int, ras_int, cmdq_int);
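The one-line change follows a common dispatcher pattern: classify the event cause first and warn only for genuinely unknown causes. A standalone sketch of that classification (the enum values are illustrative stand-ins for HNS3_VECTOR0_EVENT_*):

```c
#include <stdbool.h>

/* Illustrative event causes; the driver's real values differ. */
enum event_cause { EVT_ERR, EVT_RST, EVT_MBX, EVT_PTP, EVT_OTHER };

/* Return true when the event should be logged as unknown. Known causes
 * are handled elsewhere; the PTP interrupt is expected traffic, so it
 * is silently ignored rather than reported. */
static bool should_warn_unknown(enum event_cause cause)
{
	switch (cause) {
	case EVT_ERR:
	case EVT_RST:
	case EVT_MBX:
	case EVT_PTP:	/* normal when PTP packets arrive: no warning */
		return false;
	default:
		return true;
	}
}
```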
The stats_lock is used to protect statistics updates in the stats APIs and the periodic task, but the current code only protects queue-related statistics.
Fixes: a65342d9d5d2 ("net/hns3: fix MAC and queues HW statistics overflow") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_stats.c | 22 +++++++++------------- 1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c index bf8af4531f..d56d3ec174 100644 --- a/drivers/net/hns3/hns3_stats.c +++ b/drivers/net/hns3/hns3_stats.c @@ -629,6 +629,7 @@ hns3_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *rte_stats) uint16_t i; int ret;
+ rte_spinlock_lock(&hw->stats_lock); /* Update imissed stats */ ret = hns3_update_imissed_stats(hw, false); if (ret) { @@ -644,10 +645,7 @@ hns3_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *rte_stats) if (rxq == NULL) continue;
- rte_spinlock_lock(&hw->stats_lock); hns3_rcb_rx_ring_stats_get(rxq, stats); - rte_spinlock_unlock(&hw->stats_lock); - rte_stats->ierrors += rxq->err_stats.l2_errors + rxq->err_stats.pkt_len_errors; rte_stats->ibytes += rxq->basic_stats.bytes; @@ -659,9 +657,7 @@ hns3_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *rte_stats) if (txq == NULL) continue;
- rte_spinlock_lock(&hw->stats_lock); hns3_rcb_tx_ring_stats_get(txq, stats); - rte_spinlock_unlock(&hw->stats_lock); rte_stats->obytes += txq->basic_stats.bytes; }
@@ -683,7 +679,10 @@ hns3_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *rte_stats) rte_stats->opackets = stats->rcb_tx_ring_pktnum_rcd - rte_stats->oerrors; rte_stats->rx_nombuf = eth_dev->data->rx_mbuf_alloc_failed; + out: + rte_spinlock_unlock(&hw->stats_lock); + return ret; }
@@ -697,6 +696,7 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev) uint16_t i; int ret;
+ rte_spinlock_lock(&hw->stats_lock); /* * Note: Reading hardware statistics of imissed registers will * clear them. @@ -732,7 +732,6 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev) if (rxq == NULL) continue;
- rte_spinlock_lock(&hw->stats_lock); memset(&rxq->basic_stats, 0, sizeof(struct hns3_rx_basic_stats));
@@ -740,7 +739,6 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev) (void)hns3_read_dev(rxq, HNS3_RING_RX_PKTNUM_RECORD_REG); rxq->err_stats.pkt_len_errors = 0; rxq->err_stats.l2_errors = 0; - rte_spinlock_unlock(&hw->stats_lock); }
/* Clear all the stats of a txq in a loop to keep them synchronized */ @@ -749,19 +747,18 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev) if (txq == NULL) continue;
- rte_spinlock_lock(&hw->stats_lock); memset(&txq->basic_stats, 0, sizeof(struct hns3_tx_basic_stats));
/* This register is read-clear */ (void)hns3_read_dev(txq, HNS3_RING_TX_PKTNUM_RECORD_REG); - rte_spinlock_unlock(&hw->stats_lock); }
- rte_spinlock_lock(&hw->stats_lock); hns3_tqp_stats_clear(hw); - rte_spinlock_unlock(&hw->stats_lock); + out: + rte_spinlock_unlock(&hw->stats_lock); + return ret; }
@@ -1082,11 +1079,11 @@ hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, count++; } } - rte_spinlock_unlock(&hw->stats_lock);
ret = hns3_update_imissed_stats(hw, false); if (ret) { hns3_err(hw, "update imissed stats failed, ret = %d", ret); + rte_spinlock_unlock(&hw->stats_lock); return ret; }
@@ -1115,7 +1112,6 @@ hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, } }
- rte_spinlock_lock(&hw->stats_lock); hns3_tqp_dfx_stats_get(dev, xstats, &count); hns3_queue_stats_get(dev, xstats, &count); rte_spinlock_unlock(&hw->stats_lock);
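The reorganization above widens the lock scope from per-queue snippets to the whole function, releasing once at the `out:` label on every path. A generic sketch of that shape, with a plain flag standing in for rte_spinlock_t (all names hypothetical):

```c
/* Stand-in for rte_spinlock_t; the point here is the lock scope and
 * the single-exit unlock, not the locking primitive itself. */
static int stats_lock;
static long rx_stats[4];

static void stats_lock_acquire(void) { stats_lock = 1; }
static void stats_lock_release(void) { stats_lock = 0; }

/* One acquisition covers every counter; all exit paths, including the
 * error path, funnel through the single release at the "out" label. */
static int stats_get(long *out)
{
	long total = 0;
	int i, ret = 0;

	stats_lock_acquire();
	for (i = 0; i < 4; i++) {
		if (rx_stats[i] < 0) {	/* simulated hardware read failure */
			ret = -1;
			goto out;
		}
		total += rx_stats[i];
	}
	*out = total;
out:
	stats_lock_release();
	return ret;
}
```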
From: Chengwen Feng fengchengwen@huawei.com
The SVE and NEON algorithms have the same requirements for nb_desc, but nb_desc is verified only when NEON is used.
Fixes: fa29fe45a7b4 ("net/hns3: support queue start and stop") Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rxtx.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 0dc1d8cb60..b7fe2352a1 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1759,7 +1759,8 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
 		return -EINVAL;
 	}

-	if (pkt_burst == hns3_recv_pkts_vec) {
+	if (pkt_burst == hns3_recv_pkts_vec ||
+	    pkt_burst == hns3_recv_pkts_vec_sve) {
 		min_vec_bds = HNS3_DEFAULT_RXQ_REARM_THRESH +
 			      HNS3_DEFAULT_RX_BURST;
 		if (nb_desc < min_vec_bds ||
From: Dongdong Liu liudongdong3@huawei.com
Delete unnecessary code and adjust the remaining code to make it cleaner.
Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rxtx.c | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index b7fe2352a1..840ca384ce 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -1909,8 +1909,6 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc, rxq->pvid_sw_discard_en = false; rxq->ptype_en = hns3_dev_get_support(hw, RXD_ADV_LAYOUT) ? true : false; rxq->configured = true; - rxq->io_base = (void *)((char *)hw->io_base + HNS3_TQP_REG_OFFSET + - idx * HNS3_TQP_REG_SIZE); rxq->io_base = (void *)((char *)hw->io_base + hns3_get_tqp_reg_offset(idx)); rxq->io_head_reg = (volatile void *)((char *)rxq->io_base + @@ -2442,10 +2440,8 @@ hns3_recv_pkts_simple(void *rx_queue,
nmb = hns3_rx_alloc_buffer(rxq); if (unlikely(nmb == NULL)) { - uint16_t port_id; - - port_id = rxq->port_id; - rte_eth_devices[port_id].data->rx_mbuf_alloc_failed++; + rte_eth_devices[rxq->port_id].data-> + rx_mbuf_alloc_failed++; break; }
@@ -3870,7 +3866,7 @@ hns3_prep_pkt_proc(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m) #endif if (hns3_pkt_is_tso(m)) { if (hns3_pkt_need_linearized(m, m->nb_segs, - tx_queue->max_non_tso_bd_num) || + tx_queue->max_non_tso_bd_num) || hns3_check_tso_pkt_valid(m)) { rte_errno = EINVAL; return -EINVAL;
From: Dongdong Liu liudongdong3@huawei.com
The RTE_HNS3_ONLY_1630_FPGA macro is not in use, so delete the code.
Fixes: 2192c428f9a6 ("net/hns3: fix firmware compatibility configuration") Fixes: bdaf190f8235 ("net/hns3: support link speed autoneg for PF") Cc: stable@dpdk.org
Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_cmd.c | 33 --------------------------------- drivers/net/hns3/hns3_ethdev.c | 11 ++--------- 2 files changed, 2 insertions(+), 42 deletions(-)
diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c index e3d096d9cb..50cb3eabb1 100644 --- a/drivers/net/hns3/hns3_cmd.c +++ b/drivers/net/hns3/hns3_cmd.c @@ -631,39 +631,6 @@ hns3_firmware_compat_config(struct hns3_hw *hw, bool is_init) struct hns3_cmd_desc desc; uint32_t compat = 0;
-#if defined(RTE_HNS3_ONLY_1630_FPGA) - /* If resv reg enabled phy driver of imp is not configured, driver - * will use temporary phy driver. - */ - struct rte_pci_device *pci_dev; - struct rte_eth_dev *eth_dev; - uint8_t revision; - int ret; - - eth_dev = &rte_eth_devices[hw->data->port_id]; - pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - /* Get PCI revision id */ - ret = rte_pci_read_config(pci_dev, &revision, HNS3_PCI_REVISION_ID_LEN, - HNS3_PCI_REVISION_ID); - if (ret != HNS3_PCI_REVISION_ID_LEN) { - PMD_INIT_LOG(ERR, "failed to read pci revision id, ret = %d", - ret); - return -EIO; - } - if (revision == PCI_REVISION_ID_HIP09_A) { - struct hns3_pf *pf = HNS3_DEV_HW_TO_PF(hw); - if (hns3_dev_get_support(hw, COPPER) == 0 || pf->is_tmp_phy) { - PMD_INIT_LOG(ERR, "***use temp phy driver in dpdk***"); - pf->is_tmp_phy = true; - hns3_set_bit(hw->capability, - HNS3_DEV_SUPPORT_COPPER_B, 1); - return 0; - } - - PMD_INIT_LOG(ERR, "***use phy driver in imp***"); - } -#endif - hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_FIRMWARE_COMPAT_CFG, false); req = (struct hns3_firmware_compat_cmd *)desc.data;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 7c9938b96e..401736f5a6 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -4970,17 +4970,10 @@ hns3_set_port_link_speed(struct hns3_hw *hw, { int ret;
- if (hw->mac.media_type == HNS3_MEDIA_TYPE_COPPER) { -#if defined(RTE_HNS3_ONLY_1630_FPGA) - struct hns3_pf *pf = HNS3_DEV_HW_TO_PF(hw); - if (pf->is_tmp_phy) - return 0; -#endif - + if (hw->mac.media_type == HNS3_MEDIA_TYPE_COPPER) ret = hns3_set_copper_port_link_speed(hw, cfg); - } else { + else ret = hns3_set_fiber_port_link_speed(hw, cfg); - }
if (ret) { hns3_err(hw, "failed to set %s port link speed, ret = %d.",
From: Chengwen Feng fengchengwen@huawei.com
Currently the example uses DMA in asynchronous mode, which works as follows:
	nb_rx = rte_eth_rx_burst();
	if (nb_rx == 0)
		continue;
	...
	dma_enqueue(); // enqueue the received packets' copy requests
	nb_cpl = dma_dequeue(); // get copy-completed packets
	...

There is no waiting inside dma_dequeue(), which is why this mode is called asynchronous. If no packets are received, dma_dequeue() is not called, but some packets enqueued in the previous cycle may still be in the DMA queue. As a result, when the traffic is stopped, the sent and received packet counts are unbalanced from the perspective of the traffic generator.

This patch supports DMA dequeue even when no packets are received, which helps judge the test result by comparing the sent packets with the received packets on the traffic generator side.
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Acked-by: Bruce Richardson bruce.richardson@intel.com Acked-by: Kevin Laatz kevin.laatz@intel.com --- examples/dma/dmafwd.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index 9b17b40dbf..b06042e5fe 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -408,8 +408,13 @@ dma_rx_port(struct rxtx_port_config *rx_config)
 		nb_rx = rte_eth_rx_burst(rx_config->rxtx_port, i,
 			pkts_burst, MAX_PKT_BURST);

-		if (nb_rx == 0)
+		if (nb_rx == 0) {
+			if (copy_mode == COPY_MODE_DMA_NUM &&
+			    (nb_rx = dma_dequeue(pkts_burst, pkts_burst_copy,
+				MAX_PKT_BURST, rx_config->dmadev_ids[i])) > 0)
+				goto handle_tx;
 			continue;
+		}

 		port_statistics.rx[rx_config->rxtx_port] += nb_rx;

@@ -450,6 +455,7 @@ dma_rx_port(struct rxtx_port_config *rx_config)
 				pkts_burst_copy[j]);
 		}

+handle_tx:
 		rte_mempool_put_bulk(dma_pktmbuf_pool,
 			(void *)pkts_burst, nb_rx);
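The control-flow change can be sketched independently of DPDK: when the receive burst is empty, still drain the copy-completion queue and jump straight to the transmit stage. A simplified model with simulated queues (all names hypothetical):

```c
/* Packets still in flight in the simulated DMA engine. */
static int dma_inflight;

static int rx_burst(void) { return 0; }	/* traffic has stopped */

/* Drain all completed copies from the DMA queue. */
static int dma_dequeue_sim(void)
{
	int n = dma_inflight;

	dma_inflight = 0;
	return n;
}

/* One Rx cycle; returns the number of packets handed to Tx. */
static int rx_cycle(void)
{
	int nb_rx = rx_burst();

	if (nb_rx == 0) {
		nb_rx = dma_dequeue_sim();   /* the fix: dequeue anyway */
		if (nb_rx == 0)
			return 0;
		goto handle_tx;
	}
	dma_inflight += nb_rx;		/* enqueue copy requests */
	nb_rx = dma_dequeue_sim();
handle_tx:
	return nb_rx;
}
```

In this model, packets left in flight from a previous cycle are still forwarded once traffic stops, which is exactly what balances the generator's sent and received counts.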
From: Jie Hai haijie1@huawei.com
Show whether modifying the VF VLAN filter is supported. Sample output change:
+  -- support VF VLAN FILTER MOD: Yes
Signed-off-by: Jie Hai haijie1@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_dump.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c
index ef3e5c0fb4..646e93d8e6 100644
--- a/drivers/net/hns3/hns3_dump.c
+++ b/drivers/net/hns3/hns3_dump.c
@@ -98,6 +98,7 @@ hns3_get_dev_feature_capability(FILE *file, struct hns3_hw *hw)
 		{HNS3_DEV_SUPPORT_OUTER_UDP_CKSUM_B, "OUTER UDP CKSUM"},
 		{HNS3_DEV_SUPPORT_RAS_IMP_B, "RAS IMP"},
 		{HNS3_DEV_SUPPORT_TM_B, "TM"},
+		{HNS3_DEV_SUPPORT_VF_VLAN_FLT_MOD_B, "VF VLAN FILTER MOD"},
 	};
 	uint32_t i;
The Rx and Tx vector algorithms of the hns3 PMD do not support the PTP function. Currently, the hns3 driver uses 'pf->ptp_enable' to check whether PTP is enabled and, if so, to avoid selecting the Rx and Tx vector algorithms. But this variable is only set when rte_eth_timesync_enable() is called, so it may not be set before the Rx/Tx functions are selected. Consider this case: set the PTP offload in dev_configure(), call dev_start(), and then call rte_eth_timesync_enable(). In that case, no PTP packets can be received by the application. So this patch bases the check on the RTE_ETH_RX_OFFLOAD_TIMESTAMP flag instead.
Fixes: 3ca3dcd65101 ("net/hns3: fix vector Rx/Tx when PTP enabled") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ptp.c | 1 - drivers/net/hns3/hns3_rxtx_vec.c | 20 +++++++++----------- 2 files changed, 9 insertions(+), 12 deletions(-)
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c index 0b0061bba5..6bbd85ba23 100644 --- a/drivers/net/hns3/hns3_ptp.c +++ b/drivers/net/hns3/hns3_ptp.c @@ -125,7 +125,6 @@ hns3_timesync_enable(struct rte_eth_dev *dev)
if (pf->ptp_enable) return 0; - hns3_warn(hw, "note: please ensure Rx/Tx burst mode is simple or common when enabling PTP!");
rte_spinlock_lock(&hw->lock); ret = hns3_timesync_configure(hns, true); diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c index 73f0ab6bc8..153866cf03 100644 --- a/drivers/net/hns3/hns3_rxtx_vec.c +++ b/drivers/net/hns3/hns3_rxtx_vec.c @@ -17,15 +17,18 @@ int hns3_tx_check_vec_support(struct rte_eth_dev *dev) { struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode; - struct hns3_adapter *hns = dev->data->dev_private; - struct hns3_pf *pf = &hns->pf; + struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
/* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */ if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) return -ENOTSUP;
- /* Vec is not supported when PTP enabled */ - if (pf->ptp_enable) + /* + * PTP function requires the cooperation of Rx and Tx. + * Tx vector isn't supported if RTE_ETH_RX_OFFLOAD_TIMESTAMP is set + * in Rx offloads. + */ + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) return -ENOTSUP;
return 0; @@ -233,9 +236,8 @@ hns3_rx_check_vec_support(struct rte_eth_dev *dev) struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf; struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO | - RTE_ETH_RX_OFFLOAD_VLAN; - struct hns3_adapter *hns = dev->data->dev_private; - struct hns3_pf *pf = &hns->pf; + RTE_ETH_RX_OFFLOAD_VLAN | + RTE_ETH_RX_OFFLOAD_TIMESTAMP;
if (dev->data->scattered_rx) return -ENOTSUP; @@ -249,9 +251,5 @@ hns3_rx_check_vec_support(struct rte_eth_dev *dev) if (hns3_rxq_iterate(dev, hns3_rxq_vec_check, NULL) != 0) return -ENOTSUP;
- /* Vec is not supported when PTP enabled */ - if (pf->ptp_enable) - return -ENOTSUP; - return 0; }
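The new check is a plain bitmask test against the configured Rx offloads rather than driver state that is set later. A standalone sketch of the same test (the flag values here are illustrative; the real bits live in rte_ethdev.h):

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative offload bits; DPDK's actual values differ. */
#define OFFLOAD_TCP_LRO		(UINT64_C(1) << 0)
#define OFFLOAD_VLAN		(UINT64_C(1) << 1)
#define OFFLOAD_TIMESTAMP	(UINT64_C(1) << 2)

/* Vector paths are refused whenever any unsupported offload, now
 * including the PTP timestamp offload, is configured. Checking the
 * configuration works at setup time, before PTP is ever enabled. */
static bool vec_supported(uint64_t rx_offloads)
{
	const uint64_t unsupported = OFFLOAD_TCP_LRO | OFFLOAD_VLAN |
				     OFFLOAD_TIMESTAMP;

	return (rx_offloads & unsupported) == 0;
}
```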
From: Chengwen Feng fengchengwen@huawei.com
Currently, the number of Tx bytes sent is obtained by accumulating the lengths of the batched mbuf packets in the current loop cycle. Unfortunately, the loop bound uses svcntd() (which counts all lanes, regardless of whether a lane is valid), so it may run past the end of the batch and dereference an invalid mbuf.

Because the SVE xmit algorithm applies only to single-mbuf packets, each mbuf's data_len equals its pkt_len, so this patch fixes the problem by using svaddv_u64(svbool_t pg, svuint64_t data_len), which adds only the valid lanes.
Fixes: fdcd6a3e0246 ("net/hns3: add bytes stats") Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rxtx_vec_sve.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index be1fdbcdf0..b0dfb052bb 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -435,9 +435,8 @@ hns3_tx_fill_hw_ring_sve(struct hns3_tx_queue *txq,
 			   offsets, svdup_n_u64(valid_bit));

 		/* Increment bytes counter */
-		uint32_t idx;
-		for (idx = 0; idx < svcntd(); idx++)
-			txq->basic_stats.bytes += pkts[idx]->pkt_len;
+		txq->basic_stats.bytes +=
+			(svaddv_u64(pg, data_len) >> HNS3_UINT16_BIT);

 		/* update index for next loop */
 		i += svcntd();
From: Chengwen Feng fengchengwen@huawei.com
If the txq's next-to-use plus nb_pkts equals the txq's nb_tx_desc when using the SVE xmit algorithm, next-to-use will equal nb_tx_desc after the xmit. This does not cause Tx exceptions, but it may affect other ops that depend on this field, such as tx_descriptor_status.
Fixes: f0c243a6cb6f ("net/hns3: support SVE Tx") Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rxtx_vec_sve.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index b0dfb052bb..f09a81dbd5 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -464,14 +464,16 @@ hns3_xmit_fixed_burst_vec_sve(void *__restrict tx_queue,
 		return 0;
 	}

-	if (txq->next_to_use + nb_pkts > txq->nb_tx_desc) {
+	if (txq->next_to_use + nb_pkts >= txq->nb_tx_desc) {
 		nb_tx = txq->nb_tx_desc - txq->next_to_use;
 		hns3_tx_fill_hw_ring_sve(txq, tx_pkts, nb_tx);
 		txq->next_to_use = 0;
 	}

-	hns3_tx_fill_hw_ring_sve(txq, tx_pkts + nb_tx, nb_pkts - nb_tx);
-	txq->next_to_use += nb_pkts - nb_tx;
+	if (nb_pkts > nb_tx) {
+		hns3_tx_fill_hw_ring_sve(txq, tx_pkts + nb_tx, nb_pkts - nb_tx);
+		txq->next_to_use += nb_pkts - nb_tx;
+	}

 	txq->tx_bd_ready -= nb_pkts;
 	hns3_write_txq_tail_reg(txq, nb_pkts);
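The off-by-one can be checked with a scalar model of the ring index update: with the old '>' comparison, next_to_use could land exactly on nb_tx_desc, while the '>=' form always wraps it back into [0, nb_tx_desc). A sketch (names hypothetical):

```c
#include <stdint.h>

/* Advance the ring's next-to-use index by nb_pkts, wrapping when the
 * end of the descriptor ring is reached (the ">=" form of the fix). */
static uint16_t advance_ntu(uint16_t ntu, uint16_t nb_pkts,
			    uint16_t nb_desc)
{
	if (ntu + nb_pkts >= nb_desc) {
		uint16_t first = nb_desc - ntu;	/* fill up to ring end */

		ntu = 0;
		nb_pkts -= first;
	}
	return ntu + nb_pkts;	/* always strictly below nb_desc */
}
```

With the old '>' check, advance_ntu(60, 4, 64) would return 64 rather than 0, leaving an index that tx_descriptor_status cannot interpret.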
From: Chengwen Feng fengchengwen@huawei.com
If the txq's next-to-use plus nb_pkts equals the txq's nb_tx_desc when using the simple xmit algorithm, next-to-use will equal nb_tx_desc after the xmit. This does not cause Tx exceptions, but it may affect other ops that depend on this field, such as tx_descriptor_status.
Fixes: 7ef933908f04 ("net/hns3: add simple Tx path") Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rxtx.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 840ca384ce..93cc70477d 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4129,14 +4129,16 @@ hns3_xmit_pkts_simple(void *tx_queue,
 	}

 	txq->tx_bd_ready -= nb_pkts;
-	if (txq->next_to_use + nb_pkts > txq->nb_tx_desc) {
+	if (txq->next_to_use + nb_pkts >= txq->nb_tx_desc) {
 		nb_tx = txq->nb_tx_desc - txq->next_to_use;
 		hns3_tx_fill_hw_ring(txq, tx_pkts, nb_tx);
 		txq->next_to_use = 0;
 	}

-	hns3_tx_fill_hw_ring(txq, tx_pkts + nb_tx, nb_pkts - nb_tx);
-	txq->next_to_use += nb_pkts - nb_tx;
+	if (nb_pkts > nb_tx) {
+		hns3_tx_fill_hw_ring(txq, tx_pkts + nb_tx, nb_pkts - nb_tx);
+		txq->next_to_use += nb_pkts - nb_tx;
+	}

 	hns3_write_txq_tail_reg(txq, nb_pkts);
From: Chengwen Feng fengchengwen@huawei.com
Optimize the SVE xmit algorithm; this gives about a 1% performance gain in 64B MAC forwarding.
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com --- drivers/net/hns3/hns3_rxtx_vec_sve.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c index f09a81dbd5..6f23ba674d 100644 --- a/drivers/net/hns3/hns3_rxtx_vec_sve.c +++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c @@ -389,10 +389,12 @@ hns3_tx_fill_hw_ring_sve(struct hns3_tx_queue *txq, HNS3_UINT32_BIT; svuint64_t base_addr, buf_iova, data_off, data_len, addr; svuint64_t offsets = svindex_u64(0, BD_SIZE); - uint32_t i = 0; - svbool_t pg = svwhilelt_b64_u32(i, nb_pkts); + uint32_t cnt = svcntd(); + svbool_t pg; + uint32_t i;
- do { + for (i = 0; i < nb_pkts; /* i is updated in the inner loop */) { + pg = svwhilelt_b64_u32(i, nb_pkts); base_addr = svld1_u64(pg, (uint64_t *)pkts); /* calc mbuf's field buf_iova address */ buf_iova = svadd_n_u64_z(pg, base_addr, @@ -439,12 +441,11 @@ hns3_tx_fill_hw_ring_sve(struct hns3_tx_queue *txq, (svaddv_u64(pg, data_len) >> HNS3_UINT16_BIT);
/* update index for next loop */ - i += svcntd(); - pkts += svcntd(); - txdp += svcntd(); - tx_entry += svcntd(); - pg = svwhilelt_b64_u32(i, nb_pkts); - } while (svptest_any(svptrue_b64(), pg)); + i += cnt; + pkts += cnt; + txdp += cnt; + tx_entry += cnt; + } }
static uint16_t
From: Chengwen Feng fengchengwen@huawei.com
Currently, to prevent reset interrupts from being missed and to identify them quickly, the following logic is built into the FW (firmware) command interface hns3_cmd_send: if an unprocessed interrupt exists (detected by checking the reset registers), the related reset task is scheduled.
The secondary process may invoke the hns3_cmd_send interface (e.g. when proc-info queries some stats). Unfortunately, the secondary process does not support reset processing, and a segmentation fault may occur if it schedules the reset task.
Fix it by limiting the checking and scheduling of resets to the primary process only.
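The shape of the guard added by the hunks below can be sketched with a toy model. Everything here is a stand-in: `PROC_PRIMARY`/`PROC_SECONDARY` model the `rte_eal_process_type()` values and `check_event_cause()` models `hns3_check_event_cause()`.

```c
#include <assert.h>

enum proc_type { PROC_PRIMARY, PROC_SECONDARY };

static int reset_tasks_scheduled;

/* In the real driver this may schedule a reset task as a side effect,
 * which only the primary process is able to run. */
static void check_event_cause(void)
{
    reset_tasks_scheduled++;
}

static void is_reset_pending(enum proc_type type)
{
    /* The fix: secondary processes skip the check entirely, so they can
     * never schedule a reset task they cannot process. */
    if (type == PROC_PRIMARY)
        check_event_cause();
    /* ...then inspect the pending-reset state as before... */
}
```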
Fixes: 2790c6464725 ("net/hns3: support device reset") Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev.c | 10 +++++++++- drivers/net/hns3/hns3_ethdev_vf.c | 11 +++++++++-- 2 files changed, 18 insertions(+), 3 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 401736f5a6..24ee9df332 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -5602,7 +5602,15 @@ hns3_is_reset_pending(struct hns3_adapter *hns) struct hns3_hw *hw = &hns->hw; enum hns3_reset_level reset;
- hns3_check_event_cause(hns, NULL); + /* + * Check the registers to confirm whether there is reset pending. + * Note: This check may lead to schedule reset task, but only primary + * process can process the reset event. Therefore, limit the + * checking under only primary process. + */ + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + hns3_check_event_cause(hns, NULL); + reset = hns3_get_reset_level(hns, &hw->reset.pending); if (reset != HNS3_NONE_RESET && hw->reset.level != HNS3_NONE_RESET && hw->reset.level < reset) { diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 0dea63e8be..db2f15abe2 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -1864,8 +1864,15 @@ hns3vf_is_reset_pending(struct hns3_adapter *hns) if (hw->reset.level == HNS3_VF_FULL_RESET) return false;
- /* Check the registers to confirm whether there is reset pending */ - hns3vf_check_event_cause(hns, NULL); + /* + * Check the registers to confirm whether there is reset pending. + * Note: This check may lead to schedule reset task, but only primary + * process can process the reset event. Therefore, limit the + * checking under only primary process. + */ + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + hns3vf_check_event_cause(hns, NULL); + reset = hns3vf_get_reset_level(hw, &hw->reset.pending); if (hw->reset.level != HNS3_NONE_RESET && reset != HNS3_NONE_RESET && hw->reset.level < reset) {
The '__rte_unused' tag on the input parameter of 'hns3_mac_stats_reset' is redundant. This patch removes the tag. In addition, this function is intended to clear MAC statistics, so using 'struct hns3_hw' as the input parameter is better than 'struct rte_eth_dev', and it also makes the function easier to call.
Fixes: 8839c5e202f3 ("net/hns3: support device stats") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_stats.c | 22 +++++----------------- 1 file changed, 5 insertions(+), 17 deletions(-)
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c index d56d3ec174..c2af3bd231 100644 --- a/drivers/net/hns3/hns3_stats.c +++ b/drivers/net/hns3/hns3_stats.c @@ -406,15 +406,6 @@ hns3_query_mac_stats_reg_num(struct hns3_hw *hw) return 0; }
-static int -hns3_query_update_mac_stats(struct rte_eth_dev *dev) -{ - struct hns3_adapter *hns = dev->data->dev_private; - struct hns3_hw *hw = &hns->hw; - - return hns3_update_mac_stats(hw); -} - static int hns3_update_port_rpu_drop_stats(struct hns3_hw *hw) { @@ -763,14 +754,13 @@ hns3_stats_reset(struct rte_eth_dev *eth_dev) }
static int -hns3_mac_stats_reset(__rte_unused struct rte_eth_dev *dev) +hns3_mac_stats_reset(struct hns3_hw *hw) { - struct hns3_adapter *hns = dev->data->dev_private; - struct hns3_hw *hw = &hns->hw; struct hns3_mac_stats *mac_stats = &hw->mac_stats; int ret;
- ret = hns3_query_update_mac_stats(dev); + /* Clear hardware MAC statistics by reading it. */ + ret = hns3_update_mac_stats(hw); if (ret) { hns3_err(hw, "Clear Mac stats fail : %d", ret); return ret; @@ -1063,8 +1053,7 @@ hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, hns3_tqp_basic_stats_get(dev, xstats, &count);
if (!hns->is_vf) { - /* Update Mac stats */ - ret = hns3_query_update_mac_stats(dev); + ret = hns3_update_mac_stats(hw); if (ret < 0) { hns3_err(hw, "Update Mac stats fail : %d", ret); rte_spinlock_unlock(&hw->stats_lock); @@ -1482,8 +1471,7 @@ hns3_dev_xstats_reset(struct rte_eth_dev *dev) if (hns->is_vf) goto out;
- /* HW registers are cleared on read */ - ret = hns3_mac_stats_reset(dev); + ret = hns3_mac_stats_reset(hw);
out: rte_spinlock_unlock(&hw->stats_lock);
If the hns3 driver exits abnormally while packets are being sent and received, the hardware statistics are not cleared when the driver is reloaded. The hardware MAC statistics therefore need to be cleared during hns3 driver initialization.
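The reason a plain read suffices (see the comment "Clear hardware MAC statistics by reading it" in the earlier hunk) is that the counters are read-to-clear. A toy model, with `hw_counter` and the function names as illustrative stand-ins for the real MAC statistic registers:

```c
#include <assert.h>
#include <stdint.h>

/* Value left over from a previous, abnormally terminated driver run. */
static uint64_t hw_counter = 12345;

/* Read-to-clear semantics: querying the counter returns its value and
 * zeroes it in hardware. */
static uint64_t read_clear(void)
{
    uint64_t v = hw_counter;
    hw_counter = 0;
    return v;
}

static void stats_init(void)
{
    /* Discard whatever the previous driver instance accumulated, so the
     * new instance starts counting from zero. */
    (void)read_clear();
}
```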
Fixes: 8839c5e202f3 ("net/hns3: support device stats") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_stats.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c index c2af3bd231..552ae9d30c 100644 --- a/drivers/net/hns3/hns3_stats.c +++ b/drivers/net/hns3/hns3_stats.c @@ -1528,6 +1528,7 @@ hns3_tqp_stats_clear(struct hns3_hw *hw) int hns3_stats_init(struct hns3_hw *hw) { + struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw); int ret;
rte_spinlock_init(&hw->stats_lock); @@ -1538,6 +1539,9 @@ hns3_stats_init(struct hns3_hw *hw) return ret; }
+ if (!hns->is_vf) + hns3_mac_stats_reset(hw); + return hns3_tqp_stats_init(hw); }
From: Chengwen Feng fengchengwen@huawei.com
Tx performance deteriorates with larger packet sizes and larger bursts. Optimizing these scenarios may take a long time, so this commit reverts commit 0b77e8f3d364 ("net/hns3: optimize Tx performance").
Fixes: 0b77e8f3d364 ("net/hns3: optimize Tx performance") Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rxtx.c | 115 ++++++++++++++++++----------------- 1 file changed, 60 insertions(+), 55 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 93cc70477d..21c3ef72b1 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -3075,51 +3075,40 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc, return 0; }
-static int +static void hns3_tx_free_useless_buffer(struct hns3_tx_queue *txq) { uint16_t tx_next_clean = txq->next_to_clean; - uint16_t tx_next_use = txq->next_to_use; - struct hns3_entry *tx_entry = &txq->sw_ring[tx_next_clean]; + uint16_t tx_next_use = txq->next_to_use; + uint16_t tx_bd_ready = txq->tx_bd_ready; + uint16_t tx_bd_max = txq->nb_tx_desc; + struct hns3_entry *tx_bak_pkt = &txq->sw_ring[tx_next_clean]; struct hns3_desc *desc = &txq->tx_ring[tx_next_clean]; - uint16_t i; - - if (tx_next_use >= tx_next_clean && - tx_next_use < tx_next_clean + txq->tx_rs_thresh) - return -1; + struct rte_mbuf *mbuf;
- /* - * All mbufs can be released only when the VLD bits of all - * descriptors in a batch are cleared. - */ - for (i = 0; i < txq->tx_rs_thresh; i++) { - if (desc[i].tx.tp_fe_sc_vld_ra_ri & - rte_le_to_cpu_16(BIT(HNS3_TXD_VLD_B))) - return -1; - } + while ((!(desc->tx.tp_fe_sc_vld_ra_ri & + rte_cpu_to_le_16(BIT(HNS3_TXD_VLD_B)))) && + tx_next_use != tx_next_clean) { + mbuf = tx_bak_pkt->mbuf; + if (mbuf) { + rte_pktmbuf_free_seg(mbuf); + tx_bak_pkt->mbuf = NULL; + }
- for (i = 0; i < txq->tx_rs_thresh; i++) { - rte_pktmbuf_free_seg(tx_entry[i].mbuf); - tx_entry[i].mbuf = NULL; + desc++; + tx_bak_pkt++; + tx_next_clean++; + tx_bd_ready++; + + if (tx_next_clean >= tx_bd_max) { + tx_next_clean = 0; + desc = txq->tx_ring; + tx_bak_pkt = txq->sw_ring; + } }
- /* Update numbers of available descriptor due to buffer freed */ - txq->tx_bd_ready += txq->tx_rs_thresh; - txq->next_to_clean += txq->tx_rs_thresh; - if (txq->next_to_clean >= txq->nb_tx_desc) - txq->next_to_clean = 0; - - return 0; -} - -static inline int -hns3_tx_free_required_buffer(struct hns3_tx_queue *txq, uint16_t required_bds) -{ - while (required_bds > txq->tx_bd_ready) { - if (hns3_tx_free_useless_buffer(txq) != 0) - return -1; - } - return 0; + txq->next_to_clean = tx_next_clean; + txq->tx_bd_ready = tx_bd_ready; }
int @@ -4162,8 +4151,7 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) uint16_t nb_tx; uint16_t i;
- if (txq->tx_bd_ready < txq->tx_free_thresh) - (void)hns3_tx_free_useless_buffer(txq); + hns3_tx_free_useless_buffer(txq);
tx_next_use = txq->next_to_use; tx_bd_max = txq->nb_tx_desc; @@ -4178,14 +4166,10 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) nb_buf = tx_pkt->nb_segs;
if (nb_buf > txq->tx_bd_ready) { - /* Try to release the required MBUF, but avoid releasing - * all MBUFs, otherwise, the MBUFs will be released for - * a long time and may cause jitter. - */ - if (hns3_tx_free_required_buffer(txq, nb_buf) != 0) { - txq->dfx_stats.queue_full_cnt++; - goto end_of_tx; - } + txq->dfx_stats.queue_full_cnt++; + if (nb_tx == 0) + return 0; + goto end_of_tx; }
/* @@ -4609,22 +4593,43 @@ hns3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) static int hns3_tx_done_cleanup_full(struct hns3_tx_queue *txq, uint32_t free_cnt) { - uint16_t round_cnt; + uint16_t next_to_clean = txq->next_to_clean; + uint16_t next_to_use = txq->next_to_use; + uint16_t tx_bd_ready = txq->tx_bd_ready; + struct hns3_entry *tx_pkt = &txq->sw_ring[next_to_clean]; + struct hns3_desc *desc = &txq->tx_ring[next_to_clean]; uint32_t idx;
if (free_cnt == 0 || free_cnt > txq->nb_tx_desc) free_cnt = txq->nb_tx_desc;
- if (txq->tx_rs_thresh == 0) - return 0; - - round_cnt = rounddown(free_cnt, txq->tx_rs_thresh); - for (idx = 0; idx < round_cnt; idx += txq->tx_rs_thresh) { - if (hns3_tx_free_useless_buffer(txq) != 0) + for (idx = 0; idx < free_cnt; idx++) { + if (next_to_clean == next_to_use) + break; + if (desc->tx.tp_fe_sc_vld_ra_ri & + rte_cpu_to_le_16(BIT(HNS3_TXD_VLD_B))) break; + if (tx_pkt->mbuf != NULL) { + rte_pktmbuf_free_seg(tx_pkt->mbuf); + tx_pkt->mbuf = NULL; + } + next_to_clean++; + tx_bd_ready++; + tx_pkt++; + desc++; + if (next_to_clean == txq->nb_tx_desc) { + tx_pkt = txq->sw_ring; + desc = txq->tx_ring; + next_to_clean = 0; + } + } + + if (idx > 0) { + txq->next_to_clean = next_to_clean; + txq->tx_bd_ready = tx_bd_ready; }
- return idx; + return (int)idx; }
int
The 'hns3_restore_rss_filter' function is used to restore RSS rules. But this function calls 'hns3_config_rss_filter', which marks the last rule in the flow RSS list invalid. As a result, the flow RSS list is left with no valid rule.
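The fix moves the "overlay" invalidation out of the shared config helper and into rule creation, so restore no longer invalidates what it just restored. A simplified model (an array stands in for `hw->flow_rss_list`; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

struct rss_rule { bool valid; };

/* Only the creation path applies overlay semantics: older rules become
 * invalid and the new rule is appended as the only valid one. The
 * restore path, which reuses the config helper, never does this. */
static void create_rule(struct rss_rule *rules, int *cnt)
{
    for (int i = 0; i < *cnt; i++)
        rules[i].valid = false;
    rules[*cnt].valid = true;
    (*cnt)++;
}
```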
Fixes: ec674cb742e5 ("net/hns3: fix flushing RSS rule") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_flow.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index 5e0a9bc93f..8b9bfe4880 100644 --- a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -1539,7 +1539,6 @@ hns3_config_rss_filter(struct rte_eth_dev *dev, const struct hns3_rss_conf *conf, bool add) { struct hns3_adapter *hns = dev->data->dev_private; - struct hns3_rss_conf_ele *rss_filter_ptr; struct hns3_hw *hw = &hns->hw; struct hns3_rss_conf *rss_info; uint64_t flow_types; @@ -1618,13 +1617,6 @@ hns3_config_rss_filter(struct rte_eth_dev *dev, goto rss_config_err; }
- /* - * When create a new RSS rule, the old rule will be overlaid and set - * invalid. - */ - TAILQ_FOREACH(rss_filter_ptr, &hw->flow_rss_list, entries) - rss_filter_ptr->filter_info.valid = false; - rss_config_err: rte_spinlock_unlock(&hw->lock);
@@ -1749,6 +1741,7 @@ hns3_flow_create_rss_rule(struct rte_eth_dev *dev, { struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct hns3_rss_conf_ele *rss_filter_ptr; + struct hns3_rss_conf_ele *filter_ptr; const struct hns3_rss_conf *rss_conf; int ret;
@@ -1773,6 +1766,14 @@ hns3_flow_create_rss_rule(struct rte_eth_dev *dev,
hns3_rss_conf_copy(&rss_filter_ptr->filter_info, &rss_conf->conf); rss_filter_ptr->filter_info.valid = true; + + /* + * When create a new RSS rule, the old rule will be overlaid and set + * invalid. + */ + TAILQ_FOREACH(filter_ptr, &hw->flow_rss_list, entries) + filter_ptr->filter_info.valid = false; + TAILQ_INSERT_TAIL(&hw->flow_rss_list, rss_filter_ptr, entries); flow->rule = rss_filter_ptr; flow->filter_type = RTE_ETH_FILTER_HASH;
Currently, the driver sets the RSS hash function to 'RTE_ETH_HASH_FUNCTION_MAX' when the user flushes all rules, in order to judge whether the driver needs to restore RSS rules. In fact, all rules are saved in the flow RSS list, so there is no need to set the RSS function to this sentinel value; the list itself can be used for restoration. Modifying the RSS function in this way may also introduce new problems.
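The list-based restore that replaces the sentinel check can be sketched as follows. This is a model, not driver code: `apply_rule()` stands in for `hns3_config_rss_filter(dev, &filter->filter_info, true)` and the array stands in for the TAILQ.

```c
#include <assert.h>

struct rss_rule { int valid; };

static int applied;

/* Stand-in for reprogramming one saved rule into hardware. */
static int apply_rule(const struct rss_rule *r)
{
    (void)r;
    applied++;
    return 0;
}

/* Walk the saved rules and reapply each valid one; stale (overlaid)
 * rules are skipped rather than consulting a sentinel hash function. */
static int restore_rss_rules(const struct rss_rule *rules, int cnt)
{
    for (int i = 0; i < cnt; i++) {
        if (!rules[i].valid)
            continue;
        int ret = apply_rule(&rules[i]);
        if (ret != 0)
            return ret;
    }
    return 0;
}
```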
Fixes: eb158fc756a5 ("net/hns3: fix config when creating RSS rule after flush") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_flow.c | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index 8b9bfe4880..82ead96854 100644 --- a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -1587,8 +1587,6 @@ hns3_config_rss_filter(struct rte_eth_dev *dev, rss_info->conf.queue_num = 0; }
- /* set RSS func invalid after flushed */ - rss_info->conf.func = RTE_ETH_HASH_FUNCTION_MAX; return 0; }
@@ -1659,13 +1657,23 @@ int hns3_restore_rss_filter(struct rte_eth_dev *dev) { struct hns3_adapter *hns = dev->data->dev_private; + struct hns3_rss_conf_ele *filter; struct hns3_hw *hw = &hns->hw; + int ret = 0;
- /* When user flush all rules, it doesn't need to restore RSS rule */ - if (hw->rss_info.conf.func == RTE_ETH_HASH_FUNCTION_MAX) - return 0; + TAILQ_FOREACH(filter, &hw->flow_rss_list, entries) { + if (!filter->filter_info.valid) + continue;
- return hns3_config_rss_filter(dev, &hw->rss_info, true); + ret = hns3_config_rss_filter(dev, &filter->filter_info, true); + if (ret != 0) { + hns3_err(hw, "restore RSS filter failed, ret=%d", ret); + goto out; + } + } + +out: + return ret; }
static int
RSS flow rules are saved in the RSS filter linked list. The list is modified through the rte_flow API and is used to restore RSS rules during the reset process. This patch therefore uses 'hw->flows_lock' to protect the configuration and recovery of RSS rules.
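The locking rule is: every path that reads or writes the rule list takes the same mutex. A minimal sketch (a counter stands in for the list; `flows_lock` models `hw->flows_lock`):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t flows_lock = PTHREAD_MUTEX_INITIALIZER;
static int rule_count;

/* Writer: the rte_flow create/destroy path mutates the list under the
 * lock. */
static void add_rule(void)
{
    pthread_mutex_lock(&flows_lock);
    rule_count++;
    pthread_mutex_unlock(&flows_lock);
}

/* Reader: the reset-restore path walks the list while holding the same
 * lock, so it never sees a half-updated list. */
static int restore_rules_locked(void)
{
    pthread_mutex_lock(&flows_lock);
    int n = rule_count;
    pthread_mutex_unlock(&flows_lock);
    return n;
}
```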
Fixes: c37ca66f2b27 ("net/hns3: support RSS") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_flow.c | 16 ++++++---------- 1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index 82ead96854..301a4a56b3 100644 --- a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -1596,27 +1596,20 @@ hns3_config_rss_filter(struct rte_eth_dev *dev, hns3_warn(hw, "Config queue numbers %u are beyond the scope of truncated", rss_flow_conf.queue_num); hns3_info(hw, "Max of contiguous %u PF queues are configured", num); - - rte_spinlock_lock(&hw->lock); if (num) { ret = hns3_update_indir_table(dev, &rss_flow_conf, num); if (ret) - goto rss_config_err; + return ret; }
/* Set hash algorithm and flow types by the user's config */ ret = hns3_hw_rss_hash_set(hw, &rss_flow_conf); if (ret) - goto rss_config_err; + return ret;
ret = hns3_rss_conf_copy(rss_info, &rss_flow_conf); - if (ret) { + if (ret) hns3_err(hw, "RSS config init fail(%d)", ret); - goto rss_config_err; - } - -rss_config_err: - rte_spinlock_unlock(&hw->lock);
return ret; } @@ -1661,6 +1654,7 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev) struct hns3_hw *hw = &hns->hw; int ret = 0;
+ pthread_mutex_lock(&hw->flows_lock); TAILQ_FOREACH(filter, &hw->flow_rss_list, entries) { if (!filter->filter_info.valid) continue; @@ -1673,6 +1667,8 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev) }
out: + pthread_mutex_unlock(&hw->flows_lock); + return ret; }
After the reset process, the types of RSS flow rules cannot be restored when the driver is loaded without the RTE_ETH_MQ_RX_RSS_FLAG flag. This is because the restoration of RSS flow rules is done in 'hns3_config_rss()'. But this function is also used to configure and restore the RSS configuration from ethdev ops, and it does not configure RSS types if 'rxmode.mq_mode' lacks the flag. As a result, RSS types configured through the rte_flow API cannot be restored after a reset in this case. In fact, all RSS rules are saved in a global linked list.
Use the linked list to restore RSS flow rules.
Fixes: 920be799dbc3 ("net/hns3: fix RSS indirection table configuration") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev.c | 11 ++--------- drivers/net/hns3/hns3_ethdev_vf.c | 11 ++--------- drivers/net/hns3/hns3_flow.c | 8 +++++++- drivers/net/hns3/hns3_flow.h | 1 + drivers/net/hns3/hns3_rss.h | 1 - 5 files changed, 12 insertions(+), 20 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 24ee9df332..fc3fc76a40 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -5006,6 +5006,7 @@ static int hns3_do_start(struct hns3_adapter *hns, bool reset_queue) { struct hns3_hw *hw = &hns->hw; + struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id]; bool link_en; int ret;
@@ -5042,7 +5043,7 @@ hns3_do_start(struct hns3_adapter *hns, bool reset_queue) if (ret) goto err_set_link_speed;
- return 0; + return hns3_restore_filter(dev);
err_set_link_speed: (void)hns3_cfg_mac_mode(hw, false); @@ -5059,12 +5060,6 @@ hns3_do_start(struct hns3_adapter *hns, bool reset_queue) return ret; }
-static void -hns3_restore_filter(struct rte_eth_dev *dev) -{ - hns3_restore_rss_filter(dev); -} - static int hns3_dev_start(struct rte_eth_dev *dev) { @@ -5121,8 +5116,6 @@ hns3_dev_start(struct rte_eth_dev *dev) hns3_set_rxtx_function(dev); hns3_mp_req_start_rxtx(dev);
- hns3_restore_filter(dev); - /* Enable interrupt of all rx queues before enabling queues */ hns3_dev_all_rx_queue_intr_enable(hw, true);
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index db2f15abe2..13f1cba0e6 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -1727,6 +1727,7 @@ static int hns3vf_do_start(struct hns3_adapter *hns, bool reset_queue) { struct hns3_hw *hw = &hns->hw; + struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id]; uint16_t nb_rx_q = hw->data->nb_rx_queues; uint16_t nb_tx_q = hw->data->nb_tx_queues; int ret; @@ -1741,13 +1742,7 @@ hns3vf_do_start(struct hns3_adapter *hns, bool reset_queue) if (ret) hns3_err(hw, "failed to init queues, ret = %d.", ret);
- return ret; -} - -static void -hns3vf_restore_filter(struct rte_eth_dev *dev) -{ - hns3_restore_rss_filter(dev); + return hns3_restore_filter(dev); }
static int @@ -1799,8 +1794,6 @@ hns3vf_dev_start(struct rte_eth_dev *dev) hns3_set_rxtx_function(dev); hns3_mp_req_start_rxtx(dev);
- hns3vf_restore_filter(dev); - /* Enable interrupt of all rx queues before enabling queues */ hns3_dev_all_rx_queue_intr_enable(hw, true); hns3_start_tqps(hw); diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index 301a4a56b3..7bd2f0bf7a 100644 --- a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -1646,7 +1646,7 @@ hns3_clear_rss_filter(struct rte_eth_dev *dev) return ret; }
-int +static int hns3_restore_rss_filter(struct rte_eth_dev *dev) { struct hns3_adapter *hns = dev->data->dev_private; @@ -1672,6 +1672,12 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev) return ret; }
+int +hns3_restore_filter(struct rte_eth_dev *dev) +{ + return hns3_restore_rss_filter(dev); +} + static int hns3_flow_parse_rss(struct rte_eth_dev *dev, const struct hns3_rss_conf *conf, bool add) diff --git a/drivers/net/hns3/hns3_flow.h b/drivers/net/hns3/hns3_flow.h index 1ab3f9f5c6..0f5de129a3 100644 --- a/drivers/net/hns3/hns3_flow.h +++ b/drivers/net/hns3/hns3_flow.h @@ -49,5 +49,6 @@ int hns3_dev_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops); void hns3_flow_init(struct rte_eth_dev *dev); void hns3_flow_uninit(struct rte_eth_dev *dev); +int hns3_restore_filter(struct rte_eth_dev *dev);
#endif /* _HNS3_FLOW_H_ */ diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h index 5b90d3a628..78c9eff827 100644 --- a/drivers/net/hns3/hns3_rss.h +++ b/drivers/net/hns3/hns3_rss.h @@ -110,6 +110,5 @@ int hns3_config_rss(struct hns3_adapter *hns); void hns3_rss_uninit(struct hns3_adapter *hns); int hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw, uint64_t rss_hf); int hns3_rss_set_algo_key(struct hns3_hw *hw, const uint8_t *key); -int hns3_restore_rss_filter(struct rte_eth_dev *dev);
#endif /* _HNS3_RSS_H_ */
'hns3_restore_filter' is used to restore rte_flow rules during the reset process. This patch moves the recovery of flow director rules into this function to improve code maintainability.
Fixes: fcba820d9b9e ("net/hns3: support flow director") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev.c | 4 ---- drivers/net/hns3/hns3_fdir.c | 3 +++ drivers/net/hns3/hns3_flow.c | 7 +++++++ 3 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index fc3fc76a40..01c13f8d70 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -5907,10 +5907,6 @@ hns3_restore_conf(struct hns3_adapter *hns) if (ret) goto err_promisc;
- ret = hns3_restore_all_fdir_filter(hns); - if (ret) - goto err_promisc; - ret = hns3_restore_ptp(hns); if (ret) goto err_promisc; diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c index 30e5e66772..48a91fb517 100644 --- a/drivers/net/hns3/hns3_fdir.c +++ b/drivers/net/hns3/hns3_fdir.c @@ -1068,6 +1068,9 @@ int hns3_restore_all_fdir_filter(struct hns3_adapter *hns) bool err = false; int ret;
+ if (hns->is_vf) + return 0; + /* * This API is called in the reset recovery process, the parent function * must hold hw->lock. diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index 7bd2f0bf7a..17c4274123 100644 --- a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -1675,6 +1675,13 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev) int hns3_restore_filter(struct rte_eth_dev *dev) { + struct hns3_adapter *hns = dev->data->dev_private; + int ret; + + ret = hns3_restore_all_fdir_filter(hns); + if (ret != 0) + return ret; + return hns3_restore_rss_filter(dev); }
'hns3_restore_filter' is an internal driver interface. Currently it takes 'struct rte_eth_dev *dev' as its input parameter, which is inconvenient to call inside the driver because the caller has to obtain the device address from the global 'rte_eth_devices[]' array. Fix the input parameter of this function.
Fixes: 920be799dbc3 ("net/hns3: fix RSS indirection table configuration") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev.c | 3 +-- drivers/net/hns3/hns3_ethdev_vf.c | 3 +-- drivers/net/hns3/hns3_flow.c | 30 ++++++++++++------------------ drivers/net/hns3/hns3_flow.h | 2 +- 4 files changed, 15 insertions(+), 23 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 01c13f8d70..c59543ef5b 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -5006,7 +5006,6 @@ static int hns3_do_start(struct hns3_adapter *hns, bool reset_queue) { struct hns3_hw *hw = &hns->hw; - struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id]; bool link_en; int ret;
@@ -5043,7 +5042,7 @@ hns3_do_start(struct hns3_adapter *hns, bool reset_queue) if (ret) goto err_set_link_speed;
- return hns3_restore_filter(dev); + return hns3_restore_filter(hns);
err_set_link_speed: (void)hns3_cfg_mac_mode(hw, false); diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 13f1cba0e6..72d60191ab 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -1727,7 +1727,6 @@ static int hns3vf_do_start(struct hns3_adapter *hns, bool reset_queue) { struct hns3_hw *hw = &hns->hw; - struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id]; uint16_t nb_rx_q = hw->data->nb_rx_queues; uint16_t nb_tx_q = hw->data->nb_tx_queues; int ret; @@ -1742,7 +1741,7 @@ hns3vf_do_start(struct hns3_adapter *hns, bool reset_queue) if (ret) hns3_err(hw, "failed to init queues, ret = %d.", ret);
- return hns3_restore_filter(dev); + return hns3_restore_filter(hns); }
static int diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index 17c4274123..2b4286d46d 100644 --- a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -1508,11 +1508,9 @@ hns3_hw_rss_hash_set(struct hns3_hw *hw, struct rte_flow_action_rss *rss_config) }
static int -hns3_update_indir_table(struct rte_eth_dev *dev, +hns3_update_indir_table(struct hns3_hw *hw, const struct rte_flow_action_rss *conf, uint16_t num) { - struct hns3_adapter *hns = dev->data->dev_private; - struct hns3_hw *hw = &hns->hw; uint16_t indir_tbl[HNS3_RSS_IND_TBL_SIZE_MAX]; uint16_t j; uint32_t i; @@ -1535,11 +1533,9 @@ hns3_update_indir_table(struct rte_eth_dev *dev, }
static int -hns3_config_rss_filter(struct rte_eth_dev *dev, +hns3_config_rss_filter(struct hns3_hw *hw, const struct hns3_rss_conf *conf, bool add) { - struct hns3_adapter *hns = dev->data->dev_private; - struct hns3_hw *hw = &hns->hw; struct hns3_rss_conf *rss_info; uint64_t flow_types; uint16_t num; @@ -1591,13 +1587,13 @@ hns3_config_rss_filter(struct rte_eth_dev *dev, }
/* Set rx queues to use */ - num = RTE_MIN(dev->data->nb_rx_queues, rss_flow_conf.queue_num); + num = RTE_MIN(hw->data->nb_rx_queues, rss_flow_conf.queue_num); if (rss_flow_conf.queue_num > num) hns3_warn(hw, "Config queue numbers %u are beyond the scope of truncated", rss_flow_conf.queue_num); hns3_info(hw, "Max of contiguous %u PF queues are configured", num); if (num) { - ret = hns3_update_indir_table(dev, &rss_flow_conf, num); + ret = hns3_update_indir_table(hw, &rss_flow_conf, num); if (ret) return ret; } @@ -1627,7 +1623,7 @@ hns3_clear_rss_filter(struct rte_eth_dev *dev) rss_filter_ptr = TAILQ_FIRST(&hw->flow_rss_list); while (rss_filter_ptr) { TAILQ_REMOVE(&hw->flow_rss_list, rss_filter_ptr, entries); - ret = hns3_config_rss_filter(dev, &rss_filter_ptr->filter_info, + ret = hns3_config_rss_filter(hw, &rss_filter_ptr->filter_info, false); if (ret) rss_rule_fail_cnt++; @@ -1647,11 +1643,9 @@ hns3_clear_rss_filter(struct rte_eth_dev *dev) }
static int -hns3_restore_rss_filter(struct rte_eth_dev *dev) +hns3_restore_rss_filter(struct hns3_hw *hw) { - struct hns3_adapter *hns = dev->data->dev_private; struct hns3_rss_conf_ele *filter; - struct hns3_hw *hw = &hns->hw; int ret = 0;
pthread_mutex_lock(&hw->flows_lock); @@ -1659,7 +1653,7 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev) if (!filter->filter_info.valid) continue;
- ret = hns3_config_rss_filter(dev, &filter->filter_info, true); + ret = hns3_config_rss_filter(hw, &filter->filter_info, true); if (ret != 0) { hns3_err(hw, "restore RSS filter failed, ret=%d", ret); goto out; @@ -1673,16 +1667,16 @@ hns3_restore_rss_filter(struct rte_eth_dev *dev) }
int -hns3_restore_filter(struct rte_eth_dev *dev) +hns3_restore_filter(struct hns3_adapter *hns) { - struct hns3_adapter *hns = dev->data->dev_private; + struct hns3_hw *hw = &hns->hw; int ret;
ret = hns3_restore_all_fdir_filter(hns); if (ret != 0) return ret;
- return hns3_restore_rss_filter(dev); + return hns3_restore_rss_filter(hw); }
static int @@ -1699,7 +1693,7 @@ hns3_flow_parse_rss(struct rte_eth_dev *dev, return -EINVAL; }
- return hns3_config_rss_filter(dev, conf, add); + return hns3_config_rss_filter(hw, conf, add); }
static int @@ -1960,7 +1954,7 @@ hns3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow, break; case RTE_ETH_FILTER_HASH: rss_filter_ptr = (struct hns3_rss_conf_ele *)flow->rule; - ret = hns3_config_rss_filter(dev, &rss_filter_ptr->filter_info, + ret = hns3_config_rss_filter(hw, &rss_filter_ptr->filter_info, false); if (ret) return rte_flow_error_set(error, EIO, diff --git a/drivers/net/hns3/hns3_flow.h b/drivers/net/hns3/hns3_flow.h index 0f5de129a3..854fbb7ff0 100644 --- a/drivers/net/hns3/hns3_flow.h +++ b/drivers/net/hns3/hns3_flow.h @@ -49,6 +49,6 @@ int hns3_dev_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops); void hns3_flow_init(struct rte_eth_dev *dev); void hns3_flow_uninit(struct rte_eth_dev *dev); -int hns3_restore_filter(struct rte_eth_dev *dev); +int hns3_restore_filter(struct hns3_adapter *hns);
#endif /* _HNS3_FLOW_H_ */
From: Dongdong Liu liudongdong3@huawei.com
meson build -Db_coverage=true ninja -C build
../drivers/net/hns3/hns3_ethdev.c:2856:22: warning: ‘cfg.umv_space’ may be used uninitialized in this function [-Wmaybe-uninitialized] 2856 | pf->wanted_umv_size = cfg.umv_space;
Fix compile warnings produced by gcc 10.3.1.
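The pattern applied below — zero the on-stack config struct before the firmware query — makes fields left unwritten on an error path deterministic, which is what silences the `-Wmaybe-uninitialized` warning. A hedged model (`struct board_cfg` and `fill_cfg` are illustrative stand-ins for `struct hns3_cfg` and the firmware query):

```c
#include <assert.h>
#include <string.h>

struct board_cfg { unsigned int umv_space; unsigned int rss_size_max; };

/* Stand-in for the firmware query: on failure it may leave the struct
 * only partially written (or untouched). */
static int fill_cfg(struct board_cfg *cfg, int fw_ok)
{
    if (!fw_ok)
        return -1;
    cfg->umv_space = 256;
    cfg->rss_size_max = 64;
    return 0;
}

static struct board_cfg get_board_cfg(int fw_ok)
{
    struct board_cfg cfg;

    memset(&cfg, 0, sizeof(cfg)); /* the fix: no indeterminate reads */
    (void)fill_cfg(&cfg, fw_ok);
    return cfg;
}
```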
Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index c59543ef5b..45b5d699b4 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -2808,6 +2808,7 @@ hns3_get_board_configuration(struct hns3_hw *hw) struct hns3_cfg cfg; int ret;
+ memset(&cfg, 0, sizeof(cfg)); ret = hns3_get_board_cfg(hw, &cfg); if (ret) { PMD_INIT_LOG(ERR, "get board config failed %d", ret);
Currently, hns3 reports the VXLAN tunnel packet type for GENEVE packets, which is misleading to users. In fact, the hns3 hardware cannot distinguish between VXLAN and GENEVE packets, so this patch reports the RTE_PTYPE_TUNNEL_GRENAT packet type instead.
Fixes: 7d6df32cf742 ("net/hns3: fix missing outer L4 UDP flag for VXLAN") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rxtx.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 21c3ef72b1..089caccd7f 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -1995,7 +1995,7 @@ hns3_dev_supported_ptypes_get(struct rte_eth_dev *dev) RTE_PTYPE_INNER_L4_TCP, RTE_PTYPE_INNER_L4_SCTP, RTE_PTYPE_INNER_L4_ICMP, - RTE_PTYPE_TUNNEL_VXLAN, + RTE_PTYPE_TUNNEL_GRENAT, RTE_PTYPE_TUNNEL_NVGRE, RTE_PTYPE_UNKNOWN }; @@ -2092,7 +2092,7 @@ hns3_init_tunnel_ptype_tbl(struct hns3_ptype_table *tbl) tbl->ol3table[5] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT;
tbl->ol4table[0] = RTE_PTYPE_UNKNOWN; - tbl->ol4table[1] = RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_VXLAN; + tbl->ol4table[1] = RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_GRENAT; tbl->ol4table[2] = RTE_PTYPE_TUNNEL_NVGRE; }
From: Jie Hai haijie1@huawei.com
Replace magic numbers with macros.
Signed-off-by: Jie Hai haijie1@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_ethdev.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 45b5d699b4..adc47d815d 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -1713,6 +1713,7 @@ hns3_add_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; uint8_t vf_id; int ret; + int idx;
/* Check if mac addr is valid */ if (!rte_is_multicast_ether_addr(mac_addr)) { @@ -1730,9 +1731,8 @@ hns3_add_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) HNS3_MC_MAC_VLAN_OPS_DESC_NUM); if (ret) { /* This mac addr do not exist, add new entry for it */ - memset(desc[0].data, 0, sizeof(desc[0].data)); - memset(desc[1].data, 0, sizeof(desc[0].data)); - memset(desc[2].data, 0, sizeof(desc[0].data)); + for (idx = 0; idx < HNS3_MC_MAC_VLAN_OPS_DESC_NUM; idx++) + memset(desc[idx].data, 0, sizeof(desc[idx].data)); }
/*
From: "Min Hu (Connor)" humin29@huawei.com
Fix code check warnings according to: - functions should have the same name as their previous declaration; - local variables should not be referenced in macro definitions; - the macro argument 'adapter' should be enclosed in parentheses.
Signed-off-by: Min Hu (Connor) humin29@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_common.c | 4 ++-- drivers/net/hns3/hns3_dump.c | 4 ++-- drivers/net/hns3/hns3_ethdev.h | 14 +++++++------- drivers/net/hns3/hns3_flow.c | 4 ++-- drivers/net/hns3/hns3_intr.c | 27 ++++++++++++--------------- drivers/net/hns3/hns3_intr.h | 4 ++-- drivers/net/hns3/hns3_regs.c | 4 ++-- drivers/net/hns3/hns3_rss.c | 2 +- drivers/net/hns3/hns3_rss.h | 2 +- drivers/net/hns3/hns3_rxtx.c | 4 ++-- drivers/net/hns3/hns3_rxtx.h | 14 +++++++++----- drivers/net/hns3/hns3_stats.h | 5 +++-- 12 files changed, 45 insertions(+), 43 deletions(-)
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c index 7a65db907e..1a1a016aa6 100644 --- a/drivers/net/hns3/hns3_common.c +++ b/drivers/net/hns3/hns3_common.c @@ -493,7 +493,7 @@ hns3_configure_all_mac_addr(struct hns3_adapter *hns, bool del) if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, addr); - hns3_err(hw, "failed to %s mac addr(%s) index:%d ret = %d.", + hns3_err(hw, "failed to %s mac addr(%s) index:%u ret = %d.", del ? "remove" : "restore", mac_str, i, ret); } } @@ -680,7 +680,7 @@ hns3_init_ring_with_vector(struct hns3_hw *hw) ret = hw->ops.bind_ring_with_vector(hw, vec, false, HNS3_RING_TYPE_TX, i); if (ret) { - PMD_INIT_LOG(ERR, "fail to unbind TX ring(%d) with vector: %u, ret=%d", + PMD_INIT_LOG(ERR, "fail to unbind TX ring(%u) with vector: %u, ret=%d", i, vec, ret); return ret; } diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c index 646e93d8e6..cf5b500be1 100644 --- a/drivers/net/hns3/hns3_dump.c +++ b/drivers/net/hns3/hns3_dump.c @@ -342,7 +342,7 @@ static void hns3_print_queue_state_perline(FILE *file, const uint32_t *queue_state, uint32_t nb_queues, uint32_t line_num) { -#define HNS3_NUM_QUEUE_PER_LINE (sizeof(*queue_state) * HNS3_UINT8_BIT) +#define HNS3_NUM_QUEUE_PER_LINE (sizeof(uint32_t) * HNS3_UINT8_BIT) uint32_t id = line_num * HNS3_NUM_QUEUE_PER_LINE; uint32_t i;
@@ -365,7 +365,7 @@ static void hns3_display_queue_enable_state(FILE *file, const uint32_t *queue_state, uint32_t nb_queues, bool is_rxq) { -#define HNS3_NUM_QUEUE_PER_LINE (sizeof(*queue_state) * HNS3_UINT8_BIT) +#define HNS3_NUM_QUEUE_PER_LINE (sizeof(uint32_t) * HNS3_UINT8_BIT) uint32_t i;
fprintf(file, "\t %s queue id | enable state bitMap\n", diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h index eb8ca1e60f..aad779e949 100644 --- a/drivers/net/hns3/hns3_ethdev.h +++ b/drivers/net/hns3/hns3_ethdev.h @@ -898,11 +898,11 @@ enum hns3_dev_cap { hns3_get_bit((hw)->capability, HNS3_DEV_SUPPORT_##_name##_B)
#define HNS3_DEV_PRIVATE_TO_HW(adapter) \ - (&((struct hns3_adapter *)adapter)->hw) + (&((struct hns3_adapter *)(adapter))->hw) #define HNS3_DEV_PRIVATE_TO_PF(adapter) \ - (&((struct hns3_adapter *)adapter)->pf) + (&((struct hns3_adapter *)(adapter))->pf) #define HNS3_DEV_PRIVATE_TO_VF(adapter) \ - (&((struct hns3_adapter *)adapter)->vf) + (&((struct hns3_adapter *)(adapter))->vf) #define HNS3_DEV_HW_TO_ADAPTER(hw) \ container_of(hw, struct hns3_adapter, hw)
@@ -999,10 +999,10 @@ static inline uint32_t hns3_read_reg(void *base, uint32_t reg)
#define NEXT_ITEM_OF_ACTION(act, actions, index) \ do { \ - act = (actions) + (index); \ - while (act->type == RTE_FLOW_ACTION_TYPE_VOID) { \ + (act) = (actions) + (index); \ + while ((act)->type == RTE_FLOW_ACTION_TYPE_VOID) { \ (index)++; \ - act = actions + index; \ + (act) = (actions) + (index); \ } \ } while (0)
@@ -1027,7 +1027,7 @@ hns3_atomic_clear_bit(unsigned int nr, volatile uint64_t *addr) __atomic_fetch_and(addr, ~(1UL << nr), __ATOMIC_RELAXED); }
-static inline int64_t +static inline uint64_t hns3_test_and_clear_bit(unsigned int nr, volatile uint64_t *addr) { uint64_t mask = (1UL << nr); diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index 2b4286d46d..1aee965e4a 100644 --- a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -66,7 +66,7 @@ static enum rte_flow_item_type tunnel_next_items[] = {
struct items_step_mngr { enum rte_flow_item_type *items; - int count; + size_t count; };
static inline void @@ -1141,7 +1141,7 @@ hns3_validate_item(const struct rte_flow_item *item, struct items_step_mngr step_mngr, struct rte_flow_error *error) { - int i; + uint32_t i;
if (item->last) return rte_flow_error_set(error, ENOTSUP, diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c index 3ca2e1e338..4bdcd6070b 100644 --- a/drivers/net/hns3/hns3_intr.c +++ b/drivers/net/hns3/hns3_intr.c @@ -16,12 +16,6 @@
#define SWITCH_CONTEXT_US 10
-#define HNS3_CHECK_MERGE_CNT(val) \ - do { \ - if (val) \ - hw->reset.stats.merge_cnt++; \ - } while (0) - static const char *reset_string[HNS3_MAX_RESET] = { "flr", "vf_func", "vf_pf_func", "vf_full", "vf_global", "pf_func", "global", "IMP", "none", @@ -2525,20 +2519,20 @@ static void hns3_clear_reset_level(struct hns3_hw *hw, uint64_t *levels) { uint64_t merge_cnt = hw->reset.stats.merge_cnt; - int64_t tmp; + uint64_t tmp;
switch (hw->reset.level) { case HNS3_IMP_RESET: hns3_atomic_clear_bit(HNS3_IMP_RESET, levels); tmp = hns3_test_and_clear_bit(HNS3_GLOBAL_RESET, levels); - HNS3_CHECK_MERGE_CNT(tmp); + merge_cnt = tmp > 0 ? merge_cnt + 1 : merge_cnt; tmp = hns3_test_and_clear_bit(HNS3_FUNC_RESET, levels); - HNS3_CHECK_MERGE_CNT(tmp); + merge_cnt = tmp > 0 ? merge_cnt + 1 : merge_cnt; break; case HNS3_GLOBAL_RESET: hns3_atomic_clear_bit(HNS3_GLOBAL_RESET, levels); tmp = hns3_test_and_clear_bit(HNS3_FUNC_RESET, levels); - HNS3_CHECK_MERGE_CNT(tmp); + merge_cnt = tmp > 0 ? merge_cnt + 1 : merge_cnt; break; case HNS3_FUNC_RESET: hns3_atomic_clear_bit(HNS3_FUNC_RESET, levels); @@ -2546,19 +2540,19 @@ hns3_clear_reset_level(struct hns3_hw *hw, uint64_t *levels) case HNS3_VF_RESET: hns3_atomic_clear_bit(HNS3_VF_RESET, levels); tmp = hns3_test_and_clear_bit(HNS3_VF_PF_FUNC_RESET, levels); - HNS3_CHECK_MERGE_CNT(tmp); + merge_cnt = tmp > 0 ? merge_cnt + 1 : merge_cnt; tmp = hns3_test_and_clear_bit(HNS3_VF_FUNC_RESET, levels); - HNS3_CHECK_MERGE_CNT(tmp); + merge_cnt = tmp > 0 ? merge_cnt + 1 : merge_cnt; break; case HNS3_VF_FULL_RESET: hns3_atomic_clear_bit(HNS3_VF_FULL_RESET, levels); tmp = hns3_test_and_clear_bit(HNS3_VF_FUNC_RESET, levels); - HNS3_CHECK_MERGE_CNT(tmp); + merge_cnt = tmp > 0 ? merge_cnt + 1 : merge_cnt; break; case HNS3_VF_PF_FUNC_RESET: hns3_atomic_clear_bit(HNS3_VF_PF_FUNC_RESET, levels); tmp = hns3_test_and_clear_bit(HNS3_VF_FUNC_RESET, levels); - HNS3_CHECK_MERGE_CNT(tmp); + merge_cnt = tmp > 0 ? merge_cnt + 1 : merge_cnt; break; case HNS3_VF_FUNC_RESET: hns3_atomic_clear_bit(HNS3_VF_FUNC_RESET, levels); @@ -2570,13 +2564,16 @@ hns3_clear_reset_level(struct hns3_hw *hw, uint64_t *levels) default: return; }; - if (merge_cnt != hw->reset.stats.merge_cnt) + + if (merge_cnt != hw->reset.stats.merge_cnt) { hns3_warn(hw, "No need to do low-level reset after %s reset. 
" "merge cnt: %" PRIu64 " total merge cnt: %" PRIu64, reset_string[hw->reset.level], hw->reset.stats.merge_cnt - merge_cnt, hw->reset.stats.merge_cnt); + hw->reset.stats.merge_cnt = merge_cnt; + } }
static bool diff --git a/drivers/net/hns3/hns3_intr.h b/drivers/net/hns3/hns3_intr.h index 1a0f196614..1490a5e387 100644 --- a/drivers/net/hns3/hns3_intr.h +++ b/drivers/net/hns3/hns3_intr.h @@ -170,7 +170,7 @@ struct hns3_hw_error_desc { const struct hns3_hw_error *hw_err; };
-int hns3_enable_hw_error_intr(struct hns3_adapter *hns, bool state); +int hns3_enable_hw_error_intr(struct hns3_adapter *hns, bool en); void hns3_handle_msix_error(struct hns3_adapter *hns, uint64_t *levels); void hns3_handle_ras_error(struct hns3_adapter *hns, uint64_t *levels); void hns3_config_mac_tnl_int(struct hns3_hw *hw, bool en); @@ -185,7 +185,7 @@ void hns3_schedule_reset(struct hns3_adapter *hns); void hns3_schedule_delayed_reset(struct hns3_adapter *hns); int hns3_reset_req_hw_reset(struct hns3_adapter *hns); int hns3_reset_process(struct hns3_adapter *hns, - enum hns3_reset_level reset_level); + enum hns3_reset_level new_level); void hns3_reset_abort(struct hns3_adapter *hns); void hns3_start_report_lse(struct rte_eth_dev *dev); void hns3_stop_report_lse(struct rte_eth_dev *dev); diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c index 6778e4cfc2..33392fd1f0 100644 --- a/drivers/net/hns3/hns3_regs.c +++ b/drivers/net/hns3/hns3_regs.c @@ -15,7 +15,7 @@ #define REG_NUM_PER_LINE 4 #define REG_LEN_PER_LINE (REG_NUM_PER_LINE * sizeof(uint32_t))
-static int hns3_get_dfx_reg_line(struct hns3_hw *hw, uint32_t *length); +static int hns3_get_dfx_reg_line(struct hns3_hw *hw, uint32_t *lines);
static const uint32_t cmdq_reg_addrs[] = {HNS3_CMDQ_TX_ADDR_L_REG, HNS3_CMDQ_TX_ADDR_H_REG, @@ -295,7 +295,7 @@ hns3_direct_access_regs(struct hns3_hw *hw, uint32_t *data) uint32_t *origin_data_ptr = data; uint32_t reg_offset; uint16_t i, j; - int reg_num; + size_t reg_num;
/* fetching per-PF registers values from PF PCIe register space */ reg_num = sizeof(cmdq_reg_addrs) / sizeof(uint32_t); diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c index 1003daf03e..fc912ed2e8 100644 --- a/drivers/net/hns3/hns3_rss.c +++ b/drivers/net/hns3/hns3_rss.c @@ -10,7 +10,7 @@ #include "hns3_logs.h"
/* Default hash keys */ -const uint8_t hns3_hash_key[] = { +const uint8_t hns3_hash_key[HNS3_RSS_KEY_SIZE] = { 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2, 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0, 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4, diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h index 78c9eff827..a12f8b7034 100644 --- a/drivers/net/hns3/hns3_rss.h +++ b/drivers/net/hns3/hns3_rss.h @@ -88,7 +88,7 @@ static inline uint32_t roundup_pow_of_two(uint32_t x) return 1UL << fls(x - 1); }
-extern const uint8_t hns3_hash_key[]; +extern const uint8_t hns3_hash_key[HNS3_RSS_KEY_SIZE];
struct hns3_adapter;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 089caccd7f..f7641b1309 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -2762,7 +2762,7 @@ hns3_rx_check_vec_support(__rte_unused struct rte_eth_dev *dev) }
uint16_t __rte_weak -hns3_recv_pkts_vec(__rte_unused void *tx_queue, +hns3_recv_pkts_vec(__rte_unused void *rx_queue, __rte_unused struct rte_mbuf **rx_pkts, __rte_unused uint16_t nb_pkts) { @@ -2770,7 +2770,7 @@ hns3_recv_pkts_vec(__rte_unused void *tx_queue, }
uint16_t __rte_weak -hns3_recv_pkts_vec_sve(__rte_unused void *tx_queue, +hns3_recv_pkts_vec_sve(__rte_unused void *rx_queue, __rte_unused struct rte_mbuf **rx_pkts, __rte_unused uint16_t nb_pkts) { diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h index 803e805a5b..87c7c115a1 100644 --- a/drivers/net/hns3/hns3_rxtx.h +++ b/drivers/net/hns3/hns3_rxtx.h @@ -691,10 +691,12 @@ int hns3_rxq_iterate(struct rte_eth_dev *dev, int (*callback)(struct hns3_rx_queue *, void *), void *arg); void hns3_dev_release_mbufs(struct hns3_adapter *hns); int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc, - unsigned int socket, const struct rte_eth_rxconf *conf, + unsigned int socket_id, + const struct rte_eth_rxconf *conf, struct rte_mempool *mp); int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc, - unsigned int socket, const struct rte_eth_txconf *conf); + unsigned int socket_id, + const struct rte_eth_txconf *conf); uint32_t hns3_rx_queue_count(void *rx_queue); int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id); int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id); @@ -704,9 +706,11 @@ uint16_t hns3_recv_pkts_simple(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); uint16_t hns3_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); -uint16_t hns3_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, +uint16_t hns3_recv_pkts_vec(void *__restrict rx_queue, + struct rte_mbuf **__restrict rx_pkts, uint16_t nb_pkts); -uint16_t hns3_recv_pkts_vec_sve(void *rx_queue, struct rte_mbuf **rx_pkts, +uint16_t hns3_recv_pkts_vec_sve(void *__restrict rx_queue, + struct rte_mbuf **__restrict rx_pkts, uint16_t nb_pkts); int hns3_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id, @@ -754,7 +758,7 @@ void hns3_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); void 
hns3_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo); -uint32_t hns3_get_tqp_reg_offset(uint16_t idx); +uint32_t hns3_get_tqp_reg_offset(uint16_t queue_id); int hns3_start_all_txqs(struct rte_eth_dev *dev); int hns3_start_all_rxqs(struct rte_eth_dev *dev); void hns3_stop_all_txqs(struct rte_eth_dev *dev); diff --git a/drivers/net/hns3/hns3_stats.h b/drivers/net/hns3/hns3_stats.h index b5cd6188b4..9d84072205 100644 --- a/drivers/net/hns3/hns3_stats.h +++ b/drivers/net/hns3/hns3_stats.h @@ -145,7 +145,8 @@ struct hns3_reset_stats; #define HNS3_IMISSED_STATS_FIELD_OFFSET(f) \ (offsetof(struct hns3_rx_missed_stats, f))
-int hns3_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *rte_stats); +int hns3_stats_get(struct rte_eth_dev *eth_dev, + struct rte_eth_stats *rte_stats); int hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n); int hns3_dev_xstats_reset(struct rte_eth_dev *dev); @@ -160,7 +161,7 @@ int hns3_dev_xstats_get_names_by_id(struct rte_eth_dev *dev, const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, uint32_t size); -int hns3_stats_reset(struct rte_eth_dev *dev); +int hns3_stats_reset(struct rte_eth_dev *eth_dev); int hns3_stats_init(struct hns3_hw *hw); void hns3_stats_uninit(struct hns3_hw *hw); int hns3_query_mac_stats_reg_num(struct hns3_hw *hw);
From: Chengwen Feng fengchengwen@huawei.com
Header files should be self-contained and should not depend cyclically on one another.
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Min Hu (Connor) humin29@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_cmd.h | 3 +++ drivers/net/hns3/hns3_common.c | 2 +- drivers/net/hns3/hns3_dcb.h | 4 ++++ drivers/net/hns3/hns3_ethdev.c | 2 +- drivers/net/hns3/hns3_fdir.h | 5 +++++ drivers/net/hns3/hns3_flow.h | 3 +++ drivers/net/hns3/hns3_intr.c | 2 +- drivers/net/hns3/hns3_mbx.h | 4 ++++ drivers/net/hns3/hns3_mp.h | 2 ++ drivers/net/hns3/hns3_regs.h | 3 +++ drivers/net/hns3/hns3_rss.h | 2 ++ drivers/net/hns3/hns3_rxtx.c | 2 +- drivers/net/hns3/hns3_rxtx.h | 9 +++++++++ drivers/net/hns3/hns3_stats.h | 5 +++++ drivers/net/hns3/hns3_tm.h | 2 ++ 15 files changed, 46 insertions(+), 4 deletions(-)
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h index 82c999061d..bee96c1e46 100644 --- a/drivers/net/hns3/hns3_cmd.h +++ b/drivers/net/hns3/hns3_cmd.h @@ -7,6 +7,9 @@
#include <stdint.h>
+#include <rte_byteorder.h> +#include <rte_spinlock.h> + #define HNS3_CMDQ_TX_TIMEOUT 30000 #define HNS3_CMDQ_CLEAR_WAIT_TIME 200 #define HNS3_CMDQ_RX_INVLD_B 0 diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c index 1a1a016aa6..716cebbcec 100644 --- a/drivers/net/hns3/hns3_common.c +++ b/drivers/net/hns3/hns3_common.c @@ -7,10 +7,10 @@ #include <ethdev_pci.h> #include <rte_pci.h>
-#include "hns3_common.h" #include "hns3_logs.h" #include "hns3_regs.h" #include "hns3_rxtx.h" +#include "hns3_common.h"
int hns3_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, diff --git a/drivers/net/hns3/hns3_dcb.h b/drivers/net/hns3/hns3_dcb.h index e06ec177c8..9d9e7684c1 100644 --- a/drivers/net/hns3/hns3_dcb.h +++ b/drivers/net/hns3/hns3_dcb.h @@ -7,7 +7,11 @@
#include <stdint.h>
+#include <ethdev_driver.h> +#include <rte_ethdev.h> + #include "hns3_cmd.h" +#include "hns3_ethdev.h"
#define HNS3_ETHER_MAX_RATE 100000
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index adc47d815d..7b0e8fc77d 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -6,7 +6,6 @@ #include <rte_bus_pci.h> #include <ethdev_pci.h>
-#include "hns3_ethdev.h" #include "hns3_common.h" #include "hns3_dump.h" #include "hns3_logs.h" @@ -16,6 +15,7 @@ #include "hns3_dcb.h" #include "hns3_mp.h" #include "hns3_flow.h" +#include "hns3_ethdev.h"
#define HNS3_SERVICE_INTERVAL 1000000 /* us */ #define HNS3_SERVICE_QUICK_INTERVAL 10 diff --git a/drivers/net/hns3/hns3_fdir.h b/drivers/net/hns3/hns3_fdir.h index 4d18759160..7be1c0a248 100644 --- a/drivers/net/hns3/hns3_fdir.h +++ b/drivers/net/hns3/hns3_fdir.h @@ -5,6 +5,10 @@ #ifndef _HNS3_FDIR_H_ #define _HNS3_FDIR_H_
+#include <stdint.h> + +#include <rte_flow.h> + struct hns3_fd_key_cfg { uint8_t key_sel; uint8_t inner_sipv6_word_en; @@ -177,6 +181,7 @@ struct hns3_fdir_info { };
struct hns3_adapter; +struct hns3_hw;
int hns3_init_fd_config(struct hns3_adapter *hns); int hns3_fdir_filter_init(struct hns3_adapter *hns); diff --git a/drivers/net/hns3/hns3_flow.h b/drivers/net/hns3/hns3_flow.h index 854fbb7ff0..ec94510152 100644 --- a/drivers/net/hns3/hns3_flow.h +++ b/drivers/net/hns3/hns3_flow.h @@ -6,6 +6,9 @@ #define _HNS3_FLOW_H_
#include <rte_flow.h> +#include <ethdev_driver.h> + +#include "hns3_rss.h"
struct hns3_flow_counter { LIST_ENTRY(hns3_flow_counter) next; /* Pointer to the next counter. */ diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c index 4bdcd6070b..57679254ee 100644 --- a/drivers/net/hns3/hns3_intr.c +++ b/drivers/net/hns3/hns3_intr.c @@ -10,9 +10,9 @@
#include "hns3_common.h" #include "hns3_logs.h" -#include "hns3_intr.h" #include "hns3_regs.h" #include "hns3_rxtx.h" +#include "hns3_intr.h"
#define SWITCH_CONTEXT_US 10
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index d637bd2b23..b6ccd9ff8c 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -5,6 +5,10 @@ #ifndef _HNS3_MBX_H_ #define _HNS3_MBX_H_
+#include <stdint.h> + +#include <rte_spinlock.h> + enum HNS3_MBX_OPCODE { HNS3_MBX_RESET = 0x01, /* (VF -> PF) assert reset */ HNS3_MBX_ASSERTING_RESET, /* (PF -> VF) PF is asserting reset */ diff --git a/drivers/net/hns3/hns3_mp.h b/drivers/net/hns3/hns3_mp.h index a74221d086..230230bbfe 100644 --- a/drivers/net/hns3/hns3_mp.h +++ b/drivers/net/hns3/hns3_mp.h @@ -5,6 +5,8 @@ #ifndef _HNS3_MP_H_ #define _HNS3_MP_H_
+#include <ethdev_driver.h> + /* Local data for primary or secondary process. */ struct hns3_process_local_data { bool init_done; /* Process action register completed flag. */ diff --git a/drivers/net/hns3/hns3_regs.h b/drivers/net/hns3/hns3_regs.h index 5812eb39db..2636429844 100644 --- a/drivers/net/hns3/hns3_regs.h +++ b/drivers/net/hns3/hns3_regs.h @@ -5,6 +5,9 @@ #ifndef _HNS3_REGS_H_ #define _HNS3_REGS_H_
+#include <ethdev_driver.h> +#include <rte_dev_info.h> + /* bar registers for cmdq */ #define HNS3_CMDQ_TX_ADDR_L_REG 0x27000 #define HNS3_CMDQ_TX_ADDR_H_REG 0x27004 diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h index a12f8b7034..ebb51b4c66 100644 --- a/drivers/net/hns3/hns3_rss.h +++ b/drivers/net/hns3/hns3_rss.h @@ -4,6 +4,7 @@
#ifndef _HNS3_RSS_H_ #define _HNS3_RSS_H_ + #include <rte_ethdev.h> #include <rte_flow.h>
@@ -91,6 +92,7 @@ static inline uint32_t roundup_pow_of_two(uint32_t x) extern const uint8_t hns3_hash_key[HNS3_RSS_KEY_SIZE];
struct hns3_adapter; +struct hns3_hw;
int hns3_dev_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index f7641b1309..8ad40a49c7 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -17,10 +17,10 @@ #endif
#include "hns3_common.h" -#include "hns3_rxtx.h" #include "hns3_regs.h" #include "hns3_logs.h" #include "hns3_mp.h" +#include "hns3_rxtx.h"
#define HNS3_CFG_DESC_NUM(num) ((num) / 8 - 1) #define HNS3_RX_RING_PREFETCTH_MASK 3 diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h index 87c7c115a1..f619d6d466 100644 --- a/drivers/net/hns3/hns3_rxtx.h +++ b/drivers/net/hns3/hns3_rxtx.h @@ -6,7 +6,16 @@ #define _HNS3_RXTX_H_
#include <stdint.h> + +#include <ethdev_driver.h> #include <rte_mbuf_core.h> +#include <rte_ethdev.h> +#include <rte_ethdev_core.h> +#include <rte_io.h> +#include <rte_mempool.h> +#include <rte_memzone.h> + +#include "hns3_ethdev.h"
#define HNS3_MIN_RING_DESC 64 #define HNS3_MAX_RING_DESC 32768 diff --git a/drivers/net/hns3/hns3_stats.h b/drivers/net/hns3/hns3_stats.h index 9d84072205..9a360f8870 100644 --- a/drivers/net/hns3/hns3_stats.h +++ b/drivers/net/hns3/hns3_stats.h @@ -5,6 +5,9 @@ #ifndef _HNS3_STATS_H_ #define _HNS3_STATS_H_
+#include <ethdev_driver.h> +#include <rte_ethdev.h> + /* TQP stats */ struct hns3_tqp_stats { uint64_t rcb_tx_ring_pktnum_rcd; /* Total num of transmitted packets */ @@ -145,6 +148,8 @@ struct hns3_reset_stats; #define HNS3_IMISSED_STATS_FIELD_OFFSET(f) \ (offsetof(struct hns3_rx_missed_stats, f))
+struct hns3_hw; + int hns3_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *rte_stats); int hns3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, diff --git a/drivers/net/hns3/hns3_tm.h b/drivers/net/hns3/hns3_tm.h index 83e9cc8ba9..47345eeed1 100644 --- a/drivers/net/hns3/hns3_tm.h +++ b/drivers/net/hns3/hns3_tm.h @@ -105,6 +105,8 @@ hns3_tm_calc_node_tc_no(struct hns3_tm_conf *conf, uint32_t node_id) return 0; }
+struct hns3_hw; + void hns3_tm_conf_init(struct rte_eth_dev *dev); void hns3_tm_conf_uninit(struct rte_eth_dev *dev); int hns3_tm_ops_get(struct rte_eth_dev *dev __rte_unused, void *arg);
From: Chengwen Feng fengchengwen@huawei.com
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_cmd.h | 19 ------------------- drivers/net/hns3/hns3_rss.h | 4 ---- 2 files changed, 23 deletions(-)
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h index bee96c1e46..902638ba99 100644 --- a/drivers/net/hns3/hns3_cmd.h +++ b/drivers/net/hns3/hns3_cmd.h @@ -59,11 +59,6 @@ enum hns3_cmd_return_status { HNS3_CMD_ROH_CHECK_FAIL = 12 };
-struct hns3_misc_vector { - uint8_t *addr; - int vector_irq; -}; - struct hns3_cmq { struct hns3_cmq_ring csq; struct hns3_cmq_ring crq; @@ -397,20 +392,6 @@ struct hns3_pkt_buf_alloc { struct hns3_shared_buf s_buf; };
-#define HNS3_RX_COM_WL_EN_B 15 -struct hns3_rx_com_wl_buf_cmd { - uint16_t high_wl; - uint16_t low_wl; - uint8_t rsv[20]; -}; - -#define HNS3_RX_PKT_EN_B 15 -struct hns3_rx_pkt_buf_cmd { - uint16_t high_pkt; - uint16_t low_pkt; - uint8_t rsv[20]; -}; - #define HNS3_PF_STATE_DONE_B 0 #define HNS3_PF_STATE_MAIN_B 1 #define HNS3_PF_STATE_BOND_B 2 diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h index ebb51b4c66..0d24436cbe 100644 --- a/drivers/net/hns3/hns3_rss.h +++ b/drivers/net/hns3/hns3_rss.h @@ -34,10 +34,6 @@ #define HNS3_RSS_HASH_ALGO_SYMMETRIC_TOEP 2 #define HNS3_RSS_HASH_ALGO_MASK 0xf
-struct hns3_rss_tuple_cfg { - uint64_t rss_tuple_fields; -}; - #define HNS3_RSS_QUEUES_BUFFER_NUM 64 /* Same as the Max rx/tx queue num */ struct hns3_rss_conf { /* RSS parameters :algorithm, flow_types, key, queue */
From: Chengwen Feng fengchengwen@huawei.com
Currently, the hns3 driver uses _HNS3_XXX conditional compilation macros as header include guards. But in the C11 standard, all identifiers starting with an underscore followed by an uppercase letter are reserved for the implementation. So this patch renames the guards.
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_cmd.h | 6 +++--- drivers/net/hns3/hns3_common.h | 6 +++--- drivers/net/hns3/hns3_dcb.h | 6 +++--- drivers/net/hns3/hns3_dump.h | 6 +++--- drivers/net/hns3/hns3_ethdev.h | 6 +++--- drivers/net/hns3/hns3_fdir.h | 6 +++--- drivers/net/hns3/hns3_flow.h | 6 +++--- drivers/net/hns3/hns3_intr.h | 6 +++--- drivers/net/hns3/hns3_logs.h | 6 +++--- drivers/net/hns3/hns3_mbx.h | 6 +++--- drivers/net/hns3/hns3_mp.h | 6 +++--- drivers/net/hns3/hns3_regs.h | 6 +++--- drivers/net/hns3/hns3_rss.h | 6 +++--- drivers/net/hns3/hns3_rxtx.h | 6 +++--- drivers/net/hns3/hns3_rxtx_vec.h | 6 +++--- drivers/net/hns3/hns3_rxtx_vec_neon.h | 6 +++--- drivers/net/hns3/hns3_stats.h | 6 +++--- drivers/net/hns3/hns3_tm.h | 6 +++--- 18 files changed, 54 insertions(+), 54 deletions(-)
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h index 902638ba99..8ac8b45819 100644 --- a/drivers/net/hns3/hns3_cmd.h +++ b/drivers/net/hns3/hns3_cmd.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_CMD_H_ -#define _HNS3_CMD_H_ +#ifndef HNS3_CMD_H +#define HNS3_CMD_H
#include <stdint.h>
@@ -1038,4 +1038,4 @@ int hns3_cmd_init(struct hns3_hw *hw); void hns3_cmd_destroy_queue(struct hns3_hw *hw); void hns3_cmd_uninit(struct hns3_hw *hw);
-#endif /* _HNS3_CMD_H_ */ +#endif /* HNS3_CMD_H */ diff --git a/drivers/net/hns3/hns3_common.h b/drivers/net/hns3/hns3_common.h index 2994e4a269..5aa001f0cc 100644 --- a/drivers/net/hns3/hns3_common.h +++ b/drivers/net/hns3/hns3_common.h @@ -2,8 +2,8 @@ * Copyright(C) 2021 HiSilicon Limited */
-#ifndef _HNS3_COMMON_H_ -#define _HNS3_COMMON_H_ +#ifndef HNS3_COMMON_H +#define HNS3_COMMON_H
#include <sys/time.h>
@@ -61,4 +61,4 @@ int hns3_restore_rx_interrupt(struct hns3_hw *hw);
int hns3_get_pci_revision_id(struct hns3_hw *hw, uint8_t *revision_id);
-#endif /* _HNS3_COMMON_H_ */ +#endif /* HNS3_COMMON_H */ diff --git a/drivers/net/hns3/hns3_dcb.h b/drivers/net/hns3/hns3_dcb.h index 9d9e7684c1..d5bb5edf4d 100644 --- a/drivers/net/hns3/hns3_dcb.h +++ b/drivers/net/hns3/hns3_dcb.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_DCB_H_ -#define _HNS3_DCB_H_ +#ifndef HNS3_DCB_H +#define HNS3_DCB_H
#include <stdint.h>
@@ -215,4 +215,4 @@ int hns3_update_queue_map_configure(struct hns3_adapter *hns); int hns3_port_shaper_update(struct hns3_hw *hw, uint32_t speed); uint8_t hns3_txq_mapped_tc_get(struct hns3_hw *hw, uint16_t txq_no);
-#endif /* _HNS3_DCB_H_ */ +#endif /* HNS3_DCB_H */ diff --git a/drivers/net/hns3/hns3_dump.h b/drivers/net/hns3/hns3_dump.h index 8ba7ee866a..616cb70d6e 100644 --- a/drivers/net/hns3/hns3_dump.h +++ b/drivers/net/hns3/hns3_dump.h @@ -2,9 +2,9 @@ * Copyright(C) 2022 HiSilicon Limited */
-#ifndef _HNS3_DUMP_H_ -#define _HNS3_DUMP_H_ +#ifndef HNS3_DUMP_H +#define HNS3_DUMP_H
int hns3_eth_dev_priv_dump(struct rte_eth_dev *dev, FILE *file);
-#endif /* _HNS3_DUMP_H_ */ +#endif /* HNS3_DUMP_H */ diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h index aad779e949..40476bf882 100644 --- a/drivers/net/hns3/hns3_ethdev.h +++ b/drivers/net/hns3/hns3_ethdev.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_ETHDEV_H_ -#define _HNS3_ETHDEV_H_ +#ifndef HNS3_ETHDEV_H +#define HNS3_ETHDEV_H
#include <pthread.h> #include <ethdev_driver.h> @@ -1074,4 +1074,4 @@ is_reset_pending(struct hns3_adapter *hns) return ret; }
-#endif /* _HNS3_ETHDEV_H_ */ +#endif /* HNS3_ETHDEV_H */ diff --git a/drivers/net/hns3/hns3_fdir.h b/drivers/net/hns3/hns3_fdir.h index 7be1c0a248..de2422e12f 100644 --- a/drivers/net/hns3/hns3_fdir.h +++ b/drivers/net/hns3/hns3_fdir.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_FDIR_H_ -#define _HNS3_FDIR_H_ +#ifndef HNS3_FDIR_H +#define HNS3_FDIR_H
#include <stdint.h>
@@ -192,4 +192,4 @@ int hns3_clear_all_fdir_filter(struct hns3_adapter *hns); int hns3_fd_get_count(struct hns3_hw *hw, uint32_t id, uint64_t *value); int hns3_restore_all_fdir_filter(struct hns3_adapter *hns);
-#endif /* _HNS3_FDIR_H_ */ +#endif /* HNS3_FDIR_H */ diff --git a/drivers/net/hns3/hns3_flow.h b/drivers/net/hns3/hns3_flow.h index ec94510152..e4b2fdf2e6 100644 --- a/drivers/net/hns3/hns3_flow.h +++ b/drivers/net/hns3/hns3_flow.h @@ -2,8 +2,8 @@ * Copyright(C) 2021 HiSilicon Limited */
-#ifndef _HNS3_FLOW_H_ -#define _HNS3_FLOW_H_ +#ifndef HNS3_FLOW_H +#define HNS3_FLOW_H
#include <rte_flow.h> #include <ethdev_driver.h> @@ -54,4 +54,4 @@ void hns3_flow_init(struct rte_eth_dev *dev); void hns3_flow_uninit(struct rte_eth_dev *dev); int hns3_restore_filter(struct hns3_adapter *hns);
-#endif /* _HNS3_FLOW_H_ */ +#endif /* HNS3_FLOW_H */ diff --git a/drivers/net/hns3/hns3_intr.h b/drivers/net/hns3/hns3_intr.h index 1490a5e387..aca1c0722c 100644 --- a/drivers/net/hns3/hns3_intr.h +++ b/drivers/net/hns3/hns3_intr.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_INTR_H_ -#define _HNS3_INTR_H_ +#ifndef HNS3_INTR_H +#define HNS3_INTR_H
#include <stdint.h>
@@ -190,4 +190,4 @@ void hns3_reset_abort(struct hns3_adapter *hns); void hns3_start_report_lse(struct rte_eth_dev *dev); void hns3_stop_report_lse(struct rte_eth_dev *dev);
-#endif /* _HNS3_INTR_H_ */ +#endif /* HNS3_INTR_H */ diff --git a/drivers/net/hns3/hns3_logs.h b/drivers/net/hns3/hns3_logs.h index 072a53bd69..c880f752ab 100644 --- a/drivers/net/hns3/hns3_logs.h +++ b/drivers/net/hns3/hns3_logs.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_LOGS_H_ -#define _HNS3_LOGS_H_ +#ifndef HNS3_LOGS_H +#define HNS3_LOGS_H
extern int hns3_logtype_init; #define PMD_INIT_LOG(level, fmt, args...) \ @@ -31,4 +31,4 @@ extern int hns3_logtype_driver; #define hns3_dbg(hw, fmt, args...) \ PMD_DRV_LOG_RAW(hw, RTE_LOG_DEBUG, fmt "\n", ## args)
-#endif /* _HNS3_LOGS_H_ */ +#endif /* HNS3_LOGS_H */ diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index b6ccd9ff8c..c71f43238c 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_MBX_H_ -#define _HNS3_MBX_H_ +#ifndef HNS3_MBX_H +#define HNS3_MBX_H
#include <stdint.h>
@@ -172,4 +172,4 @@ void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); int hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, const uint8_t *msg_data, uint8_t msg_len, bool need_resp, uint8_t *resp_data, uint16_t resp_len); -#endif /* _HNS3_MBX_H_ */ +#endif /* HNS3_MBX_H */ diff --git a/drivers/net/hns3/hns3_mp.h b/drivers/net/hns3/hns3_mp.h index 230230bbfe..5dc32a41d4 100644 --- a/drivers/net/hns3/hns3_mp.h +++ b/drivers/net/hns3/hns3_mp.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_MP_H_ -#define _HNS3_MP_H_ +#ifndef HNS3_MP_H +#define HNS3_MP_H
#include <ethdev_driver.h>
@@ -21,4 +21,4 @@ void hns3_mp_req_stop_tx(struct rte_eth_dev *dev); int hns3_mp_init(struct rte_eth_dev *dev); void hns3_mp_uninit(struct rte_eth_dev *dev);
-#endif /* _HNS3_MP_H_ */ +#endif /* HNS3_MP_H */ diff --git a/drivers/net/hns3/hns3_regs.h b/drivers/net/hns3/hns3_regs.h index 2636429844..459bbaf773 100644 --- a/drivers/net/hns3/hns3_regs.h +++ b/drivers/net/hns3/hns3_regs.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_REGS_H_ -#define _HNS3_REGS_H_ +#ifndef HNS3_REGS_H +#define HNS3_REGS_H
#include <ethdev_driver.h> #include <rte_dev_info.h> @@ -153,4 +153,4 @@ #define HNS3_RL_USEC_TO_REG(rl_usec) ((rl_usec) >> 2)
int hns3_get_regs(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs); -#endif /* _HNS3_REGS_H_ */ +#endif /* HNS3_REGS_H */ diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h index 0d24436cbe..5c288c8bb2 100644 --- a/drivers/net/hns3/hns3_rss.h +++ b/drivers/net/hns3/hns3_rss.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_RSS_H_ -#define _HNS3_RSS_H_ +#ifndef HNS3_RSS_H +#define HNS3_RSS_H
#include <rte_ethdev.h> #include <rte_flow.h> @@ -109,4 +109,4 @@ void hns3_rss_uninit(struct hns3_adapter *hns); int hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw, uint64_t rss_hf); int hns3_rss_set_algo_key(struct hns3_hw *hw, const uint8_t *key);
-#endif /* _HNS3_RSS_H_ */ +#endif /* HNS3_RSS_H */ diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h index f619d6d466..ed40621b3a 100644 --- a/drivers/net/hns3/hns3_rxtx.h +++ b/drivers/net/hns3/hns3_rxtx.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_RXTX_H_ -#define _HNS3_RXTX_H_ +#ifndef HNS3_RXTX_H +#define HNS3_RXTX_H
#include <stdint.h>
@@ -780,4 +780,4 @@ void hns3_tx_push_init(struct rte_eth_dev *dev); void hns3_stop_tx_datapath(struct rte_eth_dev *dev); void hns3_start_tx_datapath(struct rte_eth_dev *dev);
-#endif /* _HNS3_RXTX_H_ */ +#endif /* HNS3_RXTX_H */ diff --git a/drivers/net/hns3/hns3_rxtx_vec.h b/drivers/net/hns3/hns3_rxtx_vec.h index d13f18627d..2c8a91921e 100644 --- a/drivers/net/hns3/hns3_rxtx_vec.h +++ b/drivers/net/hns3/hns3_rxtx_vec.h @@ -2,8 +2,8 @@ * Copyright(c) 2020-2021 HiSilicon Limited. */
-#ifndef _HNS3_RXTX_VEC_H_ -#define _HNS3_RXTX_VEC_H_ +#ifndef HNS3_RXTX_VEC_H +#define HNS3_RXTX_VEC_H
#include "hns3_rxtx.h" #include "hns3_ethdev.h" @@ -94,4 +94,4 @@ hns3_rx_reassemble_pkts(struct rte_mbuf **rx_pkts,
return count; } -#endif /* _HNS3_RXTX_VEC_H_ */ +#endif /* HNS3_RXTX_VEC_H */ diff --git a/drivers/net/hns3/hns3_rxtx_vec_neon.h b/drivers/net/hns3/hns3_rxtx_vec_neon.h index 0edd4756f1..55d9bf817d 100644 --- a/drivers/net/hns3/hns3_rxtx_vec_neon.h +++ b/drivers/net/hns3/hns3_rxtx_vec_neon.h @@ -2,8 +2,8 @@ * Copyright(c) 2020-2021 HiSilicon Limited. */
-#ifndef _HNS3_RXTX_VEC_NEON_H_ -#define _HNS3_RXTX_VEC_NEON_H_ +#ifndef HNS3_RXTX_VEC_NEON_H +#define HNS3_RXTX_VEC_NEON_H
#include <arm_neon.h>
@@ -299,4 +299,4 @@ hns3_recv_burst_vec(struct hns3_rx_queue *__restrict rxq,
return nb_rx; } -#endif /* _HNS3_RXTX_VEC_NEON_H_ */ +#endif /* HNS3_RXTX_VEC_NEON_H */ diff --git a/drivers/net/hns3/hns3_stats.h b/drivers/net/hns3/hns3_stats.h index 9a360f8870..74bc4173cc 100644 --- a/drivers/net/hns3/hns3_stats.h +++ b/drivers/net/hns3/hns3_stats.h @@ -2,8 +2,8 @@ * Copyright(c) 2018-2021 HiSilicon Limited. */
-#ifndef _HNS3_STATS_H_ -#define _HNS3_STATS_H_ +#ifndef HNS3_STATS_H +#define HNS3_STATS_H
#include <ethdev_driver.h> #include <rte_ethdev.h> @@ -172,4 +172,4 @@ void hns3_stats_uninit(struct hns3_hw *hw); int hns3_query_mac_stats_reg_num(struct hns3_hw *hw); void hns3_update_hw_stats(struct hns3_hw *hw);
-#endif /* _HNS3_STATS_H_ */ +#endif /* HNS3_STATS_H */ diff --git a/drivers/net/hns3/hns3_tm.h b/drivers/net/hns3/hns3_tm.h index 47345eeed1..0cac1a5bb2 100644 --- a/drivers/net/hns3/hns3_tm.h +++ b/drivers/net/hns3/hns3_tm.h @@ -2,8 +2,8 @@ * Copyright(c) 2020-2021 HiSilicon Limited. */
-#ifndef _HNS3_TM_H_ -#define _HNS3_TM_H_ +#ifndef HNS3_TM_H +#define HNS3_TM_H
#include <stdint.h> #include <rte_tailq.h> @@ -114,4 +114,4 @@ void hns3_tm_dev_start_proc(struct hns3_hw *hw); void hns3_tm_dev_stop_proc(struct hns3_hw *hw); int hns3_tm_conf_update(struct hns3_hw *hw);
-#endif /* _HNS3_TM_H */ +#endif /* HNS3_TM_H */
Currently, the hns3 driver uses 'ipv4-other' and 'ipv6-other' as the flags for IP packets when judging whether to enable an RSS tuple field. But a user may use 'RTE_ETH_RSS_IPV4' or 'RTE_ETH_RSS_IPV6' as the flag instead. So this patch adds handling for these macros.
Fixes: 806f1d5ab0e3 ("net/hns3: set RSS hash type input configuration") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rss.c | 14 ++++++++++++++ drivers/net/hns3/hns3_rss.h | 2 ++ 2 files changed, 16 insertions(+)
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c index fc912ed2e8..e7e114727f 100644 --- a/drivers/net/hns3/hns3_rss.c +++ b/drivers/net/hns3/hns3_rss.c @@ -102,6 +102,10 @@ static const struct { BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) }, { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) }, + { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) }, + { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_L3_DST_ONLY, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) }, { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY, @@ -134,6 +138,10 @@ static const struct { BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) }, { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY, BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) }, + { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY, + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) }, + { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_L3_DST_ONLY, + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY, BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) }, { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY, @@ -159,6 +167,9 @@ static const struct { BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) | BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) }, + { RTE_ETH_RSS_IPV4, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV4_OTHER, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, @@ -177,6 +188,9 @@ static const struct { BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) | BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) }, + { RTE_ETH_RSS_IPV6, + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) | + 
BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV6_OTHER, BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) } diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h index 5c288c8bb2..9471e7039d 100644 --- a/drivers/net/hns3/hns3_rss.h +++ b/drivers/net/hns3/hns3_rss.h @@ -9,11 +9,13 @@ #include <rte_flow.h>
#define HNS3_ETH_RSS_SUPPORT ( \ + RTE_ETH_RSS_IPV4 | \ RTE_ETH_RSS_FRAG_IPV4 | \ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \ + RTE_ETH_RSS_IPV6 | \ RTE_ETH_RSS_FRAG_IPV6 | \ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
Fix spelling errors in the IPv6-SCTP tuple field macros ('FILED' instead of 'FIELD').
Fixes: 1bc633c34008 ("net/hns3: enable RSS for IPv6-SCTP dst/src port fields") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rss.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c index e7e114727f..6d71ee94a9 100644 --- a/drivers/net/hns3/hns3_rss.c +++ b/drivers/net/hns3/hns3_rss.c @@ -57,8 +57,8 @@ enum hns3_tuple_field { HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S,
/* IPV6_SCTP ENABLE FIELD */ - HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D = 48, - HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S, + HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_D = 48, + HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_S, HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D, HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S, HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER, @@ -135,9 +135,9 @@ static const struct { { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY, - BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) }, + BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_S) }, { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY, - BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) }, + BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_D) }, { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY, BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) }, { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_L3_DST_ONLY, @@ -185,8 +185,8 @@ static const struct { BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) }, { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) | - BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) | - BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_D) | + BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) }, { RTE_ETH_RSS_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
When a user sets 'rss_hf' with only 'ipv4', hns3 enables all tuple fields for the 'ipv4' flow type. But if the user sets 'rss_hf' with 'ipv4-tcp', 'ipv4' and 'l4-src-only', the driver does not enable all tuple fields for the 'ipv4' flow type.
Fixes: 806f1d5ab0e3 ("net/hns3: set RSS hash type input configuration") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_rss.c | 266 ++++++++++++++++++++++++------------ 1 file changed, 176 insertions(+), 90 deletions(-)
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c index 6d71ee94a9..ea745c791f 100644 --- a/drivers/net/hns3/hns3_rss.c +++ b/drivers/net/hns3/hns3_rss.c @@ -70,130 +70,209 @@ enum hns3_tuple_field { HNS3_RSS_FIELD_IPV6_FRAG_IP_S };
+enum hns3_rss_tuple_type { + HNS3_RSS_IP_TUPLE, + HNS3_RSS_IP_L4_TUPLE, +}; + static const struct { uint64_t rss_types; + uint16_t tuple_type; uint64_t rss_field; } hns3_set_tuple_table[] = { + /* IPV4-FRAG */ { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) }, { RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) }, + { RTE_ETH_RSS_FRAG_IPV4, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) }, + + /* IPV4 */ + { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) }, + { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, + { RTE_ETH_RSS_IPV4, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, + + /* IPV4-OTHER */ + { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) }, + { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, + { RTE_ETH_RSS_NONFRAG_IPV4_OTHER, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, + + /* IPV4-TCP */ { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) }, { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) }, { RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) }, + { RTE_ETH_RSS_NONFRAG_IPV4_TCP, + 
HNS3_RSS_IP_L4_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) }, + + /* IPV4-UDP */ { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) }, { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) }, { RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) }, + { RTE_ETH_RSS_NONFRAG_IPV4_UDP, + HNS3_RSS_IP_L4_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) }, + + /* IPV4-SCTP */ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) }, { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) }, { RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) }, - { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY, - BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) }, - { RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_L3_DST_ONLY, - BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, - { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY, - BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) }, - { RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY, - BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, + { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, + HNS3_RSS_IP_L4_TUPLE, + 
BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) | + BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) }, + + /* IPV6-FRAG */ { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) }, { RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) }, + { RTE_ETH_RSS_FRAG_IPV6, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) }, + + /* IPV6 */ + { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) }, + { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }, + { RTE_ETH_RSS_IPV6, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }, + + /* IPV6-OTHER */ + { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) }, + { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }, + { RTE_ETH_RSS_NONFRAG_IPV6_OTHER, + HNS3_RSS_IP_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }, + + /* IPV6-TCP */ { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) }, { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) }, { RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) }, + { RTE_ETH_RSS_NONFRAG_IPV6_TCP, + 
HNS3_RSS_IP_L4_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) | + BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) }, + + /* IPV6-UDP */ { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) }, { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) }, { RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) }, + { RTE_ETH_RSS_NONFRAG_IPV6_UDP, + HNS3_RSS_IP_L4_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) | + BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) | + BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) }, + + /* IPV6-SCTP */ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) }, { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) }, { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_S) }, { RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY, + HNS3_RSS_IP_L4_TUPLE, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_D) }, - { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY, - BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) }, - { RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_L3_DST_ONLY, - BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }, - { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY, - BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) }, - { RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY, - BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }, -}; - -static const struct { - uint64_t rss_types; - uint64_t rss_field; -} 
hns3_set_rss_types[] = { - { RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) }, - { RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) }, - { RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) }, - { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) }, - { RTE_ETH_RSS_IPV4, - BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, - { RTE_ETH_RSS_NONFRAG_IPV4_OTHER, - BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) }, - { RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) }, - { RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) | - BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) }, - { RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) | - BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) }, - { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) | + { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, + HNS3_RSS_IP_L4_TUPLE, + BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) | BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) | BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_D) | BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_S) | 
BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) }, - { RTE_ETH_RSS_IPV6, - BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }, - { RTE_ETH_RSS_NONFRAG_IPV6_OTHER, - BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) | - BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) } };
/* @@ -321,46 +400,53 @@ hns3_rss_reset_indir_table(struct hns3_hw *hw) return ret; }
+static uint64_t +hns3_rss_calc_tuple_filed(uint64_t rss_hf) +{ + uint64_t l3_only_mask = RTE_ETH_RSS_L3_SRC_ONLY | + RTE_ETH_RSS_L3_DST_ONLY; + uint64_t l4_only_mask = RTE_ETH_RSS_L4_SRC_ONLY | + RTE_ETH_RSS_L4_DST_ONLY; + uint64_t l3_l4_only_mask = l3_only_mask | l4_only_mask; + bool has_l3_l4_only = !!(rss_hf & l3_l4_only_mask); + bool has_l3_only = !!(rss_hf & l3_only_mask); + uint64_t tuple = 0; + uint32_t i; + + for (i = 0; i < RTE_DIM(hns3_set_tuple_table); i++) { + if ((rss_hf & hns3_set_tuple_table[i].rss_types) != + hns3_set_tuple_table[i].rss_types) + continue; + + if (hns3_set_tuple_table[i].tuple_type == HNS3_RSS_IP_TUPLE) { + if (hns3_set_tuple_table[i].rss_types & l3_only_mask || + !has_l3_only) + tuple |= hns3_set_tuple_table[i].rss_field; + continue; + } + + /* For IP types with L4, we need check both L3 and L4 */ + if (hns3_set_tuple_table[i].rss_types & l3_l4_only_mask || + !has_l3_l4_only) + tuple |= hns3_set_tuple_table[i].rss_field; + } + + return tuple; +} + int hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw, uint64_t rss_hf) { struct hns3_rss_input_tuple_cmd *req; struct hns3_cmd_desc desc; - uint32_t fields_count = 0; /* count times for setting tuple fields */ - uint32_t i; + uint64_t tuple_field; int ret;
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RSS_INPUT_TUPLE, false); - req = (struct hns3_rss_input_tuple_cmd *)desc.data;
- for (i = 0; i < RTE_DIM(hns3_set_tuple_table); i++) { - if ((rss_hf & hns3_set_tuple_table[i].rss_types) == - hns3_set_tuple_table[i].rss_types) { - req->tuple_field |= - rte_cpu_to_le_64(hns3_set_tuple_table[i].rss_field); - fields_count++; - } - } - - /* - * When user does not specify the following types or a combination of - * the following types, it enables all fields for the supported RSS - * types. the following types as: - * - RTE_ETH_RSS_L3_SRC_ONLY - * - RTE_ETH_RSS_L3_DST_ONLY - * - RTE_ETH_RSS_L4_SRC_ONLY - * - RTE_ETH_RSS_L4_DST_ONLY - */ - if (fields_count == 0) { - for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) { - if ((rss_hf & hns3_set_rss_types[i].rss_types) == - hns3_set_rss_types[i].rss_types) - req->tuple_field |= rte_cpu_to_le_64( - hns3_set_rss_types[i].rss_field); - } - } - + tuple_field = hns3_rss_calc_tuple_filed(rss_hf); + req->tuple_field = rte_cpu_to_le_64(tuple_field); ret = hns3_cmd_send(hw, &desc, 1); if (ret) { hns3_err(hw, "Update RSS flow types tuples failed %d", ret);
When a user sets 'L3_SRC/DST_ONLY' or 'L4_SRC/DST_ONLY' in 'rss_hf' without specifying a packet type, these types are not set to hardware. So this patch adds a check for them.
Fixes: 806f1d5ab0e3 ("net/hns3: set RSS hash type input configuration") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com --- drivers/net/hns3/hns3_rss.c | 31 +++++++++++++++++++++++++++++-- 1 file changed, 29 insertions(+), 2 deletions(-)
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c index ea745c791f..ca5a129234 100644 --- a/drivers/net/hns3/hns3_rss.c +++ b/drivers/net/hns3/hns3_rss.c @@ -400,8 +400,34 @@ hns3_rss_reset_indir_table(struct hns3_hw *hw) return ret; }
+static void +hns3_rss_check_l3l4_types(struct hns3_hw *hw, uint64_t rss_hf) +{ + uint64_t ip_mask = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | + RTE_ETH_RSS_NONFRAG_IPV4_OTHER | + RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | + RTE_ETH_RSS_NONFRAG_IPV6_OTHER; + uint64_t l4_mask = RTE_ETH_RSS_NONFRAG_IPV4_TCP | + RTE_ETH_RSS_NONFRAG_IPV4_UDP | + RTE_ETH_RSS_NONFRAG_IPV4_SCTP | + RTE_ETH_RSS_NONFRAG_IPV6_TCP | + RTE_ETH_RSS_NONFRAG_IPV6_UDP | + RTE_ETH_RSS_NONFRAG_IPV6_SCTP; + uint64_t l3_src_dst_mask = RTE_ETH_RSS_L3_SRC_ONLY | + RTE_ETH_RSS_L3_DST_ONLY; + uint64_t l4_src_dst_mask = RTE_ETH_RSS_L4_SRC_ONLY | + RTE_ETH_RSS_L4_DST_ONLY; + + if (rss_hf & l3_src_dst_mask && + !(rss_hf & ip_mask || rss_hf & l4_mask)) + hns3_warn(hw, "packet type isn't specified, L3_SRC/DST_ONLY is ignored."); + + if (rss_hf & l4_src_dst_mask && !(rss_hf & l4_mask)) + hns3_warn(hw, "packet type isn't specified, L4_SRC/DST_ONLY is ignored."); +} + static uint64_t -hns3_rss_calc_tuple_filed(uint64_t rss_hf) +hns3_rss_calc_tuple_filed(struct hns3_hw *hw, uint64_t rss_hf) { uint64_t l3_only_mask = RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY; @@ -430,6 +456,7 @@ hns3_rss_calc_tuple_filed(uint64_t rss_hf) !has_l3_l4_only) tuple |= hns3_set_tuple_table[i].rss_field; } + hns3_rss_check_l3l4_types(hw, rss_hf);
return tuple; } @@ -445,7 +472,7 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw, uint64_t rss_hf) hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_RSS_INPUT_TUPLE, false); req = (struct hns3_rss_input_tuple_cmd *)desc.data;
- tuple_field = hns3_rss_calc_tuple_filed(rss_hf); + tuple_field = hns3_rss_calc_tuple_filed(hw, rss_hf); req->tuple_field = rte_cpu_to_le_64(tuple_field); ret = hns3_cmd_send(hw, &desc, 1); if (ret) {
From: Chengwen Feng fengchengwen@huawei.com
The VF's command receive queue is mainly used to receive mailbox messages from the PF. There are two types of mailbox messages: request-response messages and messages pushed by the PF.

Two types of threads can handle these messages: 1) the interrupt thread of the main process, which can handle both types of messages; 2) other threads, which can handle only request-response messages.

The collaboration mechanism between the two kinds of threads is that other threads set the opcode of each processed message to zero so that the interrupt thread of the main process does not process these messages again. Because other threads can process only part of the messages, the next-to-use pointer of the command receive queue must not be updated after their processing completes. Otherwise, some messages (e.g. messages pushed by the PF) may be discarded.

Unfortunately, the patch being reverted updates the next-to-use pointer of the command receive queue in the context of other threads, which leads to some mailbox messages being discarded.
So this commit reverts commit 599ef84add7e ("net/hns3: fix mailbox communication with HW").
Fixes: 599ef84add7e ("net/hns3: fix mailbox communication with HW") Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_mbx.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index b3563d4694..2de55a6417 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -436,8 +436,10 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) next_to_use = (next_to_use + 1) % hw->cmq.crq.desc_num; }
- crq->next_to_use = next_to_use; - hns3_write_dev(hw, HNS3_CMDQ_RX_HEAD_REG, crq->next_to_use); + /* + * Note: the crq->next_to_use field should not be updated, otherwise, + * mailbox messages may be discarded. + */ }
void
From: Chengwen Feng fengchengwen@huawei.com
The VF's command receive queue is mainly used to receive mailbox messages from the PF. There are two types of mailbox messages: request-response messages and messages pushed by the PF.

Two types of threads can handle these messages: 1) the interrupt thread of the main process, which can handle both types of messages; 2) other threads, which can handle only request-response messages.

The collaboration mechanism between the two kinds of threads is that other threads set the opcode of each processed message to zero so that the interrupt thread of the main process does not process these messages again.

Unfortunately, other threads clear the opcode of the message pointed to by 'crq->next_to_use', which stays fixed during the loop, instead of the message pointed to by the local 'next_to_use' variable.
Fixes: dbbbad23e380 ("net/hns3: fix VF handling LSC event in secondary process") Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_mbx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 2de55a6417..9a05f0d1ee 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -429,7 +429,7 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) * Clear opcode to inform intr thread don't process * again. */ - crq->desc[crq->next_to_use].opcode = 0; + crq->desc[next_to_use].opcode = 0; }
scan_next:
From: Jie Hai haijie1@huawei.com
When a Tx packet is shorter than the minimum length the hardware supports, hns3 pads it to the minimum frame length to avoid a hardware error. Currently, this length is fixed by a macro, which is very unfavorable for subsequent hardware evolution. So obtain it from the firmware report instead.
Fixes: 395b5e08ef8d ("net/hns3: add Tx short frame padding compatibility") Cc: stable@dpdk.org
Signed-off-by: Jie Hai haijie1@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com --- drivers/net/hns3/hns3_cmd.h | 6 ++++++ drivers/net/hns3/hns3_ethdev.c | 4 +++- drivers/net/hns3/hns3_ethdev.h | 3 +-- drivers/net/hns3/hns3_ethdev_vf.c | 4 +++- 4 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/drivers/net/hns3/hns3_cmd.h b/drivers/net/hns3/hns3_cmd.h index 8ac8b45819..994dfc48cc 100644 --- a/drivers/net/hns3/hns3_cmd.h +++ b/drivers/net/hns3/hns3_cmd.h @@ -967,6 +967,12 @@ struct hns3_dev_specs_0_cmd { uint32_t max_tm_rate; };
+struct hns3_dev_specs_1_cmd { + uint8_t rsv0[12]; + uint8_t min_tx_pkt_len; + uint8_t rsv1[11]; +}; + struct hns3_query_rpu_cmd { uint32_t tc_queue_num; uint32_t rsv1[2]; diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 7b0e8fc77d..7330515535 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -2661,14 +2661,17 @@ static void hns3_parse_dev_specifications(struct hns3_hw *hw, struct hns3_cmd_desc *desc) { struct hns3_dev_specs_0_cmd *req0; + struct hns3_dev_specs_1_cmd *req1;
req0 = (struct hns3_dev_specs_0_cmd *)desc[0].data; + req1 = (struct hns3_dev_specs_1_cmd *)desc[1].data;
hw->max_non_tso_bd_num = req0->max_non_tso_bd_num; hw->rss_ind_tbl_size = rte_le_to_cpu_16(req0->rss_ind_tbl_size); hw->rss_key_size = rte_le_to_cpu_16(req0->rss_key_size); hw->max_tm_rate = rte_le_to_cpu_32(req0->max_tm_rate); hw->intr.int_ql_max = rte_le_to_cpu_16(req0->intr_ql_max); + hw->min_tx_pkt_len = req1->min_tx_pkt_len; }
static int @@ -2763,7 +2766,6 @@ hns3_get_capability(struct hns3_hw *hw) hw->tso_mode = HNS3_TSO_HW_CAL_PSEUDO_H_CSUM; hw->vlan_mode = HNS3_HW_SHIFT_AND_DISCARD_MODE; hw->drop_stats_mode = HNS3_PKTS_DROP_STATS_MODE2; - hw->min_tx_pkt_len = HNS3_HIP09_MIN_TX_PKT_LEN; pf->tqp_config_mode = HNS3_FLEX_MAX_TQP_NUM_MODE; hw->rss_info.ipv6_sctp_offload_supported = true; hw->udp_cksum_mode = HNS3_SPECIAL_PORT_HW_CKSUM_MODE; diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h index 40476bf882..4406611fe9 100644 --- a/drivers/net/hns3/hns3_ethdev.h +++ b/drivers/net/hns3/hns3_ethdev.h @@ -75,7 +75,6 @@ #define HNS3_DEFAULT_MTU 1500UL #define HNS3_DEFAULT_FRAME_LEN (HNS3_DEFAULT_MTU + HNS3_ETH_OVERHEAD) #define HNS3_HIP08_MIN_TX_PKT_LEN 33 -#define HNS3_HIP09_MIN_TX_PKT_LEN 9
#define HNS3_BITS_PER_BYTE 8
@@ -550,7 +549,7 @@ struct hns3_hw { * The minimum length of the packet supported by hardware in the Tx * direction. */ - uint32_t min_tx_pkt_len; + uint8_t min_tx_pkt_len;
struct hns3_queue_intr intr; /* diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 72d60191ab..6976a9f23d 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -701,13 +701,16 @@ static void hns3vf_parse_dev_specifications(struct hns3_hw *hw, struct hns3_cmd_desc *desc) { struct hns3_dev_specs_0_cmd *req0; + struct hns3_dev_specs_1_cmd *req1;
req0 = (struct hns3_dev_specs_0_cmd *)desc[0].data; + req1 = (struct hns3_dev_specs_1_cmd *)desc[1].data;
hw->max_non_tso_bd_num = req0->max_non_tso_bd_num; hw->rss_ind_tbl_size = rte_le_to_cpu_16(req0->rss_ind_tbl_size); hw->rss_key_size = rte_le_to_cpu_16(req0->rss_key_size); hw->intr.int_ql_max = rte_le_to_cpu_16(req0->intr_ql_max); + hw->min_tx_pkt_len = req1->min_tx_pkt_len; }
static int @@ -846,7 +849,6 @@ hns3vf_get_capability(struct hns3_hw *hw) hw->intr.gl_unit = HNS3_INTR_COALESCE_GL_UINT_1US; hw->tso_mode = HNS3_TSO_HW_CAL_PSEUDO_H_CSUM; hw->drop_stats_mode = HNS3_PKTS_DROP_STATS_MODE2; - hw->min_tx_pkt_len = HNS3_HIP09_MIN_TX_PKT_LEN; hw->rss_info.ipv6_sctp_offload_supported = true; hw->promisc_mode = HNS3_LIMIT_PROMISC_MODE;
From: "Min Hu (Connor)" humin29@huawei.com
Add the ethdev Rx/Tx descriptor dump API, which provides functions for querying descriptors from the device. HW descriptor info differs between NICs; it shows the I/O process, which is important for debugging. Because the information differs between NICs, a new device-specific API is introduced.
Signed-off-by: Min Hu (Connor) humin29@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com Reviewed-by: Ferruh Yigit ferruh.yigit@xilinx.com --- lib/ethdev/ethdev_driver.h | 53 ++++++++++++++++++++++++++++++++++++ lib/ethdev/rte_ethdev.c | 52 +++++++++++++++++++++++++++++++++++ lib/ethdev/rte_ethdev.h | 55 ++++++++++++++++++++++++++++++++++++++ lib/ethdev/version.map | 4 +++ 4 files changed, 164 insertions(+)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index e24ff7064c..41f67f2740 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -1010,6 +1010,54 @@ typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev, */ typedef int (*eth_dev_priv_dump_t)(struct rte_eth_dev *dev, FILE *file);
+/** + * @internal + * Dump Rx descriptor info to a file. + * + * It is used for debugging, not a dataplane API. + * + * @param dev + * Port (ethdev) handle. + * @param queue_id + * A Rx queue identifier on this port. + * @param offset + * The offset of the descriptor starting from tail. (0 is the next + * packet to be received by the driver). + * @param num + * The number of the descriptors to dump. + * @param file + * A pointer to a file for output. + * @return + * Negative errno value on error, zero on success. + */ +typedef int (*eth_rx_descriptor_dump_t)(const struct rte_eth_dev *dev, + uint16_t queue_id, uint16_t offset, + uint16_t num, FILE *file); + +/** + * @internal + * Dump Tx descriptor info to a file. + * + * This API is used for debugging, not a dataplane API. + * + * @param dev + * Port (ethdev) handle. + * @param queue_id + * A Tx queue identifier on this port. + * @param offset + * The offset of the descriptor starting from tail. (0 is the place where + * the next packet will be sent). + * @param num + * The number of the descriptors to dump. + * @param file + * A pointer to a file for output. + * @return + * Negative errno value on error, zero on success. + */ +typedef int (*eth_tx_descriptor_dump_t)(const struct rte_eth_dev *dev, + uint16_t queue_id, uint16_t offset, + uint16_t num, FILE *file); + /** * @internal A structure containing the functions exported by an Ethernet driver. */ @@ -1209,6 +1257,11 @@ struct eth_dev_ops {
/** Dump private info from device */ eth_dev_priv_dump_t eth_dev_priv_dump; + + /** Dump Rx descriptor info */ + eth_rx_descriptor_dump_t eth_rx_descriptor_dump; + /** Dump Tx descriptor info */ + eth_tx_descriptor_dump_t eth_tx_descriptor_dump; };
/** diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 25c9f0c123..b95f501b51 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -6509,6 +6509,58 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file) return eth_err(port_id, (*dev->dev_ops->eth_dev_priv_dump)(dev, file)); }
+int +rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id, + uint16_t offset, uint16_t num, FILE *file) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (queue_id >= dev->data->nb_rx_queues) { + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + return -EINVAL; + } + + if (file == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + return -EINVAL; + } + + if (*dev->dev_ops->eth_rx_descriptor_dump == NULL) + return -ENOTSUP; + + return eth_err(port_id, (*dev->dev_ops->eth_rx_descriptor_dump)(dev, + queue_id, offset, num, file)); +} + +int +rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id, + uint16_t offset, uint16_t num, FILE *file) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (queue_id >= dev->data->nb_tx_queues) { + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + return -EINVAL; + } + + if (file == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + return -EINVAL; + } + + if (*dev->dev_ops->eth_tx_descriptor_dump == NULL) + return -ENOTSUP; + + return eth_err(port_id, (*dev->dev_ops->eth_tx_descriptor_dump)(dev, + queue_id, offset, num, file)); +} + RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 082166ed42..8c894e090d 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -5213,6 +5213,61 @@ int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features); __rte_experimental int rte_eth_dev_priv_dump(uint16_t port_id, FILE *file);
+/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Dump ethdev Rx descriptor info to a file. + * + * This API is used for debugging, not a dataplane API. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param queue_id + * A Rx queue identifier on this port. + * @param offset + * The offset of the descriptor starting from tail. (0 is the next + * packet to be received by the driver). + * @param num + * The number of the descriptors to dump. + * @param file + * A pointer to a file for output. + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id, + uint16_t offset, uint16_t num, FILE *file); + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Dump ethdev Tx descriptor info to a file. + * + * This API is used for debugging, not a dataplane API. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param queue_id + * A Tx queue identifier on this port. + * @param offset + * The offset of the descriptor starting from tail. (0 is the place where + * the next packet will be sent). + * @param num + * The number of the descriptors to dump. + * @param file + * A pointer to a file for output. + * @return + * - On success, zero. + * - On failure, a negative value. + */ +__rte_experimental +int rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id, + uint16_t offset, uint16_t num, FILE *file); + + #include <rte_ethdev_core.h>
/** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index f29c60eda4..09dba86bee 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -259,6 +259,10 @@ EXPERIMENTAL {
# added in 22.03 rte_eth_dev_priv_dump; + + # added in 22.11 + rte_eth_rx_descriptor_dump; + rte_eth_tx_descriptor_dump; };
INTERNAL {
From: "Min Hu (Connor)" humin29@huawei.com
This patch supports querying the HW descriptor from the hns3 device. The HW descriptor is also called a BD (buffer descriptor), which is memory shared between software and hardware.
Signed-off-by: Min Hu (Connor) humin29@huawei.com Signed-off-by: Dongdong Liu liudongdong3@huawei.com Acked-by: Ferruh Yigit ferruh.yigit@xilinx.com --- drivers/net/hns3/hns3_dump.c | 88 +++++++++++++++++++++++++++++++ drivers/net/hns3/hns3_dump.h | 4 ++ drivers/net/hns3/hns3_ethdev.c | 2 + drivers/net/hns3/hns3_ethdev_vf.c | 2 + 4 files changed, 96 insertions(+)
diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c index cf5b500be1..1007b09bd2 100644 --- a/drivers/net/hns3/hns3_dump.c +++ b/drivers/net/hns3/hns3_dump.c @@ -11,6 +11,9 @@ #include "hns3_logs.h" #include "hns3_dump.h"
+#define HNS3_BD_DW_NUM 8 +#define HNS3_BD_ADDRESS_LAST_DW 2 + static const char * hns3_get_adapter_state_name(enum hns3_adapter_state state) { @@ -873,3 +876,88 @@ hns3_eth_dev_priv_dump(struct rte_eth_dev *dev, FILE *file)
return 0; } + +int +hns3_rx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t offset, uint16_t num, FILE *file) +{ + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct hns3_rx_queue *rxq = dev->data->rx_queues[queue_id]; + uint32_t *bd_data; + uint16_t count = 0; + uint16_t desc_id; + int i; + + if (offset >= rxq->nb_rx_desc) + return -EINVAL; + + if (num > rxq->nb_rx_desc) { + hns3_err(hw, "Invalid BD num=%u\n", num); + return -EINVAL; + } + + while (count < num) { + desc_id = (rxq->next_to_use + offset + count) % rxq->nb_rx_desc; + bd_data = (uint32_t *)(&rxq->rx_ring[desc_id]); + fprintf(file, "Rx queue id:%u BD id:%u\n", queue_id, desc_id); + for (i = 0; i < HNS3_BD_DW_NUM; i++) { + /* + * For the sake of security, first 8 bytes of BD which + * stands for physical address of packet should not be + * shown. + */ + if (i < HNS3_BD_ADDRESS_LAST_DW) { + fprintf(file, "RX BD WORD[%d]:0x%08x\n", i, 0); + continue; + } + fprintf(file, "RX BD WORD[%d]:0x%08x\n", i, + *(bd_data + i)); + } + count++; + } + + return 0; +} + +int +hns3_tx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t offset, uint16_t num, FILE *file) +{ + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct hns3_tx_queue *txq = dev->data->tx_queues[queue_id]; + uint32_t *bd_data; + uint16_t count = 0; + uint16_t desc_id; + int i; + + if (offset >= txq->nb_tx_desc) + return -EINVAL; + + if (num > txq->nb_tx_desc) { + hns3_err(hw, "Invalid BD num=%u\n", num); + return -EINVAL; + } + + while (count < num) { + desc_id = (txq->next_to_use + offset + count) % txq->nb_tx_desc; + bd_data = (uint32_t *)(&txq->tx_ring[desc_id]); + fprintf(file, "Tx queue id:%u BD id:%u\n", queue_id, desc_id); + for (i = 0; i < HNS3_BD_DW_NUM; i++) { + /* + * For the sake of security, first 8 bytes of BD which + * stands for physical address of packet should not be + * shown. 
+ */ + if (i < HNS3_BD_ADDRESS_LAST_DW) { + fprintf(file, "TX BD WORD[%d]:0x%08x\n", i, 0); + continue; + } + + fprintf(file, "TX BD WORD[%d]:0x%08x\n", i, + *(bd_data + i)); + } + count++; + } + + return 0; +} diff --git a/drivers/net/hns3/hns3_dump.h b/drivers/net/hns3/hns3_dump.h index 616cb70d6e..021ce1bbdb 100644 --- a/drivers/net/hns3/hns3_dump.h +++ b/drivers/net/hns3/hns3_dump.h @@ -7,4 +7,8 @@
int hns3_eth_dev_priv_dump(struct rte_eth_dev *dev, FILE *file);
+int hns3_rx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t offset, uint16_t num, FILE *file); +int hns3_tx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t offset, uint16_t num, FILE *file); #endif /* HNS3_DUMP_H */ diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 7330515535..f83cff4d98 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -6558,6 +6558,8 @@ static const struct eth_dev_ops hns3_eth_dev_ops = { .timesync_read_time = hns3_timesync_read_time, .timesync_write_time = hns3_timesync_write_time, .eth_dev_priv_dump = hns3_eth_dev_priv_dump, + .eth_rx_descriptor_dump = hns3_rx_descriptor_dump, + .eth_tx_descriptor_dump = hns3_tx_descriptor_dump, };
static const struct hns3_reset_ops hns3_reset_ops = { diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 6976a9f23d..1022b02697 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -2302,6 +2302,8 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = { .dev_supported_ptypes_get = hns3_dev_supported_ptypes_get, .tx_done_cleanup = hns3_tx_done_cleanup, .eth_dev_priv_dump = hns3_eth_dev_priv_dump, + .eth_rx_descriptor_dump = hns3_rx_descriptor_dump, + .eth_tx_descriptor_dump = hns3_tx_descriptor_dump, };
static const struct hns3_reset_ops hns3vf_reset_ops = {
From: Stephen Hemminger stephen@networkplumber.org
Functions like free, rte_free, and rte_mempool_free already handle NULL pointers, so the checks here are not necessary.

Remove the redundant NULL pointer checks before free functions, as found by nullfree.cocci.
Note: This patch only captures some hns3 modifications from the following patch: Fixes: 06c047b68061 ("remove unnecessary null checks")
Signed-off-by: Stephen Hemminger stephen@networkplumber.org --- drivers/net/hns3/hns3_rxtx.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 8ad40a49c7..29caaeafbd 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -86,8 +86,7 @@ hns3_rx_queue_release(void *queue) hns3_rx_queue_release_mbufs(rxq); if (rxq->mz) rte_memzone_free(rxq->mz); - if (rxq->sw_ring) - rte_free(rxq->sw_ring); + rte_free(rxq->sw_ring); rte_free(rxq); } } @@ -100,10 +99,8 @@ hns3_tx_queue_release(void *queue) hns3_tx_queue_release_mbufs(txq); if (txq->mz) rte_memzone_free(txq->mz); - if (txq->sw_ring) - rte_free(txq->sw_ring); - if (txq->free) - rte_free(txq->free); + rte_free(txq->sw_ring); + rte_free(txq->free); rte_free(txq); } }
From: Ferruh Yigit ferruh.yigit@intel.com
Multiple PMDs have dummy/noop Rx/Tx packet burst functions. These dummy functions are very simple; introduce a common function in ethdev and update the drivers to use it instead of each driver having its own.
Note: This patch only captures some hns3 modifications from the following patch: Fixes: a41f593f1bce ("ethdev: introduce generic dummy packet burst function")
Signed-off-by: Ferruh Yigit ferruh.yigit@intel.com Acked-by: Morten Brørup mb@smartsharesystems.com Acked-by: Viacheslav Ovsiienko viacheslavo@nvidia.com Acked-by: Thomas Monjalon thomas@monjalon.net --- drivers/net/hns3/hns3_rxtx.c | 18 +++++------------- drivers/net/hns3/hns3_rxtx.h | 3 --- lib/ethdev/ethdev_driver.c | 13 +++++++++++++ lib/ethdev/ethdev_driver.h | 17 +++++++++++++++++ lib/ethdev/meson.build | 1 + lib/ethdev/version.map | 1 + 6 files changed, 37 insertions(+), 16 deletions(-) create mode 100644 lib/ethdev/ethdev_driver.c
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 29caaeafbd..3c02fd54e1 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -4365,14 +4365,6 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep) return hns3_xmit_pkts; }
-uint16_t -hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused, - struct rte_mbuf **pkts __rte_unused, - uint16_t pkts_n __rte_unused) -{ - return 0; -} - static void hns3_trace_rxtx_function(struct rte_eth_dev *dev) { @@ -4416,14 +4408,14 @@ hns3_set_rxtx_function(struct rte_eth_dev *eth_dev) eth_dev->rx_pkt_burst = hns3_get_rx_function(eth_dev); eth_dev->rx_descriptor_status = hns3_dev_rx_descriptor_status; eth_dev->tx_pkt_burst = hw->set_link_down ? - hns3_dummy_rxtx_burst : + rte_eth_pkt_burst_dummy : hns3_get_tx_function(eth_dev, &prep); eth_dev->tx_pkt_prepare = prep; eth_dev->tx_descriptor_status = hns3_dev_tx_descriptor_status; hns3_trace_rxtx_function(eth_dev); } else { - eth_dev->rx_pkt_burst = hns3_dummy_rxtx_burst; - eth_dev->tx_pkt_burst = hns3_dummy_rxtx_burst; + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy; + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; eth_dev->tx_pkt_prepare = NULL; }
@@ -4637,7 +4629,7 @@ hns3_tx_done_cleanup(void *txq, uint32_t free_cnt)
if (dev->tx_pkt_burst == hns3_xmit_pkts) return hns3_tx_done_cleanup_full(q, free_cnt); - else if (dev->tx_pkt_burst == hns3_dummy_rxtx_burst) + else if (dev->tx_pkt_burst == rte_eth_pkt_burst_dummy) return 0; else return -ENOTSUP; @@ -4747,7 +4739,7 @@ hns3_enable_rxd_adv_layout(struct hns3_hw *hw) void hns3_stop_tx_datapath(struct rte_eth_dev *dev) { - dev->tx_pkt_burst = hns3_dummy_rxtx_burst; + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy; dev->tx_pkt_prepare = NULL; hns3_eth_dev_fp_ops_config(dev);
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h index ed40621b3a..e633b336b1 100644 --- a/drivers/net/hns3/hns3_rxtx.h +++ b/drivers/net/hns3/hns3_rxtx.h @@ -742,9 +742,6 @@ void hns3_init_rx_ptype_tble(struct rte_eth_dev *dev); void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev); eth_tx_burst_t hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep); -uint16_t hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused, - struct rte_mbuf **pkts __rte_unused, - uint16_t pkts_n __rte_unused);
uint32_t hns3_get_tqp_intr_reg_offset(uint16_t tqp_intr_id); void hns3_set_queue_intr_gl(struct hns3_hw *hw, uint16_t queue_id, diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c new file mode 100644 index 0000000000..fb7323f4d3 --- /dev/null +++ b/lib/ethdev/ethdev_driver.c @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2022 Intel Corporation + */ + +#include "ethdev_driver.h" + +uint16_t +rte_eth_pkt_burst_dummy(void *queue __rte_unused, + struct rte_mbuf **pkts __rte_unused, + uint16_t nb_pkts __rte_unused) +{ + return 0; +} diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index 41f67f2740..d3de203d7a 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -1497,6 +1497,23 @@ rte_eth_linkstatus_get(const struct rte_eth_dev *dev, *dst = __atomic_load_n(src, __ATOMIC_SEQ_CST); }
+/** + * @internal + * Dummy DPDK callback for Rx/Tx packet burst. + * + * @param queue + * Pointer to Rx/Tx queue + * @param pkts + * Packet array + * @param nb_pkts + * Number of packets in packet array + */ +__rte_internal +uint16_t +rte_eth_pkt_burst_dummy(void *queue __rte_unused, + struct rte_mbuf **pkts __rte_unused, + uint16_t nb_pkts __rte_unused); + /** * Allocate an unique switch domain identifier. * diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index 0205c853df..a094585bf7 100644 --- a/lib/ethdev/meson.build +++ b/lib/ethdev/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2017 Intel Corporation
sources = files( + 'ethdev_driver.c', 'ethdev_private.c', 'ethdev_profile.c', 'ethdev_trace_points.c', diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 09dba86bee..590aa5a0a6 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -286,6 +286,7 @@ INTERNAL { rte_eth_hairpin_queue_peer_bind; rte_eth_hairpin_queue_peer_unbind; rte_eth_hairpin_queue_peer_update; + rte_eth_pkt_burst_dummy; rte_eth_representor_id_get; rte_eth_switch_domain_alloc; rte_eth_switch_domain_free;
From: Josh Soref jsoref@gmail.com
The tool comes from https://github.com/jsoref
Signed-off-by: Josh Soref jsoref@gmail.com Signed-off-by: Thomas Monjalon thomas@monjalon.net --- app/proc-info/main.c | 6 +-- app/test-acl/main.c | 6 +-- .../comp_perf_test_cyclecount.c | 2 +- .../comp_perf_test_throughput.c | 2 +- .../comp_perf_test_verify.c | 2 +- app/test-compress-perf/main.c | 2 +- .../cperf_test_pmd_cyclecount.c | 2 +- app/test-eventdev/evt_options.c | 2 +- app/test-eventdev/test_order_common.c | 2 +- app/test-fib/main.c | 4 +- app/test-flow-perf/config.h | 2 +- app/test-flow-perf/main.c | 2 +- app/test-pmd/cmdline.c | 2 +- app/test-pmd/cmdline_flow.c | 6 +-- app/test-pmd/cmdline_tm.c | 4 +- app/test-pmd/csumonly.c | 2 +- app/test-pmd/parameters.c | 2 +- app/test-pmd/testpmd.c | 2 +- app/test-pmd/txonly.c | 4 +- app/test/test_barrier.c | 2 +- app/test/test_bpf.c | 4 +- app/test/test_compressdev.c | 2 +- app/test/test_cryptodev.c | 2 +- app/test/test_fib_perf.c | 2 +- app/test/test_kni.c | 4 +- app/test/test_kvargs.c | 16 ++++---- app/test/test_lpm6_data.h | 2 +- app/test/test_member.c | 2 +- app/test/test_mempool.c | 4 +- app/test/test_memzone.c | 6 +-- app/test/test_metrics.c | 2 +- app/test/test_pcapng.c | 2 +- app/test/test_power_cpufreq.c | 2 +- app/test/test_rcu_qsbr.c | 4 +- app/test/test_red.c | 8 ++-- app/test/test_security.c | 2 +- app/test/test_table_pipeline.c | 2 +- app/test/test_thash.c | 2 +- buildtools/binutils-avx512-check.py | 2 +- devtools/check-symbol-change.sh | 6 +-- .../virtio_user_for_container_networking.svg | 2 +- doc/guides/nics/af_packet.rst | 2 +- doc/guides/nics/mlx4.rst | 2 +- doc/guides/nics/mlx5.rst | 6 +-- doc/guides/prog_guide/cryptodev_lib.rst | 2 +- .../prog_guide/env_abstraction_layer.rst | 4 +- doc/guides/prog_guide/img/turbo_tb_decode.svg | 2 +- doc/guides/prog_guide/img/turbo_tb_encode.svg | 2 +- doc/guides/prog_guide/qos_framework.rst | 6 +-- doc/guides/prog_guide/rte_flow.rst | 2 +- doc/guides/rawdevs/cnxk_bphy.rst | 2 +- doc/guides/regexdevs/features_overview.rst | 2 +- 
doc/guides/rel_notes/release_16_07.rst | 2 +- doc/guides/rel_notes/release_17_08.rst | 2 +- doc/guides/rel_notes/release_2_1.rst | 2 +- doc/guides/sample_app_ug/ip_reassembly.rst | 4 +- doc/guides/sample_app_ug/l2_forward_cat.rst | 2 +- doc/guides/sample_app_ug/server_node_efd.rst | 2 +- doc/guides/sample_app_ug/skeleton.rst | 2 +- .../sample_app_ug/vm_power_management.rst | 2 +- doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 +- drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 8 ++-- drivers/baseband/null/bbdev_null.c | 2 +- .../baseband/turbo_sw/bbdev_turbo_software.c | 2 +- drivers/bus/dpaa/dpaa_bus.c | 2 +- drivers/bus/dpaa/include/fsl_qman.h | 6 +-- drivers/bus/dpaa/include/fsl_usd.h | 2 +- drivers/bus/dpaa/include/process.h | 2 +- drivers/bus/fslmc/fslmc_bus.c | 2 +- drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 2 +- drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +- .../fslmc/qbman/include/fsl_qbman_portal.h | 20 +++++----- drivers/bus/pci/linux/pci_vfio.c | 2 +- drivers/bus/vdev/rte_bus_vdev.h | 2 +- drivers/bus/vmbus/vmbus_common.c | 2 +- drivers/common/cnxk/roc_bphy_cgx.c | 2 +- drivers/common/cnxk/roc_nix_bpf.c | 2 +- drivers/common/cnxk/roc_nix_tm_ops.c | 2 +- drivers/common/cnxk/roc_npc_mcam.c | 2 +- drivers/common/cnxk/roc_npc_priv.h | 2 +- drivers/common/cpt/cpt_ucode.h | 4 +- drivers/common/cpt/cpt_ucode_asym.h | 2 +- drivers/common/dpaax/caamflib/desc/algo.h | 2 +- drivers/common/dpaax/caamflib/desc/sdap.h | 6 +-- drivers/common/dpaax/dpaax_iova_table.c | 2 +- drivers/common/iavf/iavf_type.h | 2 +- drivers/common/iavf/virtchnl.h | 2 +- drivers/common/mlx5/mlx5_common.c | 2 +- drivers/common/mlx5/mlx5_common_mr.c | 2 +- drivers/common/mlx5/mlx5_devx_cmds.c | 2 +- drivers/common/mlx5/mlx5_malloc.c | 4 +- drivers/common/mlx5/mlx5_malloc.h | 2 +- drivers/common/mlx5/mlx5_prm.h | 2 +- drivers/common/mlx5/windows/mlx5_common_os.c | 4 +- drivers/common/mlx5/windows/mlx5_common_os.h | 2 +- .../qat/qat_adf/adf_transport_access_macros.h | 2 +- 
drivers/common/sfc_efx/efsys.h | 2 +- drivers/compress/octeontx/include/zip_regs.h | 4 +- drivers/compress/octeontx/otx_zip.h | 2 +- drivers/compress/qat/qat_comp_pmd.c | 2 +- drivers/crypto/bcmfs/bcmfs_device.h | 2 +- drivers/crypto/bcmfs/bcmfs_qp.c | 2 +- drivers/crypto/bcmfs/bcmfs_sym_defs.h | 6 +-- drivers/crypto/bcmfs/bcmfs_sym_engine.h | 2 +- drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 2 +- drivers/crypto/caam_jr/caam_jr_hw_specific.h | 4 +- drivers/crypto/caam_jr/caam_jr_pvt.h | 4 +- drivers/crypto/caam_jr/caam_jr_uio.c | 2 +- drivers/crypto/ccp/ccp_crypto.c | 2 +- drivers/crypto/ccp/ccp_crypto.h | 2 +- drivers/crypto/ccp/ccp_dev.h | 2 +- drivers/crypto/dpaa_sec/dpaa_sec.c | 2 +- .../crypto/octeontx/otx_cryptodev_hw_access.c | 2 +- drivers/crypto/octeontx/otx_cryptodev_mbox.h | 2 +- drivers/crypto/octeontx/otx_cryptodev_ops.c | 2 +- drivers/crypto/qat/qat_asym.c | 2 +- drivers/crypto/qat/qat_sym.c | 2 +- drivers/crypto/virtio/virtqueue.h | 2 +- drivers/dma/skeleton/skeleton_dmadev.c | 2 +- drivers/event/cnxk/cnxk_eventdev_selftest.c | 4 +- drivers/event/dlb2/dlb2.c | 2 +- drivers/event/dlb2/dlb2_priv.h | 2 +- drivers/event/dlb2/dlb2_selftest.c | 2 +- drivers/event/dlb2/rte_pmd_dlb2.h | 2 +- drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 2 +- drivers/event/dsw/dsw_evdev.h | 4 +- drivers/event/dsw/dsw_event.c | 4 +- drivers/event/octeontx/ssovf_evdev.h | 2 +- drivers/event/octeontx/ssovf_evdev_selftest.c | 2 +- drivers/event/octeontx2/otx2_evdev_selftest.c | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- drivers/event/opdl/opdl_evdev.c | 2 +- drivers/event/opdl/opdl_test.c | 2 +- drivers/event/sw/sw_evdev.h | 2 +- drivers/event/sw/sw_evdev_selftest.c | 2 +- drivers/mempool/dpaa/dpaa_mempool.c | 2 +- drivers/mempool/octeontx/octeontx_fpavf.c | 4 +- drivers/net/ark/ark_global.h | 2 +- drivers/net/atlantic/atl_ethdev.c | 2 +- drivers/net/atlantic/atl_rxtx.c | 2 +- drivers/net/atlantic/hw_atl/hw_atl_b0.c | 2 +- drivers/net/axgbe/axgbe_dev.c | 2 +- 
drivers/net/axgbe/axgbe_ethdev.c | 2 +- drivers/net/axgbe/axgbe_ethdev.h | 2 +- drivers/net/axgbe/axgbe_phy_impl.c | 4 +- drivers/net/axgbe/axgbe_rxtx_vec_sse.c | 2 +- drivers/net/bnx2x/bnx2x.c | 38 +++++++++---------- drivers/net/bnx2x/bnx2x.h | 10 ++--- drivers/net/bnx2x/bnx2x_stats.c | 2 +- drivers/net/bnx2x/bnx2x_stats.h | 4 +- drivers/net/bnx2x/bnx2x_vfpf.c | 2 +- drivers/net/bnx2x/bnx2x_vfpf.h | 2 +- drivers/net/bnx2x/ecore_fw_defs.h | 2 +- drivers/net/bnx2x/ecore_hsi.h | 26 ++++++------- drivers/net/bnx2x/ecore_init_ops.h | 6 +-- drivers/net/bnx2x/ecore_reg.h | 28 +++++++------- drivers/net/bnx2x/ecore_sp.c | 6 +-- drivers/net/bnx2x/ecore_sp.h | 6 +-- drivers/net/bnx2x/elink.c | 20 +++++----- drivers/net/bnxt/bnxt_hwrm.c | 2 +- drivers/net/bnxt/tf_core/tfp.c | 2 +- drivers/net/bnxt/tf_core/tfp.h | 2 +- drivers/net/bonding/eth_bond_8023ad_private.h | 2 +- drivers/net/bonding/eth_bond_private.h | 2 +- drivers/net/bonding/rte_eth_bond_8023ad.c | 20 +++++----- drivers/net/bonding/rte_eth_bond_8023ad.h | 4 +- drivers/net/bonding/rte_eth_bond_alb.h | 2 +- drivers/net/bonding/rte_eth_bond_api.c | 4 +- drivers/net/cnxk/cn10k_ethdev.h | 2 +- drivers/net/cnxk/cn10k_tx.h | 6 +-- drivers/net/cnxk/cn9k_tx.h | 6 +-- drivers/net/cnxk/cnxk_ptp.c | 2 +- drivers/net/cxgbe/cxgbe_flow.c | 2 +- drivers/net/cxgbe/cxgbevf_main.c | 2 +- drivers/net/cxgbe/sge.c | 8 ++-- drivers/net/dpaa/dpaa_ethdev.c | 6 +-- drivers/net/dpaa/dpaa_rxtx.c | 4 +- drivers/net/dpaa/fmlib/fm_ext.h | 2 +- drivers/net/dpaa/fmlib/fm_pcd_ext.h | 8 ++-- drivers/net/dpaa/fmlib/fm_port_ext.h | 14 +++---- drivers/net/dpaa2/dpaa2_ethdev.c | 14 +++---- drivers/net/dpaa2/dpaa2_ethdev.h | 2 +- drivers/net/dpaa2/dpaa2_flow.c | 8 ++-- drivers/net/dpaa2/dpaa2_mux.c | 2 +- drivers/net/dpaa2/dpaa2_rxtx.c | 6 +-- drivers/net/dpaa2/mc/fsl_dpni.h | 10 ++--- drivers/net/e1000/e1000_ethdev.h | 4 +- drivers/net/e1000/em_ethdev.c | 10 ++--- drivers/net/e1000/em_rxtx.c | 6 +-- drivers/net/e1000/igb_ethdev.c | 18 ++++----- 
drivers/net/e1000/igb_flow.c | 4 +- drivers/net/e1000/igb_pf.c | 2 +- drivers/net/e1000/igb_rxtx.c | 14 +++---- drivers/net/ena/ena_ethdev.c | 2 +- drivers/net/ena/ena_ethdev.h | 2 +- drivers/net/enetfec/enet_regs.h | 2 +- drivers/net/enic/enic_flow.c | 18 ++++----- drivers/net/enic/enic_fm_flow.c | 10 ++--- drivers/net/enic/enic_main.c | 2 +- drivers/net/enic/enic_rxtx.c | 2 +- drivers/net/fm10k/fm10k.h | 2 +- drivers/net/fm10k/fm10k_ethdev.c | 12 +++--- drivers/net/fm10k/fm10k_rxtx_vec.c | 10 ++--- drivers/net/hinic/hinic_pmd_ethdev.c | 4 +- drivers/net/hinic/hinic_pmd_ethdev.h | 2 +- drivers/net/hinic/hinic_pmd_flow.c | 4 +- drivers/net/hinic/hinic_pmd_tx.c | 2 +- drivers/net/hns3/hns3_cmd.c | 4 +- drivers/net/hns3/hns3_common.c | 2 +- drivers/net/hns3/hns3_dcb.c | 10 ++--- drivers/net/hns3/hns3_ethdev.c | 14 +++---- drivers/net/hns3/hns3_ethdev.h | 8 ++-- drivers/net/hns3/hns3_ethdev_vf.c | 4 +- drivers/net/hns3/hns3_fdir.h | 2 +- drivers/net/hns3/hns3_flow.c | 12 +++--- drivers/net/hns3/hns3_mbx.c | 4 +- drivers/net/hns3/hns3_mbx.h | 2 +- drivers/net/hns3/hns3_rss.h | 2 +- drivers/net/hns3/hns3_rxtx.c | 16 ++++---- drivers/net/hns3/hns3_rxtx.h | 2 +- drivers/net/hns3/hns3_stats.c | 2 +- drivers/net/i40e/i40e_ethdev.c | 12 +++--- drivers/net/i40e/i40e_ethdev.h | 10 ++--- drivers/net/i40e/i40e_fdir.c | 10 ++--- drivers/net/i40e/i40e_flow.c | 2 +- drivers/net/i40e/i40e_pf.c | 4 +- drivers/net/i40e/i40e_rxtx.c | 20 +++++----- drivers/net/i40e/i40e_rxtx_vec_altivec.c | 2 +- drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +- drivers/net/i40e/i40e_rxtx_vec_sse.c | 6 +-- drivers/net/i40e/rte_pmd_i40e.c | 2 +- drivers/net/iavf/iavf_ethdev.c | 6 +-- drivers/net/iavf/iavf_ipsec_crypto.c | 14 +++---- drivers/net/iavf/iavf_ipsec_crypto.h | 2 +- drivers/net/iavf/iavf_rxtx.c | 4 +- drivers/net/iavf/iavf_rxtx_vec_sse.c | 4 +- drivers/net/iavf/iavf_vchnl.c | 4 +- drivers/net/ice/ice_dcf.c | 2 +- drivers/net/ice/ice_dcf_ethdev.c | 2 +- drivers/net/ice/ice_ethdev.c | 12 +++--- 
drivers/net/ice/ice_rxtx.c | 10 ++--- drivers/net/ice/ice_rxtx_vec_sse.c | 4 +- drivers/net/igc/igc_filter.c | 2 +- drivers/net/igc/igc_txrx.c | 4 +- drivers/net/ionic/ionic_if.h | 6 +-- drivers/net/ipn3ke/ipn3ke_ethdev.c | 2 +- drivers/net/ipn3ke/ipn3ke_ethdev.h | 4 +- drivers/net/ipn3ke/ipn3ke_flow.c | 2 +- drivers/net/ipn3ke/ipn3ke_representor.c | 12 +++--- drivers/net/ipn3ke/meson.build | 2 +- drivers/net/ixgbe/ixgbe_bypass.c | 2 +- drivers/net/ixgbe/ixgbe_bypass_api.h | 4 +- drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++----- drivers/net/ixgbe/ixgbe_ethdev.h | 2 +- drivers/net/ixgbe/ixgbe_fdir.c | 2 +- drivers/net/ixgbe/ixgbe_flow.c | 4 +- drivers/net/ixgbe/ixgbe_ipsec.c | 2 +- drivers/net/ixgbe/ixgbe_pf.c | 2 +- drivers/net/ixgbe/ixgbe_rxtx.c | 10 ++--- drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +- drivers/net/memif/memif_socket.c | 2 +- drivers/net/memif/rte_eth_memif.c | 2 +- drivers/net/mlx4/mlx4.h | 2 +- drivers/net/mlx4/mlx4_ethdev.c | 2 +- drivers/net/mlx5/linux/mlx5_os.c | 8 ++-- drivers/net/mlx5/mlx5.c | 10 ++--- drivers/net/mlx5/mlx5.h | 8 ++-- drivers/net/mlx5/mlx5_flow.c | 20 +++++----- drivers/net/mlx5/mlx5_flow.h | 6 +-- drivers/net/mlx5/mlx5_flow_dv.c | 14 +++---- drivers/net/mlx5/mlx5_flow_flex.c | 4 +- drivers/net/mlx5/mlx5_flow_meter.c | 10 ++--- drivers/net/mlx5/mlx5_rx.c | 2 +- drivers/net/mlx5/mlx5_rxq.c | 4 +- drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 2 +- drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 2 +- drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 2 +- drivers/net/mlx5/mlx5_tx.c | 2 +- drivers/net/mlx5/mlx5_utils.h | 2 +- drivers/net/mlx5/windows/mlx5_flow_os.c | 2 +- drivers/net/mlx5/windows/mlx5_os.c | 2 +- drivers/net/mvneta/mvneta_ethdev.c | 2 +- drivers/net/mvpp2/mrvl_ethdev.c | 2 +- drivers/net/mvpp2/mrvl_qos.c | 4 +- drivers/net/netvsc/hn_nvs.c | 2 +- drivers/net/netvsc/hn_rxtx.c | 4 +- drivers/net/netvsc/hn_vf.c | 2 +- .../net/nfp/nfpcore/nfp-common/nfp_resid.h | 6 +-- drivers/net/nfp/nfpcore/nfp_cppcore.c | 2 +- 
drivers/net/nfp/nfpcore/nfp_nsp.h | 2 +- drivers/net/nfp/nfpcore/nfp_resource.c | 2 +- drivers/net/nfp/nfpcore/nfp_rtsym.c | 2 +- drivers/net/ngbe/ngbe_ethdev.c | 6 +-- drivers/net/ngbe/ngbe_pf.c | 2 +- drivers/net/octeontx/octeontx_ethdev.c | 2 +- drivers/net/octeontx2/otx2_ethdev_irq.c | 2 +- drivers/net/octeontx2/otx2_ptp.c | 2 +- drivers/net/octeontx2/otx2_tx.h | 4 +- drivers/net/octeontx2/otx2_vlan.c | 2 +- drivers/net/octeontx_ep/otx2_ep_vf.c | 2 +- drivers/net/octeontx_ep/otx_ep_vf.c | 2 +- drivers/net/pfe/pfe_ethdev.c | 2 +- drivers/net/pfe/pfe_hal.c | 2 +- drivers/net/pfe/pfe_hif.c | 4 +- drivers/net/pfe/pfe_hif.h | 2 +- drivers/net/pfe/pfe_hif_lib.c | 8 ++-- drivers/net/qede/qede_debug.c | 4 +- drivers/net/qede/qede_ethdev.c | 2 +- drivers/net/qede/qede_rxtx.c | 12 +++--- drivers/net/qede/qede_rxtx.h | 2 +- drivers/net/sfc/sfc.c | 2 +- drivers/net/sfc/sfc_dp.c | 2 +- drivers/net/sfc/sfc_dp_rx.h | 4 +- drivers/net/sfc/sfc_ef100.h | 2 +- drivers/net/sfc/sfc_ef100_rx.c | 2 +- drivers/net/sfc/sfc_ef10_essb_rx.c | 2 +- drivers/net/sfc/sfc_ef10_rx_ev.h | 2 +- drivers/net/sfc/sfc_intr.c | 2 +- drivers/net/sfc/sfc_rx.c | 6 +-- drivers/net/sfc/sfc_tx.c | 2 +- drivers/net/softnic/rte_eth_softnic_flow.c | 2 +- drivers/net/tap/rte_eth_tap.c | 2 +- drivers/net/tap/tap_bpf_api.c | 4 +- drivers/net/tap/tap_flow.c | 4 +- drivers/net/thunderx/nicvf_svf.c | 2 +- drivers/net/txgbe/txgbe_ethdev.c | 6 +-- drivers/net/txgbe/txgbe_ethdev_vf.c | 6 +-- drivers/net/txgbe/txgbe_ipsec.c | 2 +- drivers/net/txgbe/txgbe_pf.c | 2 +- drivers/net/virtio/virtio_ethdev.c | 4 +- drivers/net/virtio/virtio_pci.c | 2 +- drivers/net/virtio/virtio_rxtx.c | 2 +- drivers/net/virtio/virtio_rxtx_packed_avx.h | 2 +- drivers/net/virtio/virtqueue.c | 2 +- drivers/net/virtio/virtqueue.h | 4 +- drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 2 +- drivers/raw/dpaa2_qdma/dpaa2_qdma.h | 4 +- drivers/raw/ifpga/ifpga_rawdev.c | 10 ++--- drivers/raw/ntb/ntb.h | 2 +- drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 2 +- 
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 2 +-
 examples/bbdev_app/main.c | 2 +-
 examples/bond/main.c | 4 +-
 examples/dma/dmafwd.c | 2 +-
 examples/ethtool/lib/rte_ethtool.c | 2 +-
 examples/ethtool/lib/rte_ethtool.h | 4 +-
 examples/ip_reassembly/main.c | 8 ++--
 examples/ipsec-secgw/event_helper.c | 2 +-
 examples/ipsec-secgw/ipsec-secgw.c | 14 +++----
 examples/ipsec-secgw/sa.c | 6 +--
 examples/ipsec-secgw/sp4.c | 2 +-
 examples/ipsec-secgw/sp6.c | 2 +-
 examples/ipsec-secgw/test/common_defs.sh | 4 +-
 examples/kni/main.c | 2 +-
 examples/l2fwd-cat/l2fwd-cat.c | 2 +-
 examples/l2fwd-event/l2fwd_event_generic.c | 2 +-
 .../l2fwd-event/l2fwd_event_internal_port.c | 2 +-
 examples/l2fwd-jobstats/main.c | 2 +-
 examples/l3fwd-acl/main.c | 6 +--
 examples/l3fwd-power/main.c | 4 +-
 examples/l3fwd/l3fwd_common.h | 4 +-
 examples/l3fwd/l3fwd_neon.h | 2 +-
 examples/l3fwd/l3fwd_sse.h | 2 +-
 examples/multi_process/hotplug_mp/commands.c | 2 +-
 examples/multi_process/simple_mp/main.c | 2 +-
 examples/multi_process/symmetric_mp/main.c | 2 +-
 examples/ntb/ntb_fwd.c | 2 +-
 examples/packet_ordering/main.c | 2 +-
 examples/performance-thread/common/lthread.c | 6 +--
 .../performance-thread/common/lthread_diag.c | 2 +-
 .../performance-thread/common/lthread_int.h | 2 +-
 .../performance-thread/common/lthread_tls.c | 2 +-
 .../performance-thread/l3fwd-thread/main.c | 12 +++---
 .../pthread_shim/pthread_shim.h | 2 +-
 examples/pipeline/examples/registers.spec | 2 +-
 examples/qos_sched/cmdline.c | 2 +-
 examples/server_node_efd/node/node.c | 2 +-
 examples/skeleton/basicfwd.c | 2 +-
 examples/vhost/main.c | 10 ++---
 examples/vm_power_manager/channel_monitor.c | 2 +-
 examples/vm_power_manager/power_manager.h | 2 +-
 examples/vmdq/main.c | 2 +-
 kernel/linux/kni/kni_fifo.h | 2 +-
 lib/acl/acl_bld.c | 2 +-
 lib/acl/acl_run_altivec.h | 2 +-
 lib/acl/acl_run_avx512.c | 2 +-
 lib/acl/acl_run_avx512x16.h | 14 +++----
 lib/acl/acl_run_avx512x8.h | 12 +++---
 lib/bpf/bpf_convert.c | 4 +-
 lib/dmadev/rte_dmadev.h | 4 +-
 lib/eal/arm/include/rte_cycles_32.h | 2 +-
 lib/eal/freebsd/eal_interrupts.c | 4 +-
 lib/eal/include/generic/rte_pflock.h | 2 +-
 lib/eal/include/rte_malloc.h | 4 +-
 lib/eal/linux/eal_interrupts.c | 4 +-
 lib/eal/linux/eal_vfio.h | 2 +-
 lib/eal/windows/eal_windows.h | 2 +-
 lib/eal/windows/include/dirent.h | 4 +-
 lib/eal/windows/include/fnmatch.h | 4 +-
 lib/eal/x86/include/rte_atomic.h | 2 +-
 lib/eventdev/rte_event_eth_rx_adapter.c | 6 +--
 lib/fib/rte_fib.c | 6 +--
 lib/fib/rte_fib.h | 4 +-
 lib/fib/rte_fib6.c | 6 +--
 lib/fib/rte_fib6.h | 4 +-
 lib/ipsec/ipsec_telemetry.c | 2 +-
 lib/ipsec/rte_ipsec_sad.h | 2 +-
 lib/ipsec/sa.c | 2 +-
 lib/mbuf/rte_mbuf_core.h | 2 +-
 lib/meson.build | 2 +-
 lib/net/rte_l2tpv2.h | 4 +-
 lib/pipeline/rte_swx_ctl.h | 4 +-
 lib/pipeline/rte_swx_pipeline_internal.h | 4 +-
 lib/pipeline/rte_swx_pipeline_spec.c | 2 +-
 lib/power/power_cppc_cpufreq.c | 2 +-
 lib/regexdev/rte_regexdev.h | 6 +--
 lib/ring/rte_ring_core.h | 2 +-
 lib/sched/rte_pie.h | 6 +--
 lib/sched/rte_red.h | 4 +-
 lib/sched/rte_sched.c | 2 +-
 lib/sched/rte_sched.h | 2 +-
 lib/table/rte_swx_table.h | 2 +-
 lib/table/rte_swx_table_selector.h | 2 +-
 lib/telemetry/telemetry.c | 2 +-
 lib/telemetry/telemetry_json.h | 2 +-
 lib/vhost/vhost_user.c | 4 +-
 426 files changed, 869 insertions(+), 869 deletions(-)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index fbc1715ce9..accb5e716d 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -637,7 +637,7 @@ metrics_display(int port_id)
names = rte_malloc(NULL, sizeof(struct rte_metric_name) * len, 0); if (names == NULL) { - printf("Cannot allocate memory for metrcis names\n"); + printf("Cannot allocate memory for metrics names\n"); rte_free(metrics); return; } @@ -1135,7 +1135,7 @@ show_tm(void) caplevel.n_nodes_max, caplevel.n_nodes_nonleaf_max, caplevel.n_nodes_leaf_max); - printf("\t -- indetical: non leaf %u leaf %u\n", + printf("\t -- identical: non leaf %u leaf %u\n", caplevel.non_leaf_nodes_identical, caplevel.leaf_nodes_identical);
@@ -1289,7 +1289,7 @@ show_ring(char *name) printf(" - Name (%s) on socket (%d)\n" " - flags:\n" "\t -- Single Producer Enqueue (%u)\n" - "\t -- Single Consmer Dequeue (%u)\n", + "\t -- Single Consumer Dequeue (%u)\n", ptr->name, ptr->memzone->socket_id, ptr->flags & RING_F_SP_ENQ, diff --git a/app/test-acl/main.c b/app/test-acl/main.c index c2de18770d..06e3847ab9 100644 --- a/app/test-acl/main.c +++ b/app/test-acl/main.c @@ -386,8 +386,8 @@ parse_cb_ipv4_trace(char *str, struct ipv4_5tuple *v) }
/* - * Parses IPV6 address, exepcts the following format: - * XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX (where X - is a hexedecimal digit). + * Parse IPv6 address, expects the following format: + * XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX (where X is a hexadecimal digit). */ static int parse_ipv6_addr(const char *in, const char **end, uint32_t v[IPV6_ADDR_U32], @@ -994,7 +994,7 @@ print_usage(const char *prgname) "should be either 1 or multiple of %zu, " "but not greater then %u]\n" "[--" OPT_MAX_SIZE - "=<size limit (in bytes) for runtime ACL strucutures> " + "=<size limit (in bytes) for runtime ACL structures> " "leave 0 for default behaviour]\n" "[--" OPT_ITER_NUM "=<number of iterations to perform>]\n" "[--" OPT_VERBOSE "=<verbose level>]\n" diff --git a/app/test-compress-perf/comp_perf_test_cyclecount.c b/app/test-compress-perf/comp_perf_test_cyclecount.c index da55b02b74..1d8e5fe6c2 100644 --- a/app/test-compress-perf/comp_perf_test_cyclecount.c +++ b/app/test-compress-perf/comp_perf_test_cyclecount.c @@ -180,7 +180,7 @@ main_loop(struct cperf_cyclecount_ctx *ctx, enum rte_comp_xform_type type)
if (ops == NULL) { RTE_LOG(ERR, USER1, - "Can't allocate memory for ops strucures\n"); + "Can't allocate memory for ops structures\n"); return -1; }
diff --git a/app/test-compress-perf/comp_perf_test_throughput.c b/app/test-compress-perf/comp_perf_test_throughput.c
index d3dff070b0..4569599eb9 100644
--- a/app/test-compress-perf/comp_perf_test_throughput.c
+++ b/app/test-compress-perf/comp_perf_test_throughput.c
@@ -72,7 +72,7 @@ main_loop(struct cperf_benchmark_ctx *ctx, enum rte_comp_xform_type type)
if (ops == NULL) { RTE_LOG(ERR, USER1, - "Can't allocate memory for ops strucures\n"); + "Can't allocate memory for ops structures\n"); return -1; }
diff --git a/app/test-compress-perf/comp_perf_test_verify.c b/app/test-compress-perf/comp_perf_test_verify.c
index f6e21368e8..7d06029488 100644
--- a/app/test-compress-perf/comp_perf_test_verify.c
+++ b/app/test-compress-perf/comp_perf_test_verify.c
@@ -75,7 +75,7 @@ main_loop(struct cperf_verify_ctx *ctx, enum rte_comp_xform_type type)
if (ops == NULL) { RTE_LOG(ERR, USER1, - "Can't allocate memory for ops strucures\n"); + "Can't allocate memory for ops structures\n"); return -1; }
diff --git a/app/test-compress-perf/main.c b/app/test-compress-perf/main.c
index cc9951a9b1..6ff6a2f04a 100644
--- a/app/test-compress-perf/main.c
+++ b/app/test-compress-perf/main.c
@@ -67,7 +67,7 @@ comp_perf_check_capabilities(struct comp_test_data *test_data, uint8_t cdev_id)
uint64_t comp_flags = cap->comp_feature_flags;
- /* Huffman enconding */ + /* Huffman encoding */ if (test_data->huffman_enc == RTE_COMP_HUFFMAN_FIXED && (comp_flags & RTE_COMP_FF_HUFFMAN_FIXED) == 0) { RTE_LOG(ERR, USER1, diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c index ba1f104f72..5842f29d43 100644 --- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c +++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c @@ -334,7 +334,7 @@ pmd_cyclecount_bench_burst_sz( * queue, so we never get any failed enqs unless the driver won't accept * the exact number of descriptors we requested, or the driver won't * wrap around the end of the TX ring. However, since we're only - * dequeueing once we've filled up the queue, we have to benchmark it + * dequeuing once we've filled up the queue, we have to benchmark it * piecemeal and then average out the results. */ cur_op = 0; diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c index 753a7dbd7d..4ae44801da 100644 --- a/app/test-eventdev/evt_options.c +++ b/app/test-eventdev/evt_options.c @@ -336,7 +336,7 @@ usage(char *program) "\t--deq_tmo_nsec : global dequeue timeout\n" "\t--prod_type_ethdev : use ethernet device as producer.\n" "\t--prod_type_timerdev : use event timer device as producer.\n" - "\t expity_nsec would be the timeout\n" + "\t expiry_nsec would be the timeout\n" "\t in ns.\n" "\t--prod_type_timerdev_burst : use timer device as producer\n" "\t burst mode.\n" diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c index ff7813f9c2..603e7c9178 100644 --- a/app/test-eventdev/test_order_common.c +++ b/app/test-eventdev/test_order_common.c @@ -253,7 +253,7 @@ void order_opt_dump(struct evt_options *opt) { evt_dump_producer_lcores(opt); - evt_dump("nb_wrker_lcores", "%d", evt_nr_active_lcores(opt->wlcores)); + evt_dump("nb_worker_lcores", "%d", evt_nr_active_lcores(opt->wlcores)); evt_dump_worker_lcores(opt); evt_dump("nb_evdev_ports", "%d", 
order_nb_event_ports(opt)); } diff --git a/app/test-fib/main.c b/app/test-fib/main.c index ecd420116a..622703dce8 100644 --- a/app/test-fib/main.c +++ b/app/test-fib/main.c @@ -624,7 +624,7 @@ print_usage(void) "(if -f is not specified)>]\n" "[-r <percentage ratio of random ip's to lookup" "(if -t is not specified)>]\n" - "[-c <do comarison with LPM library>]\n" + "[-c <do comparison with LPM library>]\n" "[-6 <do tests with ipv6 (default ipv4)>]\n" "[-s <shuffle randomly generated routes>]\n" "[-a <check nexthops for all ipv4 address space" @@ -641,7 +641,7 @@ print_usage(void) "[-g <number of tbl8's for dir24_8 or trie FIBs>]\n" "[-w <path to the file to dump routing table>]\n" "[-u <path to the file to dump ip's for lookup>]\n" - "[-v <type of loookup function:" + "[-v <type of lookup function:" "\ts1, s2, s3 (3 types of scalar), v (vector) -" " for DIR24_8 based FIB\n" "\ts, v - for TRIE based ipv6 FIB>]\n", diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h index 0db2254bd1..29b63298e0 100644 --- a/app/test-flow-perf/config.h +++ b/app/test-flow-perf/config.h @@ -28,7 +28,7 @@ #define PORT_ID_DST 1 #define TEID_VALUE 1
-/* Flow items/acctions max size */ +/* Flow items/actions max size */ #define MAX_ITEMS_NUM 32 #define MAX_ACTIONS_NUM 32 #define MAX_ATTRS_NUM 16 diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c index 11f1ee0e1e..56d43734e3 100644 --- a/app/test-flow-perf/main.c +++ b/app/test-flow-perf/main.c @@ -1519,7 +1519,7 @@ dump_used_cpu_time(const char *item, * threads time. * * Throughput: total count of rte rules divided - * over the average of the time cosumed by all + * over the average of the time consumed by all * threads time. */ double insertion_latency_time; diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 1f9fd61394..26d95e64e0 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -561,7 +561,7 @@ static void cmd_help_long_parsed(void *parsed_result, " Set the option to enable display of RX and TX bursts.\n"
"set port (port_id) vf (vf_id) rx|tx on|off\n" - " Enable/Disable a VF receive/tranmit from a port\n\n" + " Enable/Disable a VF receive/transmit from a port\n\n"
"set port (port_id) vf (vf_id) rxmode (AUPE|ROPE|BAM" "|MPE) (on|off)\n" diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index bbe3dc0115..5c2bba48ad 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -2162,7 +2162,7 @@ static const struct token token_list[] = { }, [COMMON_POLICY_ID] = { .name = "{policy_id}", - .type = "POLCIY_ID", + .type = "POLICY_ID", .help = "policy id", .call = parse_int, .comp = comp_none, @@ -2370,7 +2370,7 @@ static const struct token token_list[] = { }, [TUNNEL_DESTROY] = { .name = "destroy", - .help = "destroy tunel", + .help = "destroy tunnel", .next = NEXT(NEXT_ENTRY(TUNNEL_DESTROY_ID), NEXT_ENTRY(COMMON_PORT_ID)), .args = ARGS(ARGS_ENTRY(struct buffer, port)), @@ -2378,7 +2378,7 @@ static const struct token token_list[] = { }, [TUNNEL_DESTROY_ID] = { .name = "id", - .help = "tunnel identifier to testroy", + .help = "tunnel identifier to destroy", .next = NEXT(NEXT_ENTRY(COMMON_UNSIGNED)), .args = ARGS(ARGS_ENTRY(struct tunnel_ops, id)), .call = parse_tunnel, diff --git a/app/test-pmd/cmdline_tm.c b/app/test-pmd/cmdline_tm.c index bfbd43ca9b..c058b8946e 100644 --- a/app/test-pmd/cmdline_tm.c +++ b/app/test-pmd/cmdline_tm.c @@ -69,7 +69,7 @@ print_err_msg(struct rte_tm_error *error) [RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS] = "num shared shapers field (node params)", [RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE] - = "wfq weght mode field (node params)", + = "wfq weight mode field (node params)", [RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES] = "num strict priorities field (node params)", [RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN] @@ -479,7 +479,7 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, cmdline_parse_inst_t cmd_show_port_tm_level_cap = { .f = cmd_show_port_tm_level_cap_parsed, .data = NULL, - .help_str = "Show Port TM Hierarhical level Capabilities", + .help_str = "Show port TM hierarchical level capabilities", .tokens = { (void 
*)&cmd_show_port_tm_level_cap_show, (void *)&cmd_show_port_tm_level_cap_port, diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c index 2aeea243b6..0177284d9c 100644 --- a/app/test-pmd/csumonly.c +++ b/app/test-pmd/csumonly.c @@ -796,7 +796,7 @@ pkt_copy_split(const struct rte_mbuf *pkt) * * The testpmd command line for this forward engine sets the flags * TESTPMD_TX_OFFLOAD_* in ports[tx_port].tx_ol_flags. They control - * wether a checksum must be calculated in software or in hardware. The + * whether a checksum must be calculated in software or in hardware. The * IP, UDP, TCP and SCTP flags always concern the inner layer. The * OUTER_IP is only useful for tunnel packets. */ diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index 24e03e769c..435687fa6d 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -113,7 +113,7 @@ usage(char* progname) "If the drop-queue doesn't exist, the packet is dropped. " "By default drop-queue=127.\n"); #ifdef RTE_LIB_LATENCYSTATS - printf(" --latencystats=N: enable latency and jitter statistcs " + printf(" --latencystats=N: enable latency and jitter statistics " "monitoring on forwarding lcore id N.\n"); #endif printf(" --disable-crc-strip: disable CRC stripping by hardware.\n"); diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 66d5167f57..2be92af9f8 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -453,7 +453,7 @@ uint32_t bypass_timeout = RTE_PMD_IXGBE_BYPASS_TMT_OFF; uint8_t latencystats_enabled;
/* - * Lcore ID to serive latency statistics. + * Lcore ID to service latency statistics. */ lcoreid_t latencystats_lcore_id = -1;
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index b8497e733d..e8c0c7b926 100644 --- a/app/test-pmd/txonly.c +++ b/app/test-pmd/txonly.c @@ -174,14 +174,14 @@ update_pkt_header(struct rte_mbuf *pkt, uint32_t total_pkt_len) sizeof(struct rte_ether_hdr) + sizeof(struct rte_ipv4_hdr) + sizeof(struct rte_udp_hdr))); - /* updata udp pkt length */ + /* update UDP packet length */ udp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_udp_hdr *, sizeof(struct rte_ether_hdr) + sizeof(struct rte_ipv4_hdr)); pkt_len = (uint16_t) (pkt_data_len + sizeof(struct rte_udp_hdr)); udp_hdr->dgram_len = RTE_CPU_TO_BE_16(pkt_len);
- /* updata ip pkt length and csum */ + /* update IP packet length and checksum */ ip_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *, sizeof(struct rte_ether_hdr)); ip_hdr->hdr_checksum = 0; diff --git a/app/test/test_barrier.c b/app/test/test_barrier.c index 6d6d48749c..ec69af25bf 100644 --- a/app/test/test_barrier.c +++ b/app/test/test_barrier.c @@ -11,7 +11,7 @@ * (https://en.wikipedia.org/wiki/Peterson%27s_algorithm) * for two execution units to make sure that rte_smp_mb() prevents * store-load reordering to happen. - * Also when executed on a single lcore could be used as a approxiamate + * Also when executed on a single lcore could be used as a approximate * estimation of number of cycles particular implementation of rte_smp_mb() * will take. */ diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c index 46bcb51f86..2d755a872f 100644 --- a/app/test/test_bpf.c +++ b/app/test/test_bpf.c @@ -23,7 +23,7 @@ /* * Basic functional tests for librte_bpf. * The main procedure - load eBPF program, execute it and - * compare restuls with expected values. + * compare results with expected values. */
struct dummy_offset { @@ -2707,7 +2707,7 @@ test_ld_mbuf1_check(uint64_t rc, const void *arg) }
/* - * same as ld_mbuf1, but then trancate the mbuf by 1B, + * same as ld_mbuf1, but then truncate the mbuf by 1B, * so load of last 4B fail. */ static void diff --git a/app/test/test_compressdev.c b/app/test/test_compressdev.c index c63b5b6737..57c566aa92 100644 --- a/app/test/test_compressdev.c +++ b/app/test/test_compressdev.c @@ -1256,7 +1256,7 @@ test_deflate_comp_run(const struct interim_data_params *int_data, /* * Store original operation index in private data, * since ordering does not have to be maintained, - * when dequeueing from compressdev, so a comparison + * when dequeuing from compressdev, so a comparison * at the end of the test can be done. */ priv_data = (struct priv_op_data *) (ops[i] + 1); diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index 10b48cdadb..6c949605b8 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -6870,7 +6870,7 @@ test_snow3g_decryption_with_digest_test_case_1(void) }
/* - * Function prepare data for hash veryfication test case. + * Function prepare data for hash verification test case. * Digest is allocated in 4 last bytes in plaintext, pattern. */ snow3g_hash_test_vector_setup(&snow3g_test_case_7, &snow3g_hash_data); diff --git a/app/test/test_fib_perf.c b/app/test/test_fib_perf.c index 86b2f832b8..7a25fe8df7 100644 --- a/app/test/test_fib_perf.c +++ b/app/test/test_fib_perf.c @@ -346,7 +346,7 @@ test_fib_perf(void) fib = rte_fib_create(__func__, SOCKET_ID_ANY, &config); TEST_FIB_ASSERT(fib != NULL);
- /* Measue add. */ + /* Measure add. */ begin = rte_rdtsc();
for (i = 0; i < NUM_ROUTE_ENTRIES; i++) { diff --git a/app/test/test_kni.c b/app/test/test_kni.c index 40ab0d5c4c..2761de9b07 100644 --- a/app/test/test_kni.c +++ b/app/test/test_kni.c @@ -326,7 +326,7 @@ test_kni_register_handler_mp(void)
/* Check with the invalid parameters */ if (rte_kni_register_handlers(kni, NULL) == 0) { - printf("Unexpectedly register successuflly " + printf("Unexpectedly register successfully " "with NULL ops pointer\n"); exit(-1); } @@ -475,7 +475,7 @@ test_kni_processing(uint16_t port_id, struct rte_mempool *mp)
/** * Check multiple processes support on - * registerring/unregisterring handlers. + * registering/unregistering handlers. */ if (test_kni_register_handler_mp() < 0) { printf("fail to check multiple process support\n"); diff --git a/app/test/test_kvargs.c b/app/test/test_kvargs.c index a91ea8dc47..b7b97a0dd9 100644 --- a/app/test/test_kvargs.c +++ b/app/test/test_kvargs.c @@ -11,7 +11,7 @@
#include "test.h"
-/* incrementd in handler, to check it is properly called once per +/* incremented in handler, to check it is properly called once per * key/value association */ static unsigned count;
@@ -107,14 +107,14 @@ static int test_valid_kvargs(void) goto fail; } count = 0; - /* call check_handler() for all entries with key="unexistant_key" */ - if (rte_kvargs_process(kvlist, "unexistant_key", check_handler, NULL) < 0) { + /* call check_handler() for all entries with key="nonexistent_key" */ + if (rte_kvargs_process(kvlist, "nonexistent_key", check_handler, NULL) < 0) { printf("rte_kvargs_process() error\n"); rte_kvargs_free(kvlist); goto fail; } if (count != 0) { - printf("invalid count value %d after rte_kvargs_process(unexistant_key)\n", + printf("invalid count value %d after rte_kvargs_process(nonexistent_key)\n", count); rte_kvargs_free(kvlist); goto fail; @@ -135,10 +135,10 @@ static int test_valid_kvargs(void) rte_kvargs_free(kvlist); goto fail; } - /* count all entries with key="unexistant_key" */ - count = rte_kvargs_count(kvlist, "unexistant_key"); + /* count all entries with key="nonexistent_key" */ + count = rte_kvargs_count(kvlist, "nonexistent_key"); if (count != 0) { - printf("invalid count value %d after rte_kvargs_count(unexistant_key)\n", + printf("invalid count value %d after rte_kvargs_count(nonexistent_key)\n", count); rte_kvargs_free(kvlist); goto fail; @@ -156,7 +156,7 @@ static int test_valid_kvargs(void) /* call check_handler() on all entries with key="check", it * should fail as the value is not recognized by the handler */ if (rte_kvargs_process(kvlist, "check", check_handler, NULL) == 0) { - printf("rte_kvargs_process() is success bu should not\n"); + printf("rte_kvargs_process() is success but should not\n"); rte_kvargs_free(kvlist); goto fail; } diff --git a/app/test/test_lpm6_data.h b/app/test/test_lpm6_data.h index c3894f730e..da9b161f20 100644 --- a/app/test/test_lpm6_data.h +++ b/app/test/test_lpm6_data.h @@ -22,7 +22,7 @@ struct ips_tbl_entry { * in previous test_lpm6_routes.h . 
Because this table has only 1000 * lines, keeping it doesn't make LPM6 test case so large and also * make the algorithm to generate rule table unnecessary and the - * algorithm to genertate test input IPv6 and associated expected + * algorithm to generate test input IPv6 and associated expected * next_hop much simple. */
diff --git a/app/test/test_member.c b/app/test/test_member.c index 40aa4c8627..af9d50915c 100644 --- a/app/test/test_member.c +++ b/app/test/test_member.c @@ -459,7 +459,7 @@ static int test_member_multimatch(void) MAX_MATCH, set_ids_cache); /* * For cache mode, keys overwrite when signature same. - * the mutimatch should work like single match. + * the multimatch should work like single match. */ TEST_ASSERT(ret_ht == M_MATCH_CNT && ret_vbf == M_MATCH_CNT && ret_cache == 1, diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c index f6c650d11f..8e493eda47 100644 --- a/app/test/test_mempool.c +++ b/app/test/test_mempool.c @@ -304,7 +304,7 @@ static int test_mempool_single_consumer(void) }
/* - * test function for mempool test based on singple consumer and single producer, + * test function for mempool test based on single consumer and single producer, * can run on one lcore only */ static int @@ -322,7 +322,7 @@ my_mp_init(struct rte_mempool *mp, __rte_unused void *arg) }
/* - * it tests the mempool operations based on singple producer and single consumer + * it tests the mempool operations based on single producer and single consumer */ static int test_mempool_sp_sc(void) diff --git a/app/test/test_memzone.c b/app/test/test_memzone.c index 6ddd0fbab5..c9255e5763 100644 --- a/app/test/test_memzone.c +++ b/app/test/test_memzone.c @@ -543,7 +543,7 @@ test_memzone_reserve_max(void) }
if (mz->len != maxlen) { - printf("Memzone reserve with 0 size did not return bigest block\n"); + printf("Memzone reserve with 0 size did not return biggest block\n"); printf("Expected size = %zu, actual size = %zu\n", maxlen, mz->len); rte_dump_physmem_layout(stdout); @@ -606,7 +606,7 @@ test_memzone_reserve_max_aligned(void)
if (mz->len < minlen || mz->len > maxlen) { printf("Memzone reserve with 0 size and alignment %u did not return" - " bigest block\n", align); + " biggest block\n", align); printf("Expected size = %zu-%zu, actual size = %zu\n", minlen, maxlen, mz->len); rte_dump_physmem_layout(stdout); @@ -1054,7 +1054,7 @@ test_memzone_basic(void) if (mz != memzone1) return -1;
- printf("test duplcate zone name\n"); + printf("test duplicate zone name\n"); mz = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone1"), 100, SOCKET_ID_ANY, 0); if (mz != NULL) diff --git a/app/test/test_metrics.c b/app/test/test_metrics.c index e736019ae4..11222133d0 100644 --- a/app/test/test_metrics.c +++ b/app/test/test_metrics.c @@ -121,7 +121,7 @@ test_metrics_update_value(void) err = rte_metrics_update_value(RTE_METRICS_GLOBAL, KEY, VALUE); TEST_ASSERT(err >= 0, "%s, %d", __func__, __LINE__);
- /* Successful Test: Valid port_id otherthan RTE_METRICS_GLOBAL, key + /* Successful Test: Valid port_id other than RTE_METRICS_GLOBAL, key * and value */ err = rte_metrics_update_value(9, KEY, VALUE); diff --git a/app/test/test_pcapng.c b/app/test/test_pcapng.c index c2dbeaf603..34c5e12346 100644 --- a/app/test/test_pcapng.c +++ b/app/test/test_pcapng.c @@ -109,7 +109,7 @@ test_setup(void) return -1; }
- /* Make a pool for cloned packeets */ + /* Make a pool for cloned packets */ mp = rte_pktmbuf_pool_create_by_ops("pcapng_test_pool", NUM_PACKETS, 0, 0, rte_pcapng_mbuf_size(pkt_len), diff --git a/app/test/test_power_cpufreq.c b/app/test/test_power_cpufreq.c index 1a9549527e..4d013cd7bb 100644 --- a/app/test/test_power_cpufreq.c +++ b/app/test/test_power_cpufreq.c @@ -659,7 +659,7 @@ test_power_cpufreq(void) /* test of exit power management for an invalid lcore */ ret = rte_power_exit(TEST_POWER_LCORE_INVALID); if (ret == 0) { - printf("Unpectedly exit power management successfully for " + printf("Unexpectedly exit power management successfully for " "lcore %u\n", TEST_POWER_LCORE_INVALID); rte_power_unset_env(); return -1; diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c index ab37a068cd..70404e89e6 100644 --- a/app/test/test_rcu_qsbr.c +++ b/app/test/test_rcu_qsbr.c @@ -408,7 +408,7 @@ test_rcu_qsbr_synchronize_reader(void *arg)
/* * rte_rcu_qsbr_synchronize: Wait till all the reader threads have entered - * the queiscent state. + * the quiescent state. */ static int test_rcu_qsbr_synchronize(void) @@ -443,7 +443,7 @@ test_rcu_qsbr_synchronize(void) rte_rcu_qsbr_synchronize(t[0], RTE_MAX_LCORE - 1); rte_rcu_qsbr_thread_offline(t[0], RTE_MAX_LCORE - 1);
- /* Test if the API returns after unregisterng all the threads */ + /* Test if the API returns after unregistering all the threads */ for (i = 0; i < RTE_MAX_LCORE; i++) rte_rcu_qsbr_thread_unregister(t[0], i); rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID); diff --git a/app/test/test_red.c b/app/test/test_red.c index 05936cfee8..33a9f4ebb7 100644 --- a/app/test/test_red.c +++ b/app/test/test_red.c @@ -1566,10 +1566,10 @@ static void ovfl_check_avg(uint32_t avg) }
static struct test_config ovfl_test1_config = { - .ifname = "queue avergage overflow test interface", + .ifname = "queue average overflow test interface", .msg = "overflow test 1 : use one RED configuration,\n" " increase average queue size to target level,\n" - " check maximum number of bits requirte_red to represent avg_s\n\n", + " check maximum number of bits required to represent avg_s\n\n", .htxt = "avg queue size " "wq_log2 " "fraction bits " @@ -1757,12 +1757,12 @@ test_invalid_parameters(void) printf("%i: rte_red_config_init should have failed!\n", __LINE__); return -1; } - /* min_treshold == max_treshold */ + /* min_threshold == max_threshold */ if (rte_red_config_init(&config, 0, 1, 1, 0) == 0) { printf("%i: rte_red_config_init should have failed!\n", __LINE__); return -1; } - /* min_treshold > max_treshold */ + /* min_threshold > max_threshold */ if (rte_red_config_init(&config, 0, 2, 1, 0) == 0) { printf("%i: rte_red_config_init should have failed!\n", __LINE__); return -1; diff --git a/app/test/test_security.c b/app/test/test_security.c index 060cf1ffa8..059731b65d 100644 --- a/app/test/test_security.c +++ b/app/test/test_security.c @@ -237,7 +237,7 @@ * increases .called counter. Function returns value stored in .ret field * of the structure. * In case of some parameters in some functions the expected value is unknown - * and cannot be detrmined prior to call. Such parameters are stored + * and cannot be determined prior to call. Such parameters are stored * in structure and can be compared or analyzed later in test case code. * * Below structures and functions follow the rules just described. 
diff --git a/app/test/test_table_pipeline.c b/app/test/test_table_pipeline.c index aabf4375db..915c451fed 100644 --- a/app/test/test_table_pipeline.c +++ b/app/test/test_table_pipeline.c @@ -364,7 +364,7 @@ setup_pipeline(int test_type) .action = RTE_PIPELINE_ACTION_PORT, {.port_id = port_out_id[i^1]}, }; - printf("Setting secont table to output to port\n"); + printf("Setting second table to output to port\n");
/* Add the default action for the table. */ ret = rte_pipeline_table_default_entry_add(p, diff --git a/app/test/test_thash.c b/app/test/test_thash.c index a62530673f..62ba4a9528 100644 --- a/app/test/test_thash.c +++ b/app/test/test_thash.c @@ -684,7 +684,7 @@ test_predictable_rss_multirange(void)
/* * calculate hashes, complements, then adjust keys with - * complements and recalsulate hashes + * complements and recalculate hashes */ for (i = 0; i < RTE_DIM(rng_arr); i++) { for (k = 0; k < 100; k++) { diff --git a/buildtools/binutils-avx512-check.py b/buildtools/binutils-avx512-check.py index a4e14f3593..57392ecdc8 100644 --- a/buildtools/binutils-avx512-check.py +++ b/buildtools/binutils-avx512-check.py @@ -1,5 +1,5 @@ #! /usr/bin/env python3 -# SPDX-License-Identitifer: BSD-3-Clause +# SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2020 Intel Corporation
import subprocess diff --git a/devtools/check-symbol-change.sh b/devtools/check-symbol-change.sh index 8fcd0ce1a1..8992214ac8 100755 --- a/devtools/check-symbol-change.sh +++ b/devtools/check-symbol-change.sh @@ -25,7 +25,7 @@ build_map_changes()
# Triggering this rule, which starts a line and ends it # with a { identifies a versioned section. The section name is - # the rest of the line with the + and { symbols remvoed. + # the rest of the line with the + and { symbols removed. # Triggering this rule sets in_sec to 1, which actives the # symbol rule below /^.*{/ { @@ -35,7 +35,7 @@ build_map_changes() } }
- # This rule idenfies the end of a section, and disables the + # This rule identifies the end of a section, and disables the # symbol rule /.*}/ {in_sec=0}
@@ -100,7 +100,7 @@ check_for_rule_violations() # Just inform the user of this occurrence, but # don't flag it as an error echo -n "INFO: symbol $symname is added but " - echo -n "patch has insuficient context " + echo -n "patch has insufficient context " echo -n "to determine the section name " echo -n "please ensure the version is " echo "EXPERIMENTAL" diff --git a/doc/guides/howto/img/virtio_user_for_container_networking.svg b/doc/guides/howto/img/virtio_user_for_container_networking.svg index de80806649..dc9b318e7e 100644 --- a/doc/guides/howto/img/virtio_user_for_container_networking.svg +++ b/doc/guides/howto/img/virtio_user_for_container_networking.svg @@ -465,7 +465,7 @@ v:mID="63" id="shape63-63"><title id="title149">Sheet.63</title><desc - id="desc151">Contanier/App</desc><v:textBlock + id="desc151">Container/App</desc><v:textBlock v:margins="rect(4,4,4,4)" /><v:textRect height="22.5" width="90" diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst index 8292369141..66b977e1a2 100644 --- a/doc/guides/nics/af_packet.rst +++ b/doc/guides/nics/af_packet.rst @@ -9,7 +9,7 @@ packets. This Linux-specific PMD binds to an AF_PACKET socket and allows a DPDK application to send and receive raw packets through the Kernel.
In order to improve Rx and Tx performance this implementation makes use of -PACKET_MMAP, which provides a mmap'ed ring buffer, shared between user space +PACKET_MMAP, which provides a mmapped ring buffer, shared between user space and kernel, that's used to send and receive packets. This helps reducing system calls and the copies needed between user space and Kernel.
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index a25add7c47..c81105730d 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -178,7 +178,7 @@ DPDK and must be installed separately:
- mlx4_core: hardware driver managing Mellanox ConnectX-3 devices. - mlx4_en: Ethernet device driver that provides kernel network interfaces. - - mlx4_ib: InifiniBand device driver. + - mlx4_ib: InfiniBand device driver. - ib_uverbs: user space driver for verbs (entry point for libibverbs).
- **Firmware update** diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index feb2e57cee..daa7f2affb 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -649,7 +649,7 @@ Driver options
A timeout value is set in the driver to control the waiting time before dropping a packet. Once the timer is expired, the delay drop will be - deactivated for all the Rx queues with this feature enable. To re-activeate + deactivated for all the Rx queues with this feature enable. To re-activate it, a rearming is needed and it is part of the kernel driver starting from OFED 5.5.
@@ -1033,7 +1033,7 @@ Driver options
For the MARK action the last 16 values in the full range are reserved for internal PMD purposes (to emulate FLAG action). The valid range for the - MARK action values is 0-0xFFEF for the 16-bit mode and 0-xFFFFEF + MARK action values is 0-0xFFEF for the 16-bit mode and 0-0xFFFFEF for the 24-bit mode, the flows with the MARK action value outside the specified range will be rejected.
@@ -1317,7 +1317,7 @@ DPDK and must be installed separately: - mlx5_core: hardware driver managing Mellanox ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices and related Ethernet kernel network devices. - - mlx5_ib: InifiniBand device driver. + - mlx5_ib: InfiniBand device driver. - ib_uverbs: user space driver for Verbs (entry point for libibverbs).
- **Firmware update** diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst index 0af35f5e74..8766bc34a9 100644 --- a/doc/guides/prog_guide/cryptodev_lib.rst +++ b/doc/guides/prog_guide/cryptodev_lib.rst @@ -751,7 +751,7 @@ feature is useful when the user wants to abandon partially enqueued operations for a failed enqueue burst operation and try enqueuing in a whole later.
Similar as enqueue, there are two dequeue functions: -``rte_cryptodev_raw_dequeue`` for dequeing single operation, and +``rte_cryptodev_raw_dequeue`` for dequeuing single operation, and ``rte_cryptodev_raw_dequeue_burst`` for dequeuing a burst of operations (e.g. all operations in a ``struct rte_crypto_sym_vec`` descriptor). The ``rte_cryptodev_raw_dequeue_burst`` function allows the user to provide callback diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst index 29f6fefc48..c6accce701 100644 --- a/doc/guides/prog_guide/env_abstraction_layer.rst +++ b/doc/guides/prog_guide/env_abstraction_layer.rst @@ -433,7 +433,7 @@ and decides on a preferred IOVA mode.
- if all buses report RTE_IOVA_PA, then the preferred IOVA mode is RTE_IOVA_PA, - if all buses report RTE_IOVA_VA, then the preferred IOVA mode is RTE_IOVA_VA, -- if all buses report RTE_IOVA_DC, no bus expressed a preferrence, then the +- if all buses report RTE_IOVA_DC, no bus expressed a preference, then the preferred mode is RTE_IOVA_DC, - if the buses disagree (at least one wants RTE_IOVA_PA and at least one wants RTE_IOVA_VA), then the preferred IOVA mode is RTE_IOVA_DC (see below with the @@ -658,7 +658,7 @@ Known Issues + rte_ring
rte_ring supports multi-producer enqueue and multi-consumer dequeue. - However, it is non-preemptive, this has a knock on effect of making rte_mempool non-preemptable. + However, it is non-preemptive; this has a knock-on effect of making rte_mempool non-preemptible.
.. note::
diff --git a/doc/guides/prog_guide/img/turbo_tb_decode.svg b/doc/guides/prog_guide/img/turbo_tb_decode.svg index a259f45866..95779c3642 100644 --- a/doc/guides/prog_guide/img/turbo_tb_decode.svg +++ b/doc/guides/prog_guide/img/turbo_tb_decode.svg @@ -460,7 +460,7 @@ height="14.642858" x="39.285713" y="287.16254" /></flowRegion><flowPara - id="flowPara4817">offse</flowPara></flowRoot> <text + id="flowPara4817">offset</flowPara></flowRoot> <text xml:space="preserve" style="font-style:normal;font-weight:normal;font-size:3.14881921px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#9cc3e5;fill-opacity:1;stroke:none;stroke-width:0.23616144" x="74.16684" diff --git a/doc/guides/prog_guide/img/turbo_tb_encode.svg b/doc/guides/prog_guide/img/turbo_tb_encode.svg index e3708a9377..98a6b83983 100644 --- a/doc/guides/prog_guide/img/turbo_tb_encode.svg +++ b/doc/guides/prog_guide/img/turbo_tb_encode.svg @@ -649,7 +649,7 @@ height="14.642858" x="39.285713" y="287.16254" /></flowRegion><flowPara - id="flowPara4817">offse</flowPara></flowRoot> <text + id="flowPara4817">offset</flowPara></flowRoot> <text xml:space="preserve" style="font-style:normal;font-weight:normal;font-size:3.14881921px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;display:inline;fill:#a8d08d;fill-opacity:1;stroke:none;stroke-width:0.23616144" x="16.351753" diff --git a/doc/guides/prog_guide/qos_framework.rst b/doc/guides/prog_guide/qos_framework.rst index 89ea199529..22616117cb 100644 --- a/doc/guides/prog_guide/qos_framework.rst +++ b/doc/guides/prog_guide/qos_framework.rst @@ -1196,12 +1196,12 @@ In the case of severe congestion, the dropper resorts to tail drop. This occurs when a packet queue has reached maximum capacity and cannot store any more packets. In this situation, all arriving packets are dropped.
-The flow through the dropper is illustrated in :numref:`figure_flow_tru_droppper`. +The flow through the dropper is illustrated in :numref:`figure_flow_tru_dropper`. The RED/WRED/PIE algorithm is exercised first and tail drop second.
-.. _figure_flow_tru_droppper: +.. _figure_flow_tru_dropper:
-.. figure:: img/flow_tru_droppper.* +.. figure:: img/flow_tru_dropper.*
Flow Through the Dropper
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index c51ed88cfe..b4aa9c47c2 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1379,7 +1379,7 @@ Matches a network service header (RFC 8300). - ``ttl``: maximum SFF hopes (6 bits). - ``length``: total length in 4 bytes words (6 bits). - ``reserved1``: reserved1 bits (4 bits). -- ``mdtype``: ndicates format of NSH header (4 bits). +- ``mdtype``: indicates format of NSH header (4 bits). - ``next_proto``: indicates protocol type of encap data (8 bits). - ``spi``: service path identifier (3 bytes). - ``sindex``: service index (1 byte). diff --git a/doc/guides/rawdevs/cnxk_bphy.rst b/doc/guides/rawdevs/cnxk_bphy.rst index 3cb2175688..522390bf1b 100644 --- a/doc/guides/rawdevs/cnxk_bphy.rst +++ b/doc/guides/rawdevs/cnxk_bphy.rst @@ -37,7 +37,7 @@ using ``rte_rawdev_queue_conf_get()``.
To perform data transfer use standard ``rte_rawdev_enqueue_buffers()`` and ``rte_rawdev_dequeue_buffers()`` APIs. Not all messages produce sensible -responses hence dequeueing is not always necessary. +responses hence dequeuing is not always necessary.
BPHY CGX/RPM PMD ---------------- diff --git a/doc/guides/regexdevs/features_overview.rst b/doc/guides/regexdevs/features_overview.rst index c512bde592..3e7ab409bf 100644 --- a/doc/guides/regexdevs/features_overview.rst +++ b/doc/guides/regexdevs/features_overview.rst @@ -22,7 +22,7 @@ PCRE back tracking ctrl Support PCRE back tracking ctrl.
PCRE call outs - Support PCRE call outes. + Support PCRE call outs.
PCRE forward reference Support Forward reference. diff --git a/doc/guides/rel_notes/release_16_07.rst b/doc/guides/rel_notes/release_16_07.rst index 5be2d171f1..c4f2f71222 100644 --- a/doc/guides/rel_notes/release_16_07.rst +++ b/doc/guides/rel_notes/release_16_07.rst @@ -192,7 +192,7 @@ EAL
* **igb_uio: Fixed possible mmap failure for Linux >= 4.5.**
- The mmaping of the iomem range of the PCI device fails for kernels that + The mmapping of the iomem range of the PCI device fails for kernels that enabled the ``CONFIG_IO_STRICT_DEVMEM`` option. The error seen by the user is as similar to the following::
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst index 25439dad45..1fd1755858 100644 --- a/doc/guides/rel_notes/release_17_08.rst +++ b/doc/guides/rel_notes/release_17_08.rst @@ -232,7 +232,7 @@ API Changes * The ``rte_cryptodev_configure()`` function does not create the session mempool for the device anymore. * The ``rte_cryptodev_queue_pair_attach_sym_session()`` and - ``rte_cryptodev_queue_pair_dettach_sym_session()`` functions require + ``rte_cryptodev_queue_pair_detach_sym_session()`` functions require the new parameter ``device id``. * Parameters of ``rte_cryptodev_sym_session_create()`` were modified to accept ``mempool``, instead of ``device id`` and ``rte_crypto_sym_xform``. diff --git a/doc/guides/rel_notes/release_2_1.rst b/doc/guides/rel_notes/release_2_1.rst index 35e6c88884..d0ad99ebce 100644 --- a/doc/guides/rel_notes/release_2_1.rst +++ b/doc/guides/rel_notes/release_2_1.rst @@ -671,7 +671,7 @@ Resolved Issues value 0.
- Fixes: 40b966a211ab ("ivshmem: library changes for mmaping using ivshmem") + Fixes: 40b966a211ab ("ivshmem: library changes for mmapping using ivshmem")
* **ixgbe/base: Fix SFP probing.** diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst index 06289c2248..5280bf4ea0 100644 --- a/doc/guides/sample_app_ug/ip_reassembly.rst +++ b/doc/guides/sample_app_ug/ip_reassembly.rst @@ -154,8 +154,8 @@ each RX queue uses its own mempool.
.. literalinclude:: ../../../examples/ip_reassembly/main.c :language: c - :start-after: mbufs stored int the gragment table. 8< - :end-before: >8 End of mbufs stored int the fragmentation table. + :start-after: mbufs stored in the fragment table. 8< + :end-before: >8 End of mbufs stored in the fragmentation table. :dedent: 1
Packet Reassembly and Forwarding diff --git a/doc/guides/sample_app_ug/l2_forward_cat.rst b/doc/guides/sample_app_ug/l2_forward_cat.rst index 440642ef7c..3ada3575ba 100644 --- a/doc/guides/sample_app_ug/l2_forward_cat.rst +++ b/doc/guides/sample_app_ug/l2_forward_cat.rst @@ -176,7 +176,7 @@ function. The value returned is the number of parsed arguments: .. literalinclude:: ../../../examples/l2fwd-cat/l2fwd-cat.c :language: c :start-after: Initialize the Environment Abstraction Layer (EAL). 8< - :end-before: >8 End of initializion the Environment Abstraction Layer (EAL). + :end-before: >8 End of initialization the Environment Abstraction Layer (EAL). :dedent: 1
The next task is to initialize the PQoS library and configure CAT. The diff --git a/doc/guides/sample_app_ug/server_node_efd.rst b/doc/guides/sample_app_ug/server_node_efd.rst index 605eb09a61..c6cbc3def6 100644 --- a/doc/guides/sample_app_ug/server_node_efd.rst +++ b/doc/guides/sample_app_ug/server_node_efd.rst @@ -191,7 +191,7 @@ flow is not handled by the node. .. literalinclude:: ../../../examples/server_node_efd/node/node.c :language: c :start-after: Packets dequeued from the shared ring. 8< - :end-before: >8 End of packets dequeueing. + :end-before: >8 End of packets dequeuing.
Finally, note that both processes updates statistics, such as transmitted, received and dropped packets, which are shown and refreshed by the server app. diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst index 6d0de64401..08ddd7aa59 100644 --- a/doc/guides/sample_app_ug/skeleton.rst +++ b/doc/guides/sample_app_ug/skeleton.rst @@ -54,7 +54,7 @@ function. The value returned is the number of parsed arguments: .. literalinclude:: ../../../examples/skeleton/basicfwd.c :language: c :start-after: Initializion the Environment Abstraction Layer (EAL). 8< - :end-before: >8 End of initializion the Environment Abstraction Layer (EAL). + :end-before: >8 End of initialization the Environment Abstraction Layer (EAL). :dedent: 1
diff --git a/doc/guides/sample_app_ug/vm_power_management.rst b/doc/guides/sample_app_ug/vm_power_management.rst index 7160b6a63a..9ce87956c9 100644 --- a/doc/guides/sample_app_ug/vm_power_management.rst +++ b/doc/guides/sample_app_ug/vm_power_management.rst @@ -681,7 +681,7 @@ The following is an example JSON string for a power management request. "resource_id": 10 }}
-To query the available frequences of an lcore, use the query_cpu_freq command. +To query the available frequencies of an lcore, use the query_cpu_freq command. Where {core_num} is the lcore to query. Before using this command, please enable responses via the set_query command on the host.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 44228cd7d2..94792d88cc 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3510,7 +3510,7 @@ Tunnel offload Indicate tunnel offload rule type
- ``tunnel_set {tunnel_id}``: mark rule as tunnel offload decap_set type. -- ``tunnel_match {tunnel_id}``: mark rule as tunel offload match type. +- ``tunnel_match {tunnel_id}``: mark rule as tunnel offload match type.
Matching pattern ^^^^^^^^^^^^^^^^ diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c index 92decc3e05..21d35292a3 100644 --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c @@ -2097,7 +2097,7 @@ dequeue_enc_one_op_cb(struct fpga_queue *q, struct rte_bbdev_enc_op **op, rte_bbdev_log_debug("DMA response desc %p", desc);
*op = desc->enc_req.op_addr; - /* Check the decriptor error field, return 1 on error */ + /* Check the descriptor error field, return 1 on error */ desc_error = check_desc_error(desc->enc_req.error); (*op)->status = desc_error << RTE_BBDEV_DATA_ERROR;
@@ -2139,7 +2139,7 @@ dequeue_enc_one_op_tb(struct fpga_queue *q, struct rte_bbdev_enc_op **op, for (cb_idx = 0; cb_idx < cbs_in_op; ++cb_idx) { desc = q->ring_addr + ((q->head_free_desc + desc_offset + cb_idx) & q->sw_ring_wrap_mask); - /* Check the decriptor error field, return 1 on error */ + /* Check the descriptor error field, return 1 on error */ desc_error = check_desc_error(desc->enc_req.error); status |= desc_error << RTE_BBDEV_DATA_ERROR; rte_bbdev_log_debug("DMA response desc %p", desc); @@ -2177,7 +2177,7 @@ dequeue_dec_one_op_cb(struct fpga_queue *q, struct rte_bbdev_dec_op **op, (*op)->turbo_dec.iter_count = (desc->dec_req.iter + 2) >> 1; /* crc_pass = 0 when decoder fails */ (*op)->status = !(desc->dec_req.crc_pass) << RTE_BBDEV_CRC_ERROR; - /* Check the decriptor error field, return 1 on error */ + /* Check the descriptor error field, return 1 on error */ desc_error = check_desc_error(desc->enc_req.error); (*op)->status |= desc_error << RTE_BBDEV_DATA_ERROR; return 1; @@ -2221,7 +2221,7 @@ dequeue_dec_one_op_tb(struct fpga_queue *q, struct rte_bbdev_dec_op **op, iter_count = RTE_MAX(iter_count, (uint8_t) desc->dec_req.iter); /* crc_pass = 0 when decoder fails, one fails all */ status |= !(desc->dec_req.crc_pass) << RTE_BBDEV_CRC_ERROR; - /* Check the decriptor error field, return 1 on error */ + /* Check the descriptor error field, return 1 on error */ desc_error = check_desc_error(desc->enc_req.error); status |= desc_error << RTE_BBDEV_DATA_ERROR; rte_bbdev_log_debug("DMA response desc %p", desc); diff --git a/drivers/baseband/null/bbdev_null.c b/drivers/baseband/null/bbdev_null.c index 753d920e18..08cff582b9 100644 --- a/drivers/baseband/null/bbdev_null.c +++ b/drivers/baseband/null/bbdev_null.c @@ -31,7 +31,7 @@ struct bbdev_null_params { uint16_t queues_num; /*< Null BBDEV queues number */ };
-/* Accecptable params for null BBDEV devices */ +/* Acceptable params for null BBDEV devices */ #define BBDEV_NULL_MAX_NB_QUEUES_ARG "max_nb_queues" #define BBDEV_NULL_SOCKET_ID_ARG "socket_id"
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c index b234bb751a..c6b1eb8679 100644 --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c @@ -61,7 +61,7 @@ struct turbo_sw_params { uint16_t queues_num; /*< Turbo SW device queues number */ };
-/* Accecptable params for Turbo SW devices */ +/* Acceptable params for Turbo SW devices */ #define TURBO_SW_MAX_NB_QUEUES_ARG "max_nb_queues" #define TURBO_SW_SOCKET_ID_ARG "socket_id"
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index 737ac8d8c5..5546a9cb8d 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -70,7 +70,7 @@ compare_dpaa_devices(struct rte_dpaa_device *dev1, { int comp = 0;
- /* Segragating ETH from SEC devices */ + /* Segregating ETH from SEC devices */ if (dev1->device_type > dev2->device_type) comp = 1; else if (dev1->device_type < dev2->device_type) diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h index 7ef2f3b2e3..9b63e559bc 100644 --- a/drivers/bus/dpaa/include/fsl_qman.h +++ b/drivers/bus/dpaa/include/fsl_qman.h @@ -1353,7 +1353,7 @@ __rte_internal int qman_irqsource_add(u32 bits);
/** - * qman_fq_portal_irqsource_add - samilar to qman_irqsource_add, but it + * qman_fq_portal_irqsource_add - similar to qman_irqsource_add, but it * takes portal (fq specific) as input rather than using the thread affined * portal. */ @@ -1416,7 +1416,7 @@ __rte_internal struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
/** - * qman_dqrr_consume - Consume the DQRR entriy after volatile dequeue + * qman_dqrr_consume - Consume the DQRR entry after volatile dequeue * @fq: Frame Queue on which the volatile dequeue command is issued * @dq: DQRR entry to consume. This is the one which is provided by the * 'qbman_dequeue' command. @@ -2017,7 +2017,7 @@ int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal, * @cgr: the 'cgr' object to deregister * * "Unplugs" this CGR object from the portal affine to the cpu on which this API - * is executed. This must be excuted on the same affine portal on which it was + * is executed. This must be executed on the same affine portal on which it was * created. */ __rte_internal diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h index dcf35e4adb..97279421ad 100644 --- a/drivers/bus/dpaa/include/fsl_usd.h +++ b/drivers/bus/dpaa/include/fsl_usd.h @@ -40,7 +40,7 @@ struct dpaa_raw_portal { /* Specifies the stash request queue this portal should use */ uint8_t sdest;
- /* Specifes a specific portal index to map or QBMAN_ANY_PORTAL_IDX + /* Specifies a specific portal index to map or QBMAN_ANY_PORTAL_IDX * for don't care. The portal index will be populated by the * driver when the ioctl() successfully completes. */ diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h index a922988607..48d6b5693f 100644 --- a/drivers/bus/dpaa/include/process.h +++ b/drivers/bus/dpaa/include/process.h @@ -49,7 +49,7 @@ struct dpaa_portal_map { struct dpaa_ioctl_portal_map { /* Input parameter, is a qman or bman portal required. */ enum dpaa_portal_type type; - /* Specifes a specific portal index to map or 0xffffffff + /* Specifies a specific portal index to map or 0xffffffff * for don't care. */ uint32_t index; diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c index a0ef24cdc8..53fd75539e 100644 --- a/drivers/bus/fslmc/fslmc_bus.c +++ b/drivers/bus/fslmc/fslmc_bus.c @@ -539,7 +539,7 @@ rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
fslmc_bus = driver->fslmc_bus;
- /* Cleanup the PA->VA Translation table; From whereever this function + /* Cleanup the PA->VA Translation table; From wherever this function * is called from. */ if (rte_eal_iova_mode() == RTE_IOVA_PA) diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c index 2210a0fa4a..52605ea2c3 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c +++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c @@ -178,7 +178,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev) dpio_epoll_fd = epoll_create(1); ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0); if (ret) { - DPAA2_BUS_ERR("Interrupt registeration failed"); + DPAA2_BUS_ERR("Interrupt registration failed"); return -1; }
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h index b1bba1ac36..957fc62d4c 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h +++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h @@ -156,7 +156,7 @@ struct dpaa2_queue { struct rte_cryptodev_data *crypto_data; }; uint32_t fqid; /*!< Unique ID of this queue */ - uint16_t flow_id; /*!< To be used by DPAA2 frmework */ + uint16_t flow_id; /*!< To be used by DPAA2 framework */ uint8_t tc_index; /*!< traffic class identifier */ uint8_t cgid; /*! < Congestion Group id for this queue */ uint64_t rx_pkts; diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h index eb68c9cab5..5375ea386d 100644 --- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h +++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h @@ -510,7 +510,7 @@ int qbman_result_has_new_result(struct qbman_swp *s, struct qbman_result *dq);
/** - * qbman_check_command_complete() - Check if the previous issued dq commnd + * qbman_check_command_complete() - Check if the previous issued dq command * is completed and results are available in memory. * @s: the software portal object. * @dq: the dequeue result read from the memory. @@ -687,7 +687,7 @@ uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
/** * qbman_result_DQ_odpid() - Get the seqnum field in dequeue response - * odpid is valid only if ODPVAILD flag is TRUE. + * odpid is valid only if ODPVALID flag is TRUE. * @dq: the dequeue result. * * Return odpid. @@ -743,7 +743,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq); * qbman_result_SCN_state() - Get the state field in State-change notification * @scn: the state change notification. * - * Return the state in the notifiation. + * Return the state in the notification. */ __rte_internal uint8_t qbman_result_SCN_state(const struct qbman_result *scn); @@ -825,7 +825,7 @@ uint64_t qbman_result_bpscn_ctx(const struct qbman_result *scn);
/* Parsing CGCU */ /** - * qbman_result_cgcu_cgid() - Check CGCU resouce id, i.e. cgid + * qbman_result_cgcu_cgid() - Check CGCU resource id, i.e. cgid * @scn: the state change notification. * * Return the CGCU resource id. @@ -903,14 +903,14 @@ void qbman_eq_desc_clear(struct qbman_eq_desc *d); __rte_internal void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success); /** - * qbman_eq_desc_set_orp() - Set order-resotration in the enqueue descriptor + * qbman_eq_desc_set_orp() - Set order-restoration in the enqueue descriptor * @d: the enqueue descriptor. * @response_success: 1 = enqueue with response always; 0 = enqueue with * rejections returned on a FQ. * @opr_id: the order point record id. * @seqnum: the order restoration sequence number. - * @incomplete: indiates whether this is the last fragments using the same - * sequeue number. + * @incomplete: indicates whether this is the last fragments using the same + * sequence number. */ __rte_internal void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success, @@ -1052,10 +1052,10 @@ __rte_internal uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
/** - * qbman_result_eqresp_rc() - determines if enqueue command is sucessful. + * qbman_result_eqresp_rc() - determines if enqueue command is successful. * @eqresp: enqueue response. * - * Return 0 when command is sucessful. + * Return 0 when command is successful. */ __rte_internal uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp); @@ -1250,7 +1250,7 @@ int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid); /** * These functions change the FQ flow-control stuff between XON/XOFF. (The * default is XON.) This setting doesn't affect enqueues to the FQ, just - * dequeues. XOFF FQs will remain in the tenatively-scheduled state, even when + * dequeues. XOFF FQs will remain in the tentatively-scheduled state, even when * non-empty, meaning they won't be selected for scheduled dequeuing. If a FQ is * changed to XOFF after it had already become truly-scheduled to a channel, and * a pull dequeue of that channel occurs that selects that FQ for dequeuing, diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c index 1a5e7c2d2a..cd0d0b1670 100644 --- a/drivers/bus/pci/linux/pci_vfio.c +++ b/drivers/bus/pci/linux/pci_vfio.c @@ -815,7 +815,7 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev) continue; }
- /* skip non-mmapable BARs */ + /* skip non-mmappable BARs */ if ((reg->flags & VFIO_REGION_INFO_FLAG_MMAP) == 0) { free(reg); continue; diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h index 2856799953..5af6be009f 100644 --- a/drivers/bus/vdev/rte_bus_vdev.h +++ b/drivers/bus/vdev/rte_bus_vdev.h @@ -197,7 +197,7 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg); int rte_vdev_init(const char *name, const char *args);
/** - * Uninitalize a driver specified by name. + * Uninitialize a driver specified by name. * * @param name * The pointer to a driver name to be uninitialized. diff --git a/drivers/bus/vmbus/vmbus_common.c b/drivers/bus/vmbus/vmbus_common.c index 519ca9c6fe..367727367e 100644 --- a/drivers/bus/vmbus/vmbus_common.c +++ b/drivers/bus/vmbus/vmbus_common.c @@ -134,7 +134,7 @@ vmbus_probe_one_driver(struct rte_vmbus_driver *dr,
/* * If device class GUID matches, call the probe function of - * registere drivers for the vmbus device. + * register drivers for the vmbus device. * Return -1 if initialization failed, * and 1 if no driver found for this device. */ diff --git a/drivers/common/cnxk/roc_bphy_cgx.c b/drivers/common/cnxk/roc_bphy_cgx.c index 7449cbe77a..c3be3c9041 100644 --- a/drivers/common/cnxk/roc_bphy_cgx.c +++ b/drivers/common/cnxk/roc_bphy_cgx.c @@ -14,7 +14,7 @@ #define CGX_CMRX_INT_OVERFLW BIT_ULL(1) /* * CN10K stores number of lmacs in 4 bit filed - * in contraty to CN9K which uses only 3 bits. + * in contrary to CN9K which uses only 3 bits. * * In theory masks should differ yet on CN9K * bits beyond specified range contain zeros. diff --git a/drivers/common/cnxk/roc_nix_bpf.c b/drivers/common/cnxk/roc_nix_bpf.c index 6996a54be0..4941f62995 100644 --- a/drivers/common/cnxk/roc_nix_bpf.c +++ b/drivers/common/cnxk/roc_nix_bpf.c @@ -138,7 +138,7 @@ nix_lf_bpf_dump(__io struct nix_band_prof_s *bpf) { plt_dump("W0: cir_mantissa \t\t\t%d\nW0: pebs_mantissa \t\t\t0x%03x", bpf->cir_mantissa, bpf->pebs_mantissa); - plt_dump("W0: peir_matissa \t\t\t\t%d\nW0: cbs_exponent \t\t\t%d", + plt_dump("W0: peir_mantissa \t\t\t\t%d\nW0: cbs_exponent \t\t\t%d", bpf->peir_mantissa, bpf->cbs_exponent); plt_dump("W0: cir_exponent \t\t\t%d\nW0: pebs_exponent \t\t\t%d", bpf->cir_exponent, bpf->pebs_exponent); diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index 3257fa67c7..3d81247a12 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -107,7 +107,7 @@ nix_tm_adjust_shaper_pps_rate(struct nix_tm_shaper_profile *profile) if (profile->peak.rate && min_rate > profile->peak.rate) min_rate = profile->peak.rate;
- /* Each packet accomulate single count, whereas HW + /* Each packet accumulate single count, whereas HW * considers each unit as Byte, so we need convert * user pps to bps */ diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c index ba7f89b45b..82014a2ca0 100644 --- a/drivers/common/cnxk/roc_npc_mcam.c +++ b/drivers/common/cnxk/roc_npc_mcam.c @@ -234,7 +234,7 @@ npc_get_kex_capability(struct npc *npc) /* Ethtype: Offset 12B, len 2B */ kex_cap.bit.ethtype_0 = npc_is_kex_enabled( npc, NPC_LID_LA, NPC_LT_LA_ETHER, 12 * 8, 2 * 8); - /* QINQ VLAN Ethtype: ofset 8B, len 2B */ + /* QINQ VLAN Ethtype: offset 8B, len 2B */ kex_cap.bit.ethtype_x = npc_is_kex_enabled( npc, NPC_LID_LB, NPC_LT_LB_STAG_QINQ, 8 * 8, 2 * 8); /* VLAN ID0 : Outer VLAN: Offset 2B, len 2B */ diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h index 712302bc5c..74e0fb2ece 100644 --- a/drivers/common/cnxk/roc_npc_priv.h +++ b/drivers/common/cnxk/roc_npc_priv.h @@ -363,7 +363,7 @@ struct npc { uint32_t rss_grps; /* rss groups supported */ uint16_t flow_prealloc_size; /* Pre allocated mcam size */ uint16_t flow_max_priority; /* Max priority for flow */ - uint16_t switch_header_type; /* Suppprted switch header type */ + uint16_t switch_header_type; /* Supported switch header type */ uint32_t mark_actions; /* Number of mark actions */ uint32_t vtag_strip_actions; /* vtag insert/strip actions */ uint16_t pf_func; /* pf_func of device */ diff --git a/drivers/common/cpt/cpt_ucode.h b/drivers/common/cpt/cpt_ucode.h index e015cf66a1..e1f2f6005d 100644 --- a/drivers/common/cpt/cpt_ucode.h +++ b/drivers/common/cpt/cpt_ucode.h @@ -246,7 +246,7 @@ cpt_fc_ciph_set_key(struct cpt_ctx *cpt_ctx, cipher_type_t type, if (cpt_ctx->fc_type == FC_GEN) { /* * We need to always say IV is from DPTR as user can - * sometimes iverride IV per operation. + * sometimes override IV per operation. */ fctx->enc.iv_source = CPT_FROM_DPTR;
@@ -3035,7 +3035,7 @@ prepare_iov_from_pkt_inplace(struct rte_mbuf *pkt, tailroom = rte_pktmbuf_tailroom(pkt); if (likely((headroom >= 24) && (tailroom >= 8))) { - /* In 83XX this is prerequivisit for Direct mode */ + /* In 83XX this is prerequisite for Direct mode */ *flags |= SINGLE_BUF_HEADTAILROOM; } param->bufs[0].vaddr = seg_data; diff --git a/drivers/common/cpt/cpt_ucode_asym.h b/drivers/common/cpt/cpt_ucode_asym.h index a67ded642a..f0b5dddd8c 100644 --- a/drivers/common/cpt/cpt_ucode_asym.h +++ b/drivers/common/cpt/cpt_ucode_asym.h @@ -779,7 +779,7 @@ cpt_ecdsa_verify_prep(struct rte_crypto_ecdsa_op_param *ecdsa, * Set dlen = sum(sizeof(fpm address), ROUNDUP8(message len), * ROUNDUP8(sign len(r and s), public key len(x and y coordinates), * prime len, order len)). - * Please note sign, public key and order can not excede prime length + * Please note sign, public key and order can not exceed prime length * i.e. 6 * p_align */ dlen = sizeof(fpm_table_iova) + m_align + (8 * p_align); diff --git a/drivers/common/dpaax/caamflib/desc/algo.h b/drivers/common/dpaax/caamflib/desc/algo.h index 6bb915054a..e0848f0940 100644 --- a/drivers/common/dpaax/caamflib/desc/algo.h +++ b/drivers/common/dpaax/caamflib/desc/algo.h @@ -67,7 +67,7 @@ cnstr_shdsc_zuce(uint32_t *descbuf, bool ps, bool swap, * @authlen: size of digest * * The IV prepended before hmac payload must be 8 bytes consisting - * of COUNT||BEAERER||DIR. The COUNT is of 32-bits, bearer is of 5 bits and + * of COUNT||BEARER||DIR. The COUNT is of 32-bits, bearer is of 5 bits and * direction is of 1 bit - totalling to 38 bits. 
* * Return: size of descriptor written in words or negative number on error diff --git a/drivers/common/dpaax/caamflib/desc/sdap.h b/drivers/common/dpaax/caamflib/desc/sdap.h index b2497a5424..07f55b5b40 100644 --- a/drivers/common/dpaax/caamflib/desc/sdap.h +++ b/drivers/common/dpaax/caamflib/desc/sdap.h @@ -492,10 +492,10 @@ pdcp_sdap_insert_snoop_op(struct program *p, bool swap __maybe_unused,
/* Set the variable size of data the register will write */ if (dir == OP_TYPE_ENCAP_PROTOCOL) { - /* We will add the interity data so add its length */ + /* We will add the integrity data so add its length */ MATHI(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2); } else { - /* We will check the interity data so remove its length */ + /* We will check the integrity data so remove its length */ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2); /* Do not take the ICV in the out-snooping configuration */ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQINSZ, 4, IMMED2); @@ -803,7 +803,7 @@ static inline int pdcp_sdap_insert_no_snoop_op( CLRW_CLR_C1MODE, CLRW, 0, 4, IMMED);
- /* Load the key for authentcation */ + /* Load the key for authentication */ KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen, INLINE_KEY(authdata));
diff --git a/drivers/common/dpaax/dpaax_iova_table.c b/drivers/common/dpaax/dpaax_iova_table.c index 3d661102cc..9daac4bc03 100644 --- a/drivers/common/dpaax/dpaax_iova_table.c +++ b/drivers/common/dpaax/dpaax_iova_table.c @@ -261,7 +261,7 @@ dpaax_iova_table_depopulate(void) rte_free(dpaax_iova_table_p->entries); dpaax_iova_table_p = NULL;
- DPAAX_DEBUG("IOVA Table cleanedup"); + DPAAX_DEBUG("IOVA Table cleaned"); }
int diff --git a/drivers/common/iavf/iavf_type.h b/drivers/common/iavf/iavf_type.h index 51267ca3b3..1cd87587d6 100644 --- a/drivers/common/iavf/iavf_type.h +++ b/drivers/common/iavf/iavf_type.h @@ -1006,7 +1006,7 @@ struct iavf_profile_tlv_section_record { u8 data[12]; };
-/* Generic AQ section in proflie */ +/* Generic AQ section in profile */ struct iavf_profile_aq_section { u16 opcode; u16 flags; diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h index 269578f7c0..80e754a1b2 100644 --- a/drivers/common/iavf/virtchnl.h +++ b/drivers/common/iavf/virtchnl.h @@ -233,7 +233,7 @@ static inline const char *virtchnl_op_str(enum virtchnl_ops v_opcode) case VIRTCHNL_OP_DCF_CMD_DESC: return "VIRTCHNL_OP_DCF_CMD_DESC"; case VIRTCHNL_OP_DCF_CMD_BUFF: - return "VIRTCHHNL_OP_DCF_CMD_BUFF"; + return "VIRTCHNL_OP_DCF_CMD_BUFF"; case VIRTCHNL_OP_DCF_DISABLE: return "VIRTCHNL_OP_DCF_DISABLE"; case VIRTCHNL_OP_DCF_GET_VSI_MAP: diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c index f1650f94c6..cc13022150 100644 --- a/drivers/common/mlx5/mlx5_common.c +++ b/drivers/common/mlx5/mlx5_common.c @@ -854,7 +854,7 @@ static void mlx5_common_driver_init(void) static bool mlx5_common_initialized;
/** - * One time innitialization routine for run-time dependency on glue library + * One time initialization routine for run-time dependency on glue library * for multiple PMDs. Each mlx5 PMD that depends on mlx5_common module, * must invoke in its constructor. */ diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c index c694aaf28c..1537b5d428 100644 --- a/drivers/common/mlx5/mlx5_common_mr.c +++ b/drivers/common/mlx5/mlx5_common_mr.c @@ -1541,7 +1541,7 @@ mlx5_mempool_reg_create(struct rte_mempool *mp, unsigned int mrs_n, * Destroy a mempool registration object. * * @param standalone - * Whether @p mpr owns its MRs excludively, i.e. they are not shared. + * Whether @p mpr owns its MRs exclusively, i.e. they are not shared. */ static void mlx5_mempool_reg_destroy(struct mlx5_mr_share_cache *share_cache, diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index e52b995ee3..7cd3d4fa98 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1822,7 +1822,7 @@ mlx5_devx_cmd_create_td(void *ctx) * Pointer to file stream. * * @return - * 0 on success, a nagative value otherwise. + * 0 on success, a negative value otherwise. */ int mlx5_devx_cmd_flow_dump(void *fdb_domain __rte_unused, diff --git a/drivers/common/mlx5/mlx5_malloc.c b/drivers/common/mlx5/mlx5_malloc.c index b19501e1bc..cef3b88e11 100644 --- a/drivers/common/mlx5/mlx5_malloc.c +++ b/drivers/common/mlx5/mlx5_malloc.c @@ -58,7 +58,7 @@ static struct mlx5_sys_mem mlx5_sys_mem = { * Check if the address belongs to memory seg list. * * @param addr - * Memory address to be ckeced. + * Memory address to be checked. * @param msl * Memory seg list. * @@ -109,7 +109,7 @@ mlx5_mem_update_msl(void *addr) * Check if the address belongs to rte memory. * * @param addr - * Memory address to be ckeced. + * Memory address to be checked. * * @return * True if it belongs, false otherwise. 
diff --git a/drivers/common/mlx5/mlx5_malloc.h b/drivers/common/mlx5/mlx5_malloc.h index 74b7eeb26e..92149f7b92 100644 --- a/drivers/common/mlx5/mlx5_malloc.h +++ b/drivers/common/mlx5/mlx5_malloc.h @@ -19,7 +19,7 @@ extern "C" {
enum mlx5_mem_flags { MLX5_MEM_ANY = 0, - /* Memory will be allocated dpends on sys_mem_en. */ + /* Memory will be allocated depends on sys_mem_en. */ MLX5_MEM_SYS = 1 << 0, /* Memory should be allocated from system. */ MLX5_MEM_RTE = 1 << 1, diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 2ded67e85e..982a53ffbe 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -4172,7 +4172,7 @@ mlx5_flow_mark_get(uint32_t val) * timestamp format supported by the queue. * * @return - * Converted timstamp format settings. + * Converted timestamp format settings. */ static inline uint32_t mlx5_ts_format_conv(uint32_t ts_format) diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c index 162c7476cc..c3cfc315f2 100644 --- a/drivers/common/mlx5/windows/mlx5_common_os.c +++ b/drivers/common/mlx5/windows/mlx5_common_os.c @@ -302,7 +302,7 @@ mlx5_os_umem_dereg(void *pumem) }
/** - * Register mr. Given protection doamin pointer, pointer to addr and length + * Register mr. Given protection domain pointer, pointer to addr and length * register the memory region. * * @param[in] pd @@ -310,7 +310,7 @@ mlx5_os_umem_dereg(void *pumem) * @param[in] addr * Pointer to memory start address (type devx_device_ctx). * @param[in] length - * Lengtoh of the memory to register. + * Length of the memory to register. * @param[out] pmd_mr * pmd_mr struct set with lkey, address, length, pointer to mr object, mkey * diff --git a/drivers/common/mlx5/windows/mlx5_common_os.h b/drivers/common/mlx5/windows/mlx5_common_os.h index 3afce56cd9..61fc8dd761 100644 --- a/drivers/common/mlx5/windows/mlx5_common_os.h +++ b/drivers/common/mlx5/windows/mlx5_common_os.h @@ -21,7 +21,7 @@ /** * This API allocates aligned or non-aligned memory. The free can be on either * aligned or nonaligned memory. To be protected - even though there may be no - * alignment - in Windows this API will unconditioanlly call _aligned_malloc() + * alignment - in Windows this API will unconditionally call _aligned_malloc() * with at least a minimal alignment size. * * @param[in] align diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h index a6d403fac3..12a7258c60 100644 --- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h +++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h @@ -72,7 +72,7 @@ #define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7) #define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7)
-/* Minimum ring bufer size for memory allocation */ +/* Minimum ring buffer size for memory allocation */ #define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \ ADF_RING_SIZE_4K : SIZE) #define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6) diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h index 3860c2835a..224254bee7 100644 --- a/drivers/common/sfc_efx/efsys.h +++ b/drivers/common/sfc_efx/efsys.h @@ -616,7 +616,7 @@ typedef struct efsys_bar_s {
#define EFSYS_DMA_SYNC_FOR_KERNEL(_esmp, _offset, _size) ((void)0)
-/* Just avoid store and compiler (impliciltly) reordering */ +/* Just avoid store and compiler (implicitly) reordering */ #define EFSYS_DMA_SYNC_FOR_DEVICE(_esmp, _offset, _size) rte_wmb()
/* TIMESTAMP */ diff --git a/drivers/compress/octeontx/include/zip_regs.h b/drivers/compress/octeontx/include/zip_regs.h index 96e538bb75..94a48cde66 100644 --- a/drivers/compress/octeontx/include/zip_regs.h +++ b/drivers/compress/octeontx/include/zip_regs.h @@ -195,7 +195,7 @@ union zip_inst_s { uint64_t bf : 1; /** Comp/decomp operation */ uint64_t op : 2; - /** Data sactter */ + /** Data scatter */ uint64_t ds : 1; /** Data gather */ uint64_t dg : 1; @@ -376,7 +376,7 @@ union zip_inst_s { uint64_t bf : 1; /** Comp/decomp operation */ uint64_t op : 2; - /** Data sactter */ + /** Data scatter */ uint64_t ds : 1; /** Data gather */ uint64_t dg : 1; diff --git a/drivers/compress/octeontx/otx_zip.h b/drivers/compress/octeontx/otx_zip.h index e43f7f5c3e..118a95d738 100644 --- a/drivers/compress/octeontx/otx_zip.h +++ b/drivers/compress/octeontx/otx_zip.h @@ -31,7 +31,7 @@ extern int octtx_zip_logtype_driver; /**< PCI device id of ZIP VF */ #define PCI_DEVICE_ID_OCTEONTX_ZIPVF 0xA037
-/* maxmum number of zip vf devices */ +/* maximum number of zip vf devices */ #define ZIP_MAX_VFS 8
/* max size of one chunk */ diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c index 9b24d46e97..da6404c017 100644 --- a/drivers/compress/qat/qat_comp_pmd.c +++ b/drivers/compress/qat/qat_comp_pmd.c @@ -463,7 +463,7 @@ qat_comp_create_stream_pool(struct qat_comp_dev_private *comp_dev, } else if (info.error) { rte_mempool_obj_iter(mp, qat_comp_stream_destroy, NULL); QAT_LOG(ERR, - "Destoying mempool %s as at least one element failed initialisation", + "Destroying mempool %s as at least one element failed initialisation", stream_pool_name); rte_mempool_free(mp); mp = NULL; diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h index e5ca866977..4901a6cfd9 100644 --- a/drivers/crypto/bcmfs/bcmfs_device.h +++ b/drivers/crypto/bcmfs/bcmfs_device.h @@ -32,7 +32,7 @@ enum bcmfs_device_type { BCMFS_UNKNOWN };
-/* A table to store registered queue pair opertations */ +/* A table to store registered queue pair operations */ struct bcmfs_hw_queue_pair_ops_table { rte_spinlock_t tl; /* Number of used ops structs in the table. */ diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c index cb5ff6c61b..61d457f4e0 100644 --- a/drivers/crypto/bcmfs/bcmfs_qp.c +++ b/drivers/crypto/bcmfs/bcmfs_qp.c @@ -212,7 +212,7 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr, nb_descriptors = FS_RM_MAX_REQS;
if (qp_conf->iobase == NULL) { - BCMFS_LOG(ERR, "IO onfig space null"); + BCMFS_LOG(ERR, "IO config space null"); return -EINVAL; }
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h index eaefe97e26..9bb8a695a0 100644 --- a/drivers/crypto/bcmfs/bcmfs_sym_defs.h +++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h @@ -20,11 +20,11 @@ struct bcmfs_sym_request;
/** Crypto Request processing successful. */ #define BCMFS_SYM_RESPONSE_SUCCESS (0) -/** Crypot Request processing protocol failure. */ +/** Crypto Request processing protocol failure. */ #define BCMFS_SYM_RESPONSE_PROTO_FAILURE (1) -/** Crypot Request processing completion failure. */ +/** Crypto Request processing completion failure. */ #define BCMFS_SYM_RESPONSE_COMPL_ERROR (2) -/** Crypot Request processing hash tag check error. */ +/** Crypto Request processing hash tag check error. */ #define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR (3)
/** Maximum threshold length to adjust AAD in continuation diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.h b/drivers/crypto/bcmfs/bcmfs_sym_engine.h index d9594246b5..51ff9f75ed 100644 --- a/drivers/crypto/bcmfs/bcmfs_sym_engine.h +++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.h @@ -12,7 +12,7 @@ #include "bcmfs_sym_defs.h" #include "bcmfs_sym_req.h"
-/* structure to hold element's arrtibutes */ +/* structure to hold element's attributes */ struct fsattr { void *va; uint64_t pa; diff --git a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c index 86e53051dd..c677c0cd9b 100644 --- a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c +++ b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c @@ -441,7 +441,7 @@ static void bcmfs5_write_doorbell(struct bcmfs_qp *qp) { struct bcmfs_queue *txq = &qp->tx_q;
- /* sync in bfeore ringing the door-bell */ + /* sync in before ringing the door-bell */ rte_wmb();
FS_MMIO_WRITE32(txq->descs_inflight, diff --git a/drivers/crypto/caam_jr/caam_jr_hw_specific.h b/drivers/crypto/caam_jr/caam_jr_hw_specific.h index bbe8bc3f90..6ee7f7cef3 100644 --- a/drivers/crypto/caam_jr/caam_jr_hw_specific.h +++ b/drivers/crypto/caam_jr/caam_jr_hw_specific.h @@ -376,7 +376,7 @@ struct sec_job_ring_t { void *register_base_addr; /* Base address for SEC's * register memory for this job ring. */ - uint8_t coalescing_en; /* notifies if coelescing is + uint8_t coalescing_en; /* notifies if coalescing is * enabled for the job ring */ sec_job_ring_state_t jr_state; /* The state of this job ring */ @@ -479,7 +479,7 @@ void hw_job_ring_error_print(struct sec_job_ring_t *job_ring, int code);
/* @brief Set interrupt coalescing parameters on the Job Ring. * @param [in] job_ring The job ring - * @param [in] irq_coalesing_timer Interrupt coalescing timer threshold. + * @param [in] irq_coalescing_timer Interrupt coalescing timer threshold. * This value determines the maximum * amount of time after processing a * descriptor before raising an interrupt. diff --git a/drivers/crypto/caam_jr/caam_jr_pvt.h b/drivers/crypto/caam_jr/caam_jr_pvt.h index 552d6b9b1b..52f872bcd0 100644 --- a/drivers/crypto/caam_jr/caam_jr_pvt.h +++ b/drivers/crypto/caam_jr/caam_jr_pvt.h @@ -169,7 +169,7 @@ struct sec4_sg_entry {
/* Structure encompassing a job descriptor which is to be processed * by SEC. User should also initialise this structure with the callback - * function pointer which will be called by driver after recieving proccessed + * function pointer which will be called by driver after receiving processed * descriptor from SEC. User data is also passed in this data structure which * will be sent as an argument to the user callback function. */ @@ -288,7 +288,7 @@ int caam_jr_enable_irqs(int uio_fd); * value that indicates an IRQ disable action into UIO file descriptor * of this job ring. * - * @param [in] uio_fd UIO File descripto + * @param [in] uio_fd UIO File descriptor * @retval 0 for success * @retval -1 value for error * diff --git a/drivers/crypto/caam_jr/caam_jr_uio.c b/drivers/crypto/caam_jr/caam_jr_uio.c index e4ee102344..583ba3b523 100644 --- a/drivers/crypto/caam_jr/caam_jr_uio.c +++ b/drivers/crypto/caam_jr/caam_jr_uio.c @@ -227,7 +227,7 @@ caam_jr_enable_irqs(int uio_fd) * value that indicates an IRQ disable action into UIO file descriptor * of this job ring. * - * @param [in] uio_fd UIO File descripto + * @param [in] uio_fd UIO File descriptor * @retval 0 for success * @retval -1 value for error * diff --git a/drivers/crypto/ccp/ccp_crypto.c b/drivers/crypto/ccp/ccp_crypto.c index 70daed791e..4ed91a7436 100644 --- a/drivers/crypto/ccp/ccp_crypto.c +++ b/drivers/crypto/ccp/ccp_crypto.c @@ -1299,7 +1299,7 @@ ccp_auth_slot(struct ccp_session *session) case CCP_AUTH_ALGO_SHA512_HMAC: /** * 1. Load PHash1 = H(k ^ ipad); to LSB - * 2. generate IHash = H(hash on meassage with PHash1 + * 2. generate IHash = H(hash on message with PHash1 * as init values); * 3. Retrieve IHash 2 slots for 384/512 * 4. 
Load Phash2 = H(k ^ opad); to LSB diff --git a/drivers/crypto/ccp/ccp_crypto.h b/drivers/crypto/ccp/ccp_crypto.h index 8e6d03efc8..d307f73ee4 100644 --- a/drivers/crypto/ccp/ccp_crypto.h +++ b/drivers/crypto/ccp/ccp_crypto.h @@ -70,7 +70,7 @@ /* Maximum length for digest */ #define DIGEST_LENGTH_MAX 64
-/* SHA LSB intialiazation values */ +/* SHA LSB initialization values */
#define SHA1_H0 0x67452301UL #define SHA1_H1 0xefcdab89UL diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h index 85c8fc47a2..2a205cd446 100644 --- a/drivers/crypto/ccp/ccp_dev.h +++ b/drivers/crypto/ccp/ccp_dev.h @@ -19,7 +19,7 @@ #include <rte_crypto_sym.h> #include <cryptodev_pmd.h>
-/**< CCP sspecific */ +/**< CCP specific */ #define MAX_HW_QUEUES 5 #define CCP_MAX_TRNG_RETRIES 10 #define CCP_ALIGN(x, y) ((((x) + (y - 1)) / y) * y) diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c index a552e64506..f20acdd123 100644 --- a/drivers/crypto/dpaa_sec/dpaa_sec.c +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c @@ -723,7 +723,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops) } ops[pkts++] = op;
- /* report op status to sym->op and then free the ctx memeory */ + /* report op status to sym->op and then free the ctx memory */ rte_mempool_put(ctx->ctx_pool, (void *)ctx);
qman_dqrr_consume(fq, dq); diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c index 20b288334a..27604459e4 100644 --- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c +++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c @@ -296,7 +296,7 @@ cpt_vq_init(struct cpt_vf *cptvf, uint8_t group) /* CPT VF device initialization */ otx_cpt_vfvq_init(cptvf);
- /* Send msg to PF to assign currnet Q to required group */ + /* Send msg to PF to assign current Q to required group */ cptvf->vfgrp = group; err = otx_cpt_send_vf_grp_msg(cptvf, group); if (err) { diff --git a/drivers/crypto/octeontx/otx_cryptodev_mbox.h b/drivers/crypto/octeontx/otx_cryptodev_mbox.h index 508f3afd47..c1eedc1b9e 100644 --- a/drivers/crypto/octeontx/otx_cryptodev_mbox.h +++ b/drivers/crypto/octeontx/otx_cryptodev_mbox.h @@ -70,7 +70,7 @@ void otx_cpt_handle_mbox_intr(struct cpt_vf *cptvf);
/* - * Checks if VF is able to comminicate with PF + * Checks if VF is able to communicate with PF * and also gets the CPT number this VF is associated to. */ int diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c index 9e8fd495cf..f7ca8a8a8e 100644 --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c @@ -558,7 +558,7 @@ otx_cpt_enq_single_sym(struct cpt_instance *instance, &mdata, (void **)&prep_req);
if (unlikely(ret)) { - CPT_LOG_DP_ERR("prep cryto req : op %p, cpt_op 0x%x " + CPT_LOG_DP_ERR("prep crypto req : op %p, cpt_op 0x%x " "ret 0x%x", op, (unsigned int)cpt_op, ret); return NULL; } diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index f893508030..09d8761c5f 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -109,7 +109,7 @@ static void qat_clear_arrays_by_alg(struct qat_asym_op_cookie *cookie, static int qat_asym_check_nonzero(rte_crypto_param n) { if (n.length < 8) { - /* Not a case for any cryptograpic function except for DH + /* Not a case for any cryptographic function except for DH * generator which very often can be of one byte length */ size_t i; diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 93b257522b..00ec703754 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -419,7 +419,7 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC) {
/* In case of AES-CCM this may point to user selected - * memory or iv offset in cypto_op + * memory or iv offset in crypto_op */ uint8_t *aad_data = op->sym->aead.aad.data; /* This is true AAD length, it not includes 18 bytes of diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h index bf10c6579b..c96ca62992 100644 --- a/drivers/crypto/virtio/virtqueue.h +++ b/drivers/crypto/virtio/virtqueue.h @@ -145,7 +145,7 @@ virtqueue_notify(struct virtqueue *vq) { /* * Ensure updated avail->idx is visible to host. - * For virtio on IA, the notificaiton is through io port operation + * For virtio on IA, the notification is through io port operation * which is a serialization instruction itself. */ VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c index d9e4f731d7..81cbdd286e 100644 --- a/drivers/dma/skeleton/skeleton_dmadev.c +++ b/drivers/dma/skeleton/skeleton_dmadev.c @@ -169,7 +169,7 @@ vchan_setup(struct skeldma_hw *hw, uint16_t nb_desc) struct rte_ring *completed; uint16_t i;
- desc = rte_zmalloc_socket("dma_skelteon_desc", + desc = rte_zmalloc_socket("dma_skeleton_desc", nb_desc * sizeof(struct skeldma_desc), RTE_CACHE_LINE_SIZE, hw->socket_id); if (desc == NULL) { diff --git a/drivers/event/cnxk/cnxk_eventdev_selftest.c b/drivers/event/cnxk/cnxk_eventdev_selftest.c index 69c15b1d0a..2fe6467f88 100644 --- a/drivers/event/cnxk/cnxk_eventdev_selftest.c +++ b/drivers/event/cnxk/cnxk_eventdev_selftest.c @@ -140,7 +140,7 @@ _eventdev_setup(int mode) struct rte_event_dev_info info; int i, ret;
- /* Create and destrory pool for each test case to make it standalone */ + /* Create and destroy pool for each test case to make it standalone */ eventdev_test_mempool = rte_pktmbuf_pool_create( pool_name, MAX_EVENTS, 0, 0, 512, rte_socket_id()); if (!eventdev_test_mempool) { @@ -1543,7 +1543,7 @@ cnxk_sso_selftest(const char *dev_name) cn9k_sso_set_rsrc(dev); if (cnxk_sso_testsuite_run(dev_name)) return rc; - /* Verift dual ws mode. */ + /* Verify dual ws mode. */ printf("Verifying CN9K Dual workslot mode\n"); dev->dual_ws = 1; cn9k_sso_set_rsrc(dev); diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c index 16e9764dbf..d75f12e382 100644 --- a/drivers/event/dlb2/dlb2.c +++ b/drivers/event/dlb2/dlb2.c @@ -2145,7 +2145,7 @@ dlb2_event_queue_detach_ldb(struct dlb2_eventdev *dlb2, }
/* This is expected with eventdev API! - * It blindly attemmpts to unmap all queues. + * It blindly attempts to unmap all queues. */ if (i == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) { DLB2_LOG_DBG("dlb2: ignoring LB QID %d not mapped for qm_port %d.\n", diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h index a5e2f8e46b..7837ae8733 100644 --- a/drivers/event/dlb2/dlb2_priv.h +++ b/drivers/event/dlb2/dlb2_priv.h @@ -519,7 +519,7 @@ struct dlb2_eventdev_port { bool setup_done; /* enq_configured is set when the qm port is created */ bool enq_configured; - uint8_t implicit_release; /* release events before dequeueing */ + uint8_t implicit_release; /* release events before dequeuing */ } __rte_cache_aligned;
struct dlb2_queue { diff --git a/drivers/event/dlb2/dlb2_selftest.c b/drivers/event/dlb2/dlb2_selftest.c index 2113bc2c99..1863ffe049 100644 --- a/drivers/event/dlb2/dlb2_selftest.c +++ b/drivers/event/dlb2/dlb2_selftest.c @@ -223,7 +223,7 @@ test_stop_flush(struct test *t) /* test to check we can properly flush events */ 0, RTE_EVENT_PORT_ATTR_DEQ_DEPTH, &dequeue_depth)) { - printf("%d: Error retrieveing dequeue depth\n", __LINE__); + printf("%d: Error retrieving dequeue depth\n", __LINE__); goto err; }
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.h b/drivers/event/dlb2/rte_pmd_dlb2.h index 74399db018..1dbd885a16 100644 --- a/drivers/event/dlb2/rte_pmd_dlb2.h +++ b/drivers/event/dlb2/rte_pmd_dlb2.h @@ -24,7 +24,7 @@ extern "C" { * Selects the token pop mode for a DLB2 port. */ enum dlb2_token_pop_mode { - /* Pop the CQ tokens immediately after dequeueing. */ + /* Pop the CQ tokens immediately after dequeuing. */ AUTO_POP, /* Pop CQ tokens after (dequeue_depth - 1) events are released. * Supported on load-balanced ports only. diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c index bbbd20951f..b549bdfcbb 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c +++ b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c @@ -118,7 +118,7 @@ _eventdev_setup(int mode) struct rte_event_dev_info info; const char *pool_name = "evdev_dpaa2_test_pool";
- /* Create and destrory pool for each test case to make it standalone */ + /* Create and destroy pool for each test case to make it standalone */ eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS, 0 /*MBUF_CACHE_SIZE*/, diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h index e64ae26f6e..c907c00c78 100644 --- a/drivers/event/dsw/dsw_evdev.h +++ b/drivers/event/dsw/dsw_evdev.h @@ -24,7 +24,7 @@ /* Multiple 24-bit flow ids will map to the same DSW-level flow. The * number of DSW flows should be high enough make it unlikely that * flow ids of several large flows hash to the same DSW-level flow. - * Such collisions will limit parallism and thus the number of cores + * Such collisions will limit parallelism and thus the number of cores * that may be utilized. However, configuring a large number of DSW * flows might potentially, depending on traffic and actual * application flow id value range, result in each such DSW-level flow @@ -104,7 +104,7 @@ /* Only one outstanding migration per port is allowed */ #define DSW_MAX_PAUSED_FLOWS (DSW_MAX_PORTS*DSW_MAX_FLOWS_PER_MIGRATION)
-/* Enough room for paus request/confirm and unpaus request/confirm for +/* Enough room for pause request/confirm and unpause request/confirm for * all possible senders. */ #define DSW_CTL_IN_RING_SIZE ((DSW_MAX_PORTS-1)*4) diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c index c6ed470286..e209cd5b00 100644 --- a/drivers/event/dsw/dsw_event.c +++ b/drivers/event/dsw/dsw_event.c @@ -1096,7 +1096,7 @@ dsw_port_ctl_process(struct dsw_evdev *dsw, struct dsw_port *port) static void dsw_port_note_op(struct dsw_port *port, uint16_t num_events) { - /* To pull the control ring reasonbly often on busy ports, + /* To pull the control ring reasonably often on busy ports, * each dequeued/enqueued event is considered an 'op' too. */ port->ops_since_bg_task += (num_events+1); @@ -1180,7 +1180,7 @@ dsw_event_enqueue_burst_generic(struct dsw_port *source_port, * addition, a port cannot be left "unattended" (e.g. unused) * for long periods of time, since that would stall * migration. Eventdev API extensions to provide a cleaner way - * to archieve both of these functions should be + * to achieve both of these functions should be * considered. */ if (unlikely(events_len == 0)) { diff --git a/drivers/event/octeontx/ssovf_evdev.h b/drivers/event/octeontx/ssovf_evdev.h index bb1056a955..e46dc055eb 100644 --- a/drivers/event/octeontx/ssovf_evdev.h +++ b/drivers/event/octeontx/ssovf_evdev.h @@ -88,7 +88,7 @@
/* * In Cavium OCTEON TX SoC, all accesses to the device registers are - * implictly strongly ordered. So, The relaxed version of IO operation is + * implicitly strongly ordered. So, The relaxed version of IO operation is * safe to use with out any IO memory barriers. */ #define ssovf_read64 rte_read64_relaxed diff --git a/drivers/event/octeontx/ssovf_evdev_selftest.c b/drivers/event/octeontx/ssovf_evdev_selftest.c index d7b0d22111..b55523632a 100644 --- a/drivers/event/octeontx/ssovf_evdev_selftest.c +++ b/drivers/event/octeontx/ssovf_evdev_selftest.c @@ -151,7 +151,7 @@ _eventdev_setup(int mode) struct rte_event_dev_info info; const char *pool_name = "evdev_octeontx_test_pool";
- /* Create and destrory pool for each test case to make it standalone */ + /* Create and destroy pool for each test case to make it standalone */ eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS, 0 /*MBUF_CACHE_SIZE*/, diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c index 48bfaf893d..a89637d60f 100644 --- a/drivers/event/octeontx2/otx2_evdev_selftest.c +++ b/drivers/event/octeontx2/otx2_evdev_selftest.c @@ -139,7 +139,7 @@ _eventdev_setup(int mode) struct rte_event_dev_info info; int i, ret;
- /* Create and destrory pool for each test case to make it standalone */ + /* Create and destroy pool for each test case to make it standalone */ eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS, 0, 0, 512, rte_socket_id()); diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 36ae4dd88f..ca06d51c8a 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -74,7 +74,7 @@ otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws, event.flow_id, flags, lookup_mem); /* Extracting tstamp, if PTP enabled. CGX will prepend * the timestamp at starting of packet data and it can - * be derieved from WQE 9 dword which corresponds to SG + * be derived from WQE 9 dword which corresponds to SG * iova. * rte_pktmbuf_mtod_offset can be used for this purpose * but it brings down the performance as it reads diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c index 15c10240b0..8b6890b220 100644 --- a/drivers/event/opdl/opdl_evdev.c +++ b/drivers/event/opdl/opdl_evdev.c @@ -703,7 +703,7 @@ opdl_probe(struct rte_vdev_device *vdev) }
PMD_DRV_LOG(INFO, "DEV_ID:[%02d] : " - "Success - creating eventdev device %s, numa_node:[%d], do_valdation:[%s]" + "Success - creating eventdev device %s, numa_node:[%d], do_validation:[%s]" " , self_test:[%s]\n", dev->data->dev_id, name, diff --git a/drivers/event/opdl/opdl_test.c b/drivers/event/opdl/opdl_test.c index e4fc70a440..24b92df476 100644 --- a/drivers/event/opdl/opdl_test.c +++ b/drivers/event/opdl/opdl_test.c @@ -864,7 +864,7 @@ qid_basic(struct test *t) }
- /* Start the devicea */ + /* Start the device */ if (!err) { if (rte_event_dev_start(evdev) < 0) { PMD_DRV_LOG(ERR, "%s:%d: Error with start call\n", diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h index 33645bd1df..4fd1054470 100644 --- a/drivers/event/sw/sw_evdev.h +++ b/drivers/event/sw/sw_evdev.h @@ -180,7 +180,7 @@ struct sw_port { uint16_t outstanding_releases __rte_cache_aligned; uint16_t inflight_max; /* app requested max inflights for this port */ uint16_t inflight_credits; /* num credits this port has right now */ - uint8_t implicit_release; /* release events before dequeueing */ + uint8_t implicit_release; /* release events before dequeuing */
uint16_t last_dequeue_burst_sz; /* how big the burst was */ uint64_t last_dequeue_ticks; /* used to track burst processing time */ diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c index 9768d3a0c7..cb97a4d615 100644 --- a/drivers/event/sw/sw_evdev_selftest.c +++ b/drivers/event/sw/sw_evdev_selftest.c @@ -1109,7 +1109,7 @@ xstats_tests(struct test *t) NULL, 0);
- /* Verify that the resetable stats are reset, and others are not */ + /* Verify that the resettable stats are reset, and others are not */ static const uint64_t queue_expected_zero[] = { 0 /* rx */, 0 /* tx */, diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c index f17aff9655..32639a3bfd 100644 --- a/drivers/mempool/dpaa/dpaa_mempool.c +++ b/drivers/mempool/dpaa/dpaa_mempool.c @@ -258,7 +258,7 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool, } /* assigning mbuf from the acquired objects */ for (i = 0; (i < ret) && bufs[i].addr; i++) { - /* TODO-errata - objerved that bufs may be null + /* TODO-errata - observed that bufs may be null * i.e. first buffer is valid, remaining 6 buffers * may be null. */ diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c index 94dc5cd815..8fd9edced2 100644 --- a/drivers/mempool/octeontx/octeontx_fpavf.c +++ b/drivers/mempool/octeontx/octeontx_fpavf.c @@ -669,7 +669,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id) break; }
- /* Imsert it into an ordered linked list */ + /* Insert it into an ordered linked list */ for (curr = &head; curr[0] != NULL; curr = curr[0]) { if ((uintptr_t)node <= (uintptr_t)curr[0]) break; @@ -705,7 +705,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
ret = octeontx_fpapf_aura_detach(gpool); if (ret) { - fpavf_log_err("Failed to dettach gaura %u. error code=%d\n", + fpavf_log_err("Failed to detach gaura %u. error code=%d\n", gpool, ret); }
diff --git a/drivers/net/ark/ark_global.h b/drivers/net/ark/ark_global.h index 6f9b3013d8..49193ac5b3 100644 --- a/drivers/net/ark/ark_global.h +++ b/drivers/net/ark/ark_global.h @@ -67,7 +67,7 @@ typedef void (*rx_user_meta_hook_fn)(struct rte_mbuf *mbuf, const uint32_t *meta, void *ext_user_data); -/* TX hook poplulate *meta, with up to 20 bytes. meta_cnt +/* TX hook populate *meta, with up to 20 bytes. meta_cnt * returns the number of uint32_t words populated, 0 to 5 */ typedef void (*tx_user_meta_hook_fn)(const struct rte_mbuf *mbuf, diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c index 1c03e8bfa1..3a028f4290 100644 --- a/drivers/net/atlantic/atl_ethdev.c +++ b/drivers/net/atlantic/atl_ethdev.c @@ -1423,7 +1423,7 @@ atl_dev_interrupt_action(struct rte_eth_dev *dev, * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c index e3f57ded73..aeb79bf5a2 100644 --- a/drivers/net/atlantic/atl_rxtx.c +++ b/drivers/net/atlantic/atl_rxtx.c @@ -1094,7 +1094,7 @@ atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... 
*/ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold); diff --git a/drivers/net/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/atlantic/hw_atl/hw_atl_b0.c index 7d0e724019..d0eb4af928 100644 --- a/drivers/net/atlantic/hw_atl/hw_atl_b0.c +++ b/drivers/net/atlantic/hw_atl/hw_atl_b0.c @@ -281,7 +281,7 @@ int hw_atl_b0_hw_init_rx_path(struct aq_hw_s *self) hw_atl_rpf_vlan_outer_etht_set(self, 0x88A8U); hw_atl_rpf_vlan_inner_etht_set(self, 0x8100U);
- /* VLAN proimisc bu defauld */ + /* VLAN promisc by default */ hw_atl_rpf_vlan_prom_mode_en_set(self, 1);
/* Rx Interrupts */ diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c index daeb3308f4..6a7fddffca 100644 --- a/drivers/net/axgbe/axgbe_dev.c +++ b/drivers/net/axgbe/axgbe_dev.c @@ -1046,7 +1046,7 @@ static int axgbe_config_rx_threshold(struct axgbe_port *pdata, return 0; }
-/*Distrubting fifo size */ +/* Distributing FIFO size */ static void axgbe_config_rx_fifo_size(struct axgbe_port *pdata) { unsigned int fifo_size; diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c index b209ab67cf..f6c49bbbda 100644 --- a/drivers/net/axgbe/axgbe_ethdev.c +++ b/drivers/net/axgbe/axgbe_ethdev.c @@ -284,7 +284,7 @@ static int axgbe_phy_reset(struct axgbe_port *pdata) * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h index a207f2ae1b..e06d40f9eb 100644 --- a/drivers/net/axgbe/axgbe_ethdev.h +++ b/drivers/net/axgbe/axgbe_ethdev.h @@ -641,7 +641,7 @@ struct axgbe_port {
unsigned int kr_redrv;
- /* Auto-negotiation atate machine support */ + /* Auto-negotiation state machine support */ unsigned int an_int; unsigned int an_status; enum axgbe_an an_result; diff --git a/drivers/net/axgbe/axgbe_phy_impl.c b/drivers/net/axgbe/axgbe_phy_impl.c index 02236ec192..72104f8a3f 100644 --- a/drivers/net/axgbe/axgbe_phy_impl.c +++ b/drivers/net/axgbe/axgbe_phy_impl.c @@ -347,7 +347,7 @@ static int axgbe_phy_i2c_read(struct axgbe_port *pdata, unsigned int target,
retry = 1; again2: - /* Read the specfied register */ + /* Read the specified register */ i2c_op.cmd = AXGBE_I2C_CMD_READ; i2c_op.target = target; i2c_op.len = val_len; @@ -1093,7 +1093,7 @@ static int axgbe_phy_an_config(struct axgbe_port *pdata __rte_unused) { return 0; /* Dummy API since there is no case to support - * external phy devices registred through kerenl apis + * external phy devices registered through kernel APIs */ }
diff --git a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c index 816371cd79..d95a446bef 100644 --- a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c +++ b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c @@ -11,7 +11,7 @@ #include <rte_mempool.h> #include <rte_mbuf.h>
-/* Useful to avoid shifting for every descriptor prepration*/ +/* Useful to avoid shifting for every descriptor preparation */ #define TX_DESC_CTRL_FLAGS 0xb000000000000000 #define TX_DESC_CTRL_FLAG_TMST 0x40000000 #define TX_FREE_BULK 8 diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c index f67db015b5..74e3018eab 100644 --- a/drivers/net/bnx2x/bnx2x.c +++ b/drivers/net/bnx2x/bnx2x.c @@ -926,7 +926,7 @@ storm_memset_eq_prod(struct bnx2x_softc *sc, uint16_t eq_prod, uint16_t pfid) * block. * * RAMROD_CMD_ID_ETH_UPDATE - * Used to update the state of the leading connection, usually to udpate + * Used to update the state of the leading connection, usually to update * the RSS indirection table. Completes on the RCQ of the leading * connection. (Not currently used under FreeBSD until OS support becomes * available.) @@ -941,7 +941,7 @@ storm_memset_eq_prod(struct bnx2x_softc *sc, uint16_t eq_prod, uint16_t pfid) * the RCQ of the leading connection. * * RAMROD_CMD_ID_ETH_CFC_DEL - * Used when tearing down a conneciton prior to driver unload. Completes + * Used when tearing down a connection prior to driver unload. Completes * on the RCQ of the leading connection (since the current connection * has been completely removed from controller memory). * @@ -1072,7 +1072,7 @@ bnx2x_sp_post(struct bnx2x_softc *sc, int command, int cid, uint32_t data_hi,
/* * It's ok if the actual decrement is issued towards the memory - * somewhere between the lock and unlock. Thus no more explict + * somewhere between the lock and unlock. Thus no more explicit * memory barrier is needed. */ if (common) { @@ -1190,7 +1190,7 @@ bnx2x_sp_event(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp, break;
case (RAMROD_CMD_ID_ETH_TERMINATE): - PMD_DRV_LOG(DEBUG, sc, "got MULTI[%d] teminate ramrod", cid); + PMD_DRV_LOG(DEBUG, sc, "got MULTI[%d] terminate ramrod", cid); drv_cmd = ECORE_Q_CMD_TERMINATE; break;
@@ -1476,7 +1476,7 @@ bnx2x_fill_accept_flags(struct bnx2x_softc *sc, uint32_t rx_mode, case BNX2X_RX_MODE_ALLMULTI_PROMISC: case BNX2X_RX_MODE_PROMISC: /* - * According to deffinition of SI mode, iface in promisc mode + * According to definition of SI mode, iface in promisc mode * should receive matched and unmatched (in resolution of port) * unicast packets. */ @@ -1944,7 +1944,7 @@ static void bnx2x_disable_close_the_gate(struct bnx2x_softc *sc)
/* * Cleans the object that have internal lists without sending - * ramrods. Should be run when interrutps are disabled. + * ramrods. Should be run when interrupts are disabled. */ static void bnx2x_squeeze_objects(struct bnx2x_softc *sc) { @@ -2043,7 +2043,7 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link
/* * Nothing to do during unload if previous bnx2x_nic_load() - * did not completed successfully - all resourses are released. + * did not complete successfully - all resources are released. */ if ((sc->state == BNX2X_STATE_CLOSED) || (sc->state == BNX2X_STATE_ERROR)) { return 0; @@ -2084,7 +2084,7 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link /* * Prevent transactions to host from the functions on the * engine that doesn't reset global blocks in case of global - * attention once gloabl blocks are reset and gates are opened + * attention once global blocks are reset and gates are opened * (the engine which leader will perform the recovery * last). */ @@ -2101,7 +2101,7 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link
/* * At this stage no more interrupts will arrive so we may safely clean - * the queue'able objects here in case they failed to get cleaned so far. + * the queueable objects here in case they failed to get cleaned so far. */ if (IS_PF(sc)) { bnx2x_squeeze_objects(sc); @@ -2151,7 +2151,7 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link }
/* - * Encapsulte an mbuf cluster into the tx bd chain and makes the memory + * Encapsulate an mbuf cluster into the Tx BD chain and make the memory * visible to the controller. * * If an mbuf is submitted to this routine and cannot be given to the @@ -2719,7 +2719,7 @@ static uint8_t bnx2x_clear_pf_load(struct bnx2x_softc *sc) return val1 != 0; }
-/* send load requrest to mcp and analyze response */ +/* send load request to MCP and analyze response */ static int bnx2x_nic_load_request(struct bnx2x_softc *sc, uint32_t * load_code) { PMD_INIT_FUNC_TRACE(sc); @@ -5325,7 +5325,7 @@ static void bnx2x_func_init(struct bnx2x_softc *sc, struct bnx2x_func_init_param * sum of vn_min_rates. * or * 0 - if all the min_rates are 0. - * In the later case fainess algorithm should be deactivated. + * In the latter case fairness algorithm should be deactivated. * If all min rates are not zero then those that are zeroes will be set to 1. */ static void bnx2x_calc_vn_min(struct bnx2x_softc *sc, struct cmng_init_input *input) @@ -6564,7 +6564,7 @@ bnx2x_pf_tx_q_prep(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp, txq_init->fw_sb_id = fp->fw_sb_id;
/* - * set the TSS leading client id for TX classfication to the + * set the TSS leading client id for Tx classification to the * leading RSS client id */ txq_init->tss_leading_cl_id = BNX2X_FP(sc, 0, cl_id); @@ -7634,8 +7634,8 @@ static uint8_t bnx2x_is_pcie_pending(struct bnx2x_softc *sc) }
/* -* Walk the PCI capabiites list for the device to find what features are -* supported. These capabilites may be enabled/disabled by firmware so it's +* Walk the PCI capabilities list for the device to find what features are +* supported. These capabilities may be enabled/disabled by firmware so it's * best to walk the list rather than make assumptions. */ static void bnx2x_probe_pci_caps(struct bnx2x_softc *sc) @@ -8425,7 +8425,7 @@ static int bnx2x_get_device_info(struct bnx2x_softc *sc) } else { sc->devinfo.int_block = INT_BLOCK_IGU;
-/* do not allow device reset during IGU info preocessing */ +/* do not allow device reset during IGU info processing */ bnx2x_acquire_hw_lock(sc, HW_LOCK_RESOURCE_RESET);
val = REG_RD(sc, IGU_REG_BLOCK_CONFIGURATION); @@ -9765,7 +9765,7 @@ int bnx2x_attach(struct bnx2x_softc *sc)
sc->igu_base_addr = IS_VF(sc) ? PXP_VF_ADDR_IGU_START : BAR_IGU_INTMEM;
- /* get PCI capabilites */ + /* get PCI capabilities */ bnx2x_probe_pci_caps(sc);
if (sc->devinfo.pcie_msix_cap_reg != 0) { @@ -10284,7 +10284,7 @@ static int bnx2x_init_hw_common(struct bnx2x_softc *sc) * stay set) * f. If this is VNIC 3 of a port then also init * first_timers_ilt_entry to zero and last_timers_ilt_entry - * to the last enrty in the ILT. + * to the last entry in the ILT. * * Notes: * Currently the PF error in the PGLC is non recoverable. @@ -11090,7 +11090,7 @@ static void bnx2x_hw_enable_status(struct bnx2x_softc *sc) /** * bnx2x_pf_flr_clnup * a. re-enable target read on the PF - * b. poll cfc per function usgae counter + * b. poll cfc per function usage counter * c. poll the qm perfunction usage counter * d. poll the tm per function usage counter * e. poll the tm per function scan-done indication diff --git a/drivers/net/bnx2x/bnx2x.h b/drivers/net/bnx2x/bnx2x.h index 80d19cbfd6..d7e1729e68 100644 --- a/drivers/net/bnx2x/bnx2x.h +++ b/drivers/net/bnx2x/bnx2x.h @@ -681,13 +681,13 @@ struct bnx2x_slowpath { }; /* struct bnx2x_slowpath */
/* - * Port specifc data structure. + * Port specific data structure. */ struct bnx2x_port { /* * Port Management Function (for 57711E only). * When this field is set the driver instance is - * responsible for managing port specifc + * responsible for managing port specific * configurations such as handling link attentions. */ uint32_t pmf; @@ -732,7 +732,7 @@ struct bnx2x_port {
/* * MCP scratchpad address for port specific statistics. - * The device is responsible for writing statistcss + * The device is responsible for writing statistics * back to the MCP for use with management firmware such * as UMP/NC-SI. */ @@ -937,8 +937,8 @@ struct bnx2x_devinfo { * already registered for this port (which means that the user wants storage * services). * 2. During cnic-related load, to know if offload mode is already configured - * in the HW or needs to be configrued. Since the transition from nic-mode to - * offload-mode in HW causes traffic coruption, nic-mode is configured only + * in the HW or needs to be configured. Since the transition from nic-mode to + * offload-mode in HW causes traffic corruption, nic-mode is configured only * in ports on which storage services where never requested. */ #define CONFIGURE_NIC_MODE(sc) (!CHIP_IS_E1x(sc) && !CNIC_ENABLED(sc)) diff --git a/drivers/net/bnx2x/bnx2x_stats.c b/drivers/net/bnx2x/bnx2x_stats.c index 1cd972591a..c07b01510a 100644 --- a/drivers/net/bnx2x/bnx2x_stats.c +++ b/drivers/net/bnx2x/bnx2x_stats.c @@ -1358,7 +1358,7 @@ bnx2x_prep_fw_stats_req(struct bnx2x_softc *sc)
/* * Prepare the first stats ramrod (will be completed with - * the counters equal to zero) - init counters to somethig different. + * the counters equal to zero) - init counters to something different. */ memset(&sc->fw_stats_data->storm_counters, 0xff, sizeof(struct stats_counter)); diff --git a/drivers/net/bnx2x/bnx2x_stats.h b/drivers/net/bnx2x/bnx2x_stats.h index 635412bdd3..11ddab5039 100644 --- a/drivers/net/bnx2x/bnx2x_stats.h +++ b/drivers/net/bnx2x/bnx2x_stats.h @@ -314,7 +314,7 @@ struct bnx2x_eth_stats_old { };
struct bnx2x_eth_q_stats_old { - /* Fields to perserve over fw reset*/ + /* Fields to preserve over FW reset */ uint32_t total_unicast_bytes_received_hi; uint32_t total_unicast_bytes_received_lo; uint32_t total_broadcast_bytes_received_hi; @@ -328,7 +328,7 @@ struct bnx2x_eth_q_stats_old { uint32_t total_multicast_bytes_transmitted_hi; uint32_t total_multicast_bytes_transmitted_lo;
- /* Fields to perserve last of */ + /* Fields to preserve last of */ uint32_t total_bytes_received_hi; uint32_t total_bytes_received_lo; uint32_t total_bytes_transmitted_hi; diff --git a/drivers/net/bnx2x/bnx2x_vfpf.c b/drivers/net/bnx2x/bnx2x_vfpf.c index 945e3df84f..63953c2979 100644 --- a/drivers/net/bnx2x/bnx2x_vfpf.c +++ b/drivers/net/bnx2x/bnx2x_vfpf.c @@ -73,7 +73,7 @@ bnx2x_add_tlv(__rte_unused struct bnx2x_softc *sc, void *tlvs_list, tl->length = length; }
-/* Initiliaze header of the first tlv and clear mailbox*/ +/* Initialize header of the first TLV and clear mailbox */ static void bnx2x_vf_prep(struct bnx2x_softc *sc, struct vf_first_tlv *first_tlv, uint16_t type, uint16_t length) diff --git a/drivers/net/bnx2x/bnx2x_vfpf.h b/drivers/net/bnx2x/bnx2x_vfpf.h index 9577341266..d71e81c005 100644 --- a/drivers/net/bnx2x/bnx2x_vfpf.h +++ b/drivers/net/bnx2x/bnx2x_vfpf.h @@ -241,7 +241,7 @@ struct vf_close_tlv { uint8_t pad[2]; };
-/* rlease the VF's acquired resources */ +/* release the VF's acquired resources */ struct vf_release_tlv { struct vf_first_tlv first_tlv; uint16_t vf_id; /* for debug */ diff --git a/drivers/net/bnx2x/ecore_fw_defs.h b/drivers/net/bnx2x/ecore_fw_defs.h index 93bca8ad33..6fc1fce7e2 100644 --- a/drivers/net/bnx2x/ecore_fw_defs.h +++ b/drivers/net/bnx2x/ecore_fw_defs.h @@ -379,7 +379,7 @@ /* temporarily used for RTT */ #define XSEMI_CLK1_RESUL_CHIP (1e-3)
-/* used for Host Coallescing */ +/* used for Host Coalescing */ #define SDM_TIMER_TICK_RESUL_CHIP (4 * (1e-6)) #define TSDM_TIMER_TICK_RESUL_CHIP (1 * (1e-6))
diff --git a/drivers/net/bnx2x/ecore_hsi.h b/drivers/net/bnx2x/ecore_hsi.h index 5508c53639..eda79408e9 100644 --- a/drivers/net/bnx2x/ecore_hsi.h +++ b/drivers/net/bnx2x/ecore_hsi.h @@ -1062,7 +1062,7 @@ struct port_feat_cfg { /* port 0: 0x454 port 1: 0x4c8 */ #define PORT_FEATURE_MBA_LINK_SPEED_20G 0x20000000
/* Secondary MBA configuration, - * see mba_config for the fileds defination. + * see mba_config for the fields definition. */ uint32_t mba_config2;
@@ -1075,7 +1075,7 @@ struct port_feat_cfg { /* port 0: 0x454 port 1: 0x4c8 */ #define PORT_FEATURE_BOFM_CFGD_VEN 0x00080000
/* Secondary MBA configuration, - * see mba_vlan_cfg for the fileds defination. + * see mba_vlan_cfg for the fields definition. */ uint32_t mba_vlan_cfg2;
@@ -1429,7 +1429,7 @@ struct extended_dev_info_shared_cfg { /* NVRAM OFFSET */ #define EXTENDED_DEV_INFO_SHARED_CFG_DBG_GEN3_COMPLI_ENA 0x00080000
/* Override Rx signal detect threshold when enabled the threshold - * will be set staticaly + * will be set statically */ #define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_RX_SIG_MASK 0x00100000 #define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_RX_SIG_SHIFT 20 @@ -2189,9 +2189,9 @@ struct eee_remote_vals { * elements on a per byte or word boundary. * * example: an array with 8 entries each 4 bit wide. This array will fit into - * a single dword. The diagrmas below show the array order of the nibbles. + * a single dword. The diagrams below show the array order of the nibbles. * - * SHMEM_ARRAY_BITPOS(i, 4, 4) defines the stadard ordering: + * SHMEM_ARRAY_BITPOS(i, 4, 4) defines the standard ordering: * * | | | | * 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | @@ -2519,17 +2519,17 @@ struct shmem_lfa { };
/* - * Used to suppoert NSCI get OS driver version + * Used to support NSCI get OS driver version * On driver load the version value will be set * On driver unload driver value of 0x0 will be set */ struct os_drv_ver { #define DRV_VER_NOT_LOADED 0 - /*personalites orrder is importent */ + /* personalities order is important */ #define DRV_PERS_ETHERNET 0 #define DRV_PERS_ISCSI 1 #define DRV_PERS_FCOE 2 - /*shmem2 struct is constatnt can't add more personalites here*/ + /* shmem2 struct is constant, can't add more personalities here */ #define MAX_DRV_PERS 3 uint32_t versions[MAX_DRV_PERS]; }; @@ -2821,7 +2821,7 @@ struct shmem2_region { /* Flag to the driver that PF's drv_info_host_addr buffer was read */ uint32_t mfw_drv_indication; /* Offset 0x19c */
- /* We use inidcation for each PF (0..3) */ + /* We use indication for each PF (0..3) */ #define MFW_DRV_IND_READ_DONE_OFFSET(_pf_) (1 << (_pf_))
union { /* For various OEMs */ /* Offset 0x1a0 */ @@ -6195,7 +6195,7 @@ struct hc_sb_data {
/* - * Segment types for host coaslescing + * Segment types for host coalescing */ enum hc_segment { HC_REGULAR_SEGMENT, @@ -6242,7 +6242,7 @@ struct hc_status_block_data_e2 {
/* - * IGU block operartion modes (in Everest2) + * IGU block operation modes (in Everest2) */ enum igu_mode { HC_IGU_BC_MODE, @@ -6508,7 +6508,7 @@ struct stats_query_header {
/* - * Types of statistcis query entry + * Types of statistics query entry */ enum stats_query_type { STATS_TYPE_QUEUE, @@ -6542,7 +6542,7 @@ enum storm_id {
/* - * Taffic types used in ETS and flow control algorithms + * Traffic types used in ETS and flow control algorithms */ enum traffic_type { LLFC_TRAFFIC_TYPE_NW, diff --git a/drivers/net/bnx2x/ecore_init_ops.h b/drivers/net/bnx2x/ecore_init_ops.h index 0945e79993..4ed811fdd4 100644 --- a/drivers/net/bnx2x/ecore_init_ops.h +++ b/drivers/net/bnx2x/ecore_init_ops.h @@ -534,7 +534,7 @@ static void ecore_init_pxp_arb(struct bnx2x_softc *sc, int r_order, REG_WR(sc, PXP2_REG_WR_CDU_MPS, val); }
- /* Validate number of tags suppoted by device */ + /* Validate number of tags supported by device */ #define PCIE_REG_PCIER_TL_HDR_FC_ST 0x2980 val = REG_RD(sc, PCIE_REG_PCIER_TL_HDR_FC_ST); val &= 0xFF; @@ -714,7 +714,7 @@ static void ecore_ilt_client_init_op_ilt(struct bnx2x_softc *sc, for (i = ilt_cli->start; i <= ilt_cli->end; i++) ecore_ilt_line_init_op(sc, ilt, i, initop);
- /* init/clear the ILT boundries */ + /* init/clear the ILT boundaries */ ecore_ilt_boundary_init_op(sc, ilt_cli, ilt->start_line, initop); }
@@ -765,7 +765,7 @@ static void ecore_ilt_init_client_psz(struct bnx2x_softc *sc, int cli_num,
/* * called during init common stage, ilt clients should be initialized - * prioir to calling this function + * prior to calling this function */ static void ecore_ilt_init_page_size(struct bnx2x_softc *sc, uint8_t initop) { diff --git a/drivers/net/bnx2x/ecore_reg.h b/drivers/net/bnx2x/ecore_reg.h index bb92d131f8..6f7b0522f2 100644 --- a/drivers/net/bnx2x/ecore_reg.h +++ b/drivers/net/bnx2x/ecore_reg.h @@ -19,7 +19,7 @@ #define ATC_ATC_INT_STS_REG_ATC_RCPL_TO_EMPTY_CNT (0x1 << 3) #define ATC_ATC_INT_STS_REG_ATC_TCPL_ERROR (0x1 << 4) #define ATC_ATC_INT_STS_REG_ATC_TCPL_TO_NOT_PEND (0x1 << 1) -/* [R 1] ATC initalization done */ +/* [R 1] ATC initialization done */ #define ATC_REG_ATC_INIT_DONE 0x1100bc /* [RW 6] Interrupt mask register #0 read/write */ #define ATC_REG_ATC_INT_MASK 0x1101c8 @@ -56,7 +56,7 @@ #define BRB1_REG_PAUSE_HIGH_THRESHOLD_0 0x60078 /* [RW 10] Write client 0: Assert pause threshold. Not Functional */ #define BRB1_REG_PAUSE_LOW_THRESHOLD_0 0x60068 -/* [R 24] The number of full blocks occpied by port. */ +/* [R 24] The number of full blocks occupied by port. */ #define BRB1_REG_PORT_NUM_OCC_BLOCKS_0 0x60094 /* [R 5] Used to read the value of the XX protection CAM occupancy counter. */ #define CCM_REG_CAM_OCCUP 0xd0188 @@ -456,7 +456,7 @@ #define IGU_REG_PCI_PF_MSIX_FUNC_MASK 0x130148 #define IGU_REG_PCI_PF_MSI_EN 0x130140 /* [WB_R 32] Each bit represent the pending bits status for that SB. 0 = no - * pending; 1 = pending. Pendings means interrupt was asserted; and write + * pending; 1 = pending. Pending means interrupt was asserted; and write * done was not received. Data valid only in addresses 0-4. all the rest are * zero. */ @@ -1059,14 +1059,14 @@ /* [R 28] this field hold the last information that caused reserved * attention. 
bits [19:0] - address; [22:20] function; [23] reserved; * [27:24] the master that caused the attention - according to the following - * encodeing:1 = pxp; 2 = mcp; 3 = usdm; 4 = tsdm; 5 = xsdm; 6 = csdm; 7 = + * encoding:1 = pxp; 2 = mcp; 3 = usdm; 4 = tsdm; 5 = xsdm; 6 = csdm; 7 = * dbu; 8 = dmae */ #define MISC_REG_GRC_RSV_ATTN 0xa3c0 /* [R 28] this field hold the last information that caused timeout * attention. bits [19:0] - address; [22:20] function; [23] reserved; * [27:24] the master that caused the attention - according to the following - * encodeing:1 = pxp; 2 = mcp; 3 = usdm; 4 = tsdm; 5 = xsdm; 6 = csdm; 7 = + * encoding:1 = pxp; 2 = mcp; 3 = usdm; 4 = tsdm; 5 = xsdm; 6 = csdm; 7 = * dbu; 8 = dmae */ #define MISC_REG_GRC_TIMEOUT_ATTN 0xa3c4 @@ -1567,7 +1567,7 @@ * MAC DA 2. The reset default is set to mask out all parameters. */ #define NIG_REG_P0_LLH_PTP_PARAM_MASK 0x187a0 -/* [RW 14] Mask regiser for the rules used in detecting PTP packets. Set +/* [RW 14] Mask register for the rules used in detecting PTP packets. Set * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} . * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1; @@ -1672,7 +1672,7 @@ * MAC DA 2. The reset default is set to mask out all parameters. */ #define NIG_REG_P0_TLLH_PTP_PARAM_MASK 0x187f0 -/* [RW 14] Mask regiser for the rules used in detecting PTP packets. Set +/* [RW 14] Mask register for the rules used in detecting PTP packets. Set * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} . * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1; @@ -1839,7 +1839,7 @@ * MAC DA 2. The reset default is set to mask out all parameters. */ #define NIG_REG_P1_LLH_PTP_PARAM_MASK 0x187c8 -/* [RW 14] Mask regiser for the rules used in detecting PTP packets. 
Set +/* [RW 14] Mask register for the rules used in detecting PTP packets. Set * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} . * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1; @@ -1926,7 +1926,7 @@ * MAC DA 2. The reset default is set to mask out all parameters. */ #define NIG_REG_P1_TLLH_PTP_PARAM_MASK 0x187f8 -/* [RW 14] Mask regiser for the rules used in detecting PTP packets. Set +/* [RW 14] Mask register for the rules used in detecting PTP packets. Set * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} . * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1; @@ -2306,7 +2306,7 @@ #define PBF_REG_HDRS_AFTER_BASIC 0x15c0a8 /* [RW 6] Bit-map indicating which L2 hdrs may appear after L2 tag 0 */ #define PBF_REG_HDRS_AFTER_TAG_0 0x15c0b8 -/* [R 1] Removed for E3 B0 - Indicates which COS is conncted to the highest +/* [R 1] Removed for E3 B0 - Indicates which COS is connected to the highest * priority in the command arbiter. */ #define PBF_REG_HIGH_PRIORITY_COS_NUM 0x15c04c @@ -2366,7 +2366,7 @@ */ #define PBF_REG_NUM_STRICT_ARB_SLOTS 0x15c064 /* [R 11] Removed for E3 B0 - Port 0 threshold used by arbiter in 16 byte - * lines used when pause not suppoterd. + * lines used when pause not supported. */ #define PBF_REG_P0_ARB_THRSH 0x1400e4 /* [R 11] Removed for E3 B0 - Current credit for port 0 in the tx port @@ -3503,7 +3503,7 @@ * queues. */ #define QM_REG_OVFERROR 0x16805c -/* [RC 6] the Q were the qverflow occurs */ +/* [RC 6] the Q were the overflow occurs */ #define QM_REG_OVFQNUM 0x168058 /* [R 16] Pause state for physical queues 15-0 */ #define QM_REG_PAUSESTATE0 0x168410 @@ -4890,7 +4890,7 @@ if set, generate pcie_err_attn output when this error is seen. 
WC \ */ #define PXPCS_TL_FUNC345_STAT_ERR_MASTER_ABRT2 \ - (1 << 3) /* Receive UR Statusfor Function 2. If set, generate \ + (1 << 3) /* Receive UR Status for Function 2. If set, generate \ pcie_err_attn output when this error is seen. WC */ #define PXPCS_TL_FUNC345_STAT_ERR_CPL_TIMEOUT2 \ (1 << 2) /* Completer Timeout Status Status for Function 2, if \ @@ -4986,7 +4986,7 @@ if set, generate pcie_err_attn output when this error is seen. WC \ */ #define PXPCS_TL_FUNC678_STAT_ERR_MASTER_ABRT5 \ - (1 << 3) /* Receive UR Statusfor Function 5. If set, generate \ + (1 << 3) /* Receive UR Status for Function 5. If set, generate \ pcie_err_attn output when this error is seen. WC */ #define PXPCS_TL_FUNC678_STAT_ERR_CPL_TIMEOUT5 \ (1 << 2) /* Completer Timeout Status Status for Function 5, if \ diff --git a/drivers/net/bnx2x/ecore_sp.c b/drivers/net/bnx2x/ecore_sp.c index 0075422eee..c6c3857778 100644 --- a/drivers/net/bnx2x/ecore_sp.c +++ b/drivers/net/bnx2x/ecore_sp.c @@ -1338,7 +1338,7 @@ static int __ecore_vlan_mac_execute_step(struct bnx2x_softc *sc, if (rc != ECORE_SUCCESS) { __ecore_vlan_mac_h_pend(sc, o, *ramrod_flags);
- /** Calling function should not diffrentiate between this case + /** Calling function should not differentiate between this case * and the case in which there is already a pending ramrod */ rc = ECORE_PENDING; @@ -2246,7 +2246,7 @@ struct ecore_pending_mcast_cmd { union { ecore_list_t macs_head; uint32_t macs_num; /* Needed for DEL command */ - int next_bin; /* Needed for RESTORE flow with aprox match */ + int next_bin; /* Needed for RESTORE flow with approx match */ } data;
int done; /* set to TRUE, when the command has been handled, @@ -3424,7 +3424,7 @@ void ecore_init_mac_credit_pool(struct bnx2x_softc *sc, } else {
/* - * CAM credit is equaly divided between all active functions + * CAM credit is equally divided between all active functions * on the PATH. */ if (func_num > 0) { diff --git a/drivers/net/bnx2x/ecore_sp.h b/drivers/net/bnx2x/ecore_sp.h index d58072dac0..1f4d5a3ebe 100644 --- a/drivers/net/bnx2x/ecore_sp.h +++ b/drivers/net/bnx2x/ecore_sp.h @@ -430,7 +430,7 @@ enum { RAMROD_RESTORE, /* Execute the next command now */ RAMROD_EXEC, - /* Don't add a new command and continue execution of posponed + /* Don't add a new command and continue execution of postponed * commands. If not set a new command will be added to the * pending commands list. */ @@ -1173,7 +1173,7 @@ struct ecore_rss_config_obj { /* Last configured indirection table */ uint8_t ind_table[T_ETH_INDIRECTION_TABLE_SIZE];
- /* flags for enabling 4-tupple hash on UDP */ + /* flags for enabling 4-tuple hash on UDP */ uint8_t udp_rss_v4; uint8_t udp_rss_v6;
@@ -1285,7 +1285,7 @@ enum ecore_q_type { #define ECORE_MULTI_TX_COS_E3B0 3 #define ECORE_MULTI_TX_COS 3 /* Maximum possible */ #define MAC_PAD (ECORE_ALIGN(ETH_ALEN, sizeof(uint32_t)) - ETH_ALEN) -/* DMAE channel to be used by FW for timesync workaroun. A driver that sends +/* DMAE channel to be used by FW for timesync workaround. A driver that sends * timesync-related ramrods must not use this DMAE command ID. */ #define FW_DMAE_CMD_ID 6 diff --git a/drivers/net/bnx2x/elink.c b/drivers/net/bnx2x/elink.c index 2093d8f373..43fbf04ece 100644 --- a/drivers/net/bnx2x/elink.c +++ b/drivers/net/bnx2x/elink.c @@ -1460,7 +1460,7 @@ static void elink_ets_e3b0_pbf_disabled(const struct elink_params *params) } /****************************************************************************** * Description: - * E3B0 disable will return basicly the values to init values. + * E3B0 disable will return basically the values to init values. *. ******************************************************************************/ static elink_status_t elink_ets_e3b0_disabled(const struct elink_params *params, @@ -1483,7 +1483,7 @@ static elink_status_t elink_ets_e3b0_disabled(const struct elink_params *params,
/****************************************************************************** * Description: - * Disable will return basicly the values to init values. + * Disable will return basically the values to init values. * ******************************************************************************/ elink_status_t elink_ets_disabled(struct elink_params *params, @@ -1506,7 +1506,7 @@ elink_status_t elink_ets_disabled(struct elink_params *params,
/****************************************************************************** * Description - * Set the COS mappimg to SP and BW until this point all the COS are not + * Set the COS mapping to SP and BW; until this point all the COS are not * set as SP or BW. ******************************************************************************/ static elink_status_t elink_ets_e3b0_cli_map(const struct elink_params *params, @@ -1652,7 +1652,7 @@ static elink_status_t elink_ets_e3b0_get_total_bw( } ELINK_DEBUG_P0(sc, "elink_ets_E3B0_config total BW should be 100"); - /* We can handle a case whre the BW isn't 100 this can happen + /* We can handle a case where the BW isn't 100; this can happen * if the TC are joined. */ } @@ -2608,7 +2608,7 @@ static elink_status_t elink_emac_enable(struct elink_params *params, REG_WR(sc, NIG_REG_EGRESS_EMAC0_PORT + port * 4, 1);
#ifdef ELINK_INCLUDE_EMUL - /* for paladium */ + /* for palladium */ if (CHIP_REV_IS_EMUL(sc)) { /* Use lane 1 (of lanes 0-3) */ REG_WR(sc, NIG_REG_XGXS_LANE_SEL_P0 + port * 4, 1); @@ -2850,7 +2850,7 @@ static void elink_update_pfc_bmac2(struct elink_params *params,
/* Set Time (based unit is 512 bit time) between automatic * re-sending of PP packets amd enable automatic re-send of - * Per-Priroity Packet as long as pp_gen is asserted and + * Per-Priority Packet as long as pp_gen is asserted and * pp_disable is low. */ val = 0x8000; @@ -3369,7 +3369,7 @@ static elink_status_t elink_pbf_update(struct elink_params *params, }
/** - * elink_get_emac_base - retrive emac base address + * elink_get_emac_base - retrieve emac base address * * @bp: driver handle * @mdc_mdio_access: access type @@ -4518,7 +4518,7 @@ static void elink_warpcore_enable_AN_KR2(struct elink_phy *phy, elink_cl45_write(sc, phy, reg_set[i].devad, reg_set[i].reg, reg_set[i].val);
- /* Start KR2 work-around timer which handles BNX2X8073 link-parner */ + /* Start KR2 work-around timer which handles BNX2X8073 link-partner */ params->link_attr_sync |= LINK_ATTR_SYNC_KR2_ENABLE; elink_update_link_attr(params, params->link_attr_sync); } @@ -7824,7 +7824,7 @@ elink_status_t elink_link_update(struct elink_params *params, * hence its link is expected to be down * - SECOND_PHY means that first phy should not be able * to link up by itself (using configuration) - * - DEFAULT should be overridden during initialiazation + * - DEFAULT should be overridden during initialization */ ELINK_DEBUG_P1(sc, "Invalid link indication" " mpc=0x%x. DISABLING LINK !!!", @@ -10991,7 +10991,7 @@ static elink_status_t elink_84858_cmd_hdlr(struct elink_phy *phy, ELINK_DEBUG_P0(sc, "FW cmd failed."); return ELINK_STATUS_ERROR; } - /* Step5: Once the command has completed, read the specficied DATA + /* Step5: Once the command has completed, read the specified DATA * registers for any saved results for the command, if applicable */
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index f53f8632fe..7bcf36c9cb 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -3727,7 +3727,7 @@ int bnxt_hwrm_allocate_pf_only(struct bnxt *bp) int rc;
if (!BNXT_PF(bp)) { - PMD_DRV_LOG(ERR, "Attempt to allcoate VFs on a VF!\n"); + PMD_DRV_LOG(ERR, "Attempt to allocate VFs on a VF!\n"); return -EINVAL; }
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c index a4b0934610..a967a9ccf2 100644 --- a/drivers/net/bnxt/tf_core/tfp.c +++ b/drivers/net/bnxt/tf_core/tfp.c @@ -52,7 +52,7 @@ tfp_send_msg_direct(struct bnxt *bp, }
/** - * Allocates zero'ed memory from the heap. + * Allocates zeroed memory from the heap. * * Returns success or failure code. */ diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h index dd0a347058..5a99c7a06e 100644 --- a/drivers/net/bnxt/tf_core/tfp.h +++ b/drivers/net/bnxt/tf_core/tfp.h @@ -150,7 +150,7 @@ tfp_msg_hwrm_oem_cmd(struct tf *tfp, uint32_t max_flows);
/** - * Allocates zero'ed memory from the heap. + * Allocates zeroed memory from the heap. * * NOTE: Also performs virt2phy address conversion by default thus is * can be expensive to invoke. diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h index 9b5738afee..a5e1fffea1 100644 --- a/drivers/net/bonding/eth_bond_8023ad_private.h +++ b/drivers/net/bonding/eth_bond_8023ad_private.h @@ -20,7 +20,7 @@ /** Maximum number of LACP packets from one slave queued in TX ring. */ #define BOND_MODE_8023AX_SLAVE_TX_PKTS 1 /** - * Timeouts deffinitions (5.4.4 in 802.1AX documentation). + * Timeouts definitions (5.4.4 in 802.1AX documentation). */ #define BOND_8023AD_FAST_PERIODIC_MS 900 #define BOND_8023AD_SLOW_PERIODIC_MS 29000 diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h index 8b104b6391..9626b26d67 100644 --- a/drivers/net/bonding/eth_bond_private.h +++ b/drivers/net/bonding/eth_bond_private.h @@ -139,7 +139,7 @@ struct bond_dev_private {
uint16_t slave_count; /**< Number of bonded slaves */ struct bond_slave_details slaves[RTE_MAX_ETHPORTS]; - /**< Arary of bonded slaves details */ + /**< Array of bonded slaves details */
struct mode8023ad_private mode4; uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS]; diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c index ca50583d62..b3cddd8a20 100644 --- a/drivers/net/bonding/rte_eth_bond_8023ad.c +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c @@ -243,7 +243,7 @@ record_default(struct port *port) { /* Record default parameters for partner. Partner admin parameters * are not implemented so set them to arbitrary default (last known) and - * mark actor that parner is in defaulted state. */ + * mark actor that partner is in defaulted state. */ port->partner_state = STATE_LACP_ACTIVE; ACTOR_STATE_SET(port, DEFAULTED); } @@ -300,7 +300,7 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id, MODE4_DEBUG("LACP -> CURRENT\n"); BOND_PRINT_LACP(lacp); /* Update selected flag. If partner parameters are defaulted assume they - * are match. If not defaulted compare LACP actor with ports parner + * are matched. If not defaulted compare LACP actor with ports partner * params. */ if (!ACTOR_STATE(port, DEFAULTED) && (ACTOR_STATE(port, AGGREGATION) != PARTNER_STATE(port, AGGREGATION) @@ -399,16 +399,16 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id) PARTNER_STATE(port, LACP_ACTIVE);
uint8_t is_partner_fast, was_partner_fast; - /* No periodic is on BEGIN, LACP DISABLE or when both sides are pasive */ + /* No periodic is on BEGIN, LACP DISABLE or when both sides are passive */ if (SM_FLAG(port, BEGIN) || !SM_FLAG(port, LACP_ENABLED) || !active) { timer_cancel(&port->periodic_timer); timer_force_expired(&port->tx_machine_timer); SM_FLAG_CLR(port, PARTNER_SHORT_TIMEOUT);
MODE4_DEBUG("-> NO_PERIODIC ( %s%s%s)\n", - SM_FLAG(port, BEGIN) ? "begind " : "", + SM_FLAG(port, BEGIN) ? "begin " : "", SM_FLAG(port, LACP_ENABLED) ? "" : "LACP disabled ", - active ? "LACP active " : "LACP pasive "); + active ? "LACP active " : "LACP passive "); return; }
@@ -495,10 +495,10 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id) if ((ACTOR_STATE(port, DISTRIBUTING) || ACTOR_STATE(port, COLLECTING)) && !PARTNER_STATE(port, SYNCHRONIZATION)) { /* If in COLLECTING or DISTRIBUTING state and partner becomes out of - * sync transit to ATACHED state. */ + * sync transit to ATTACHED state. */ ACTOR_STATE_CLR(port, DISTRIBUTING); ACTOR_STATE_CLR(port, COLLECTING); - /* Clear actor sync to activate transit ATACHED in condition bellow */ + /* Clear actor sync to activate transit ATTACHED in condition below */ ACTOR_STATE_CLR(port, SYNCHRONIZATION); MODE4_DEBUG("Out of sync -> ATTACHED\n"); } @@ -696,7 +696,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id) /* Search for aggregator suitable for this port */ for (i = 0; i < slaves_count; ++i) { agg = &bond_mode_8023ad_ports[slaves[i]]; - /* Skip ports that are not aggreagators */ + /* Skip ports that are not aggregators */ if (agg->aggregator_port_id != slaves[i]) continue;
@@ -921,7 +921,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
SM_FLAG_SET(port, BEGIN);
- /* LACP is disabled on half duples or link is down */ + /* LACP is disabled on half duplex or link is down */ if (SM_FLAG(port, LACP_ENABLED)) { /* If port was enabled set it to BEGIN state */ SM_FLAG_CLR(port, LACP_ENABLED); @@ -1069,7 +1069,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, port->partner_state = STATE_LACP_ACTIVE | STATE_AGGREGATION; port->sm_flags = SM_FLAGS_BEGIN;
- /* use this port as agregator */ + /* use this port as aggregator */ port->aggregator_port_id = slave_id;
if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) { diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h index 11a71a55e5..7eb392f8c8 100644 --- a/drivers/net/bonding/rte_eth_bond_8023ad.h +++ b/drivers/net/bonding/rte_eth_bond_8023ad.h @@ -68,7 +68,7 @@ struct port_params { struct rte_ether_addr system; /**< System ID - Slave MAC address, same as bonding MAC address */ uint16_t key; - /**< Speed information (implementation dependednt) and duplex. */ + /**< Speed information (implementation dependent) and duplex. */ uint16_t port_priority; /**< Priority of this (unused in current implementation) */ uint16_t port_number; @@ -317,7 +317,7 @@ rte_eth_bond_8023ad_dedicated_queues_disable(uint16_t port_id); * @param port_id Bonding device id * * @return - * agregator mode on success, negative value otherwise + * aggregator mode on success, negative value otherwise */ int rte_eth_bond_8023ad_agg_selection_get(uint16_t port_id); diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h index 386e70c594..4e9aeda9bc 100644 --- a/drivers/net/bonding/rte_eth_bond_alb.h +++ b/drivers/net/bonding/rte_eth_bond_alb.h @@ -96,7 +96,7 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset, * @param internals Bonding data. * * @return - * Index of slawe on which packet should be sent. + * Index of slave on which packet should be sent. */ uint16_t bond_mode_alb_arp_upd(struct client_data *client_info, diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c index 84943cffe2..2d5cac6c51 100644 --- a/drivers/net/bonding/rte_eth_bond_api.c +++ b/drivers/net/bonding/rte_eth_bond_api.c @@ -375,7 +375,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals, * value. 
Thus, the new internal value of default Rx queue offloads * has to be masked by rx_queue_offload_capa to make sure that only * commonly supported offloads are preserved from both the previous - * value and the value being inhereted from the new slave device. + * value and the value being inherited from the new slave device. */ rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) & internals->rx_queue_offload_capa; @@ -413,7 +413,7 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals, * value. Thus, the new internal value of default Tx queue offloads * has to be masked by tx_queue_offload_capa to make sure that only * commonly supported offloads are preserved from both the previous - * value and the value being inhereted from the new slave device. + * value and the value being inherited from the new slave device. */ txconf_i->offloads = (txconf_i->offloads | txconf->offloads) & internals->tx_queue_offload_capa; diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h index c2a46ad7ec..0982158c62 100644 --- a/drivers/net/cnxk/cn10k_ethdev.h +++ b/drivers/net/cnxk/cn10k_ethdev.h @@ -53,7 +53,7 @@ struct cn10k_outb_priv_data { void *userdata; /* Rlen computation data */ struct cnxk_ipsec_outb_rlens rlens; - /* Back pinter to eth sec session */ + /* Back pointer to eth sec session */ struct cnxk_eth_sec_sess *eth_sec; /* SA index */ uint32_t sa_idx; diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h index 873e1871f9..f3a282f429 100644 --- a/drivers/net/cnxk/cn10k_tx.h +++ b/drivers/net/cnxk/cn10k_tx.h @@ -736,7 +736,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd, /* Retrieving the default desc values */ lmt[off] = cmd[2];
- /* Using compiler barier to avoid voilation of C + /* Using compiler barrier to avoid violation of C * aliasing rules. */ rte_compiler_barrier(); @@ -745,7 +745,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd, /* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp * should not be recorded, hence changing the alg type to * NIX_SENDMEMALG_SET and also changing send mem addr field to - * next 8 bytes as it corrpt the actual tx tstamp registered + * next 8 bytes as it corrupts the actual Tx tstamp registered * address. */ send_mem->w0.subdc = NIX_SUBDC_MEM; @@ -2254,7 +2254,7 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, }
if (flags & NIX_TX_OFFLOAD_TSTAMP_F) { - /* Tx ol_flag for timestam. */ + /* Tx ol_flag for timestamp. */ const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST, RTE_MBUF_F_TX_IEEE1588_TMST}; /* Set send mem alg to SUB. */ diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h index 435dde1317..070a7d9439 100644 --- a/drivers/net/cnxk/cn9k_tx.h +++ b/drivers/net/cnxk/cn9k_tx.h @@ -304,7 +304,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, /* Retrieving the default desc values */ cmd[off] = send_mem_desc[6];
- /* Using compiler barier to avoid voilation of C + /* Using compiler barrier to avoid violation of C * aliasing rules. */ rte_compiler_barrier(); @@ -313,7 +313,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, /* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp * should not be recorded, hence changing the alg type to * NIX_SENDMEMALG_SET and also changing send mem addr field to - * next 8 bytes as it corrpt the actual tx tstamp registered + * next 8 bytes as it corrupts the actual Tx tstamp registered * address. */ send_mem->w0.cn9k.alg = @@ -1531,7 +1531,7 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, }
if (flags & NIX_TX_OFFLOAD_TSTAMP_F) { - /* Tx ol_flag for timestam. */ + /* Tx ol_flag for timestamp. */ const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST, RTE_MBUF_F_TX_IEEE1588_TMST}; /* Set send mem alg to SUB. */ diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c index 139fea256c..359f9a30ae 100644 --- a/drivers/net/cnxk/cnxk_ptp.c +++ b/drivers/net/cnxk/cnxk_ptp.c @@ -12,7 +12,7 @@ cnxk_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *clock) /* This API returns the raw PTP HI clock value. Since LFs do not * have direct access to PTP registers and it requires mbox msg * to AF for this value. In fastpath reading this value for every - * packet (which involes mbox call) becomes very expensive, hence + * packet (which involves mbox call) becomes very expensive, hence * we should be able to derive PTP HI clock value from tsc by * using freq_mult and clk_delta calculated during configure stage. */ diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c index edcbba9d7c..6e460dfe2e 100644 --- a/drivers/net/cxgbe/cxgbe_flow.c +++ b/drivers/net/cxgbe/cxgbe_flow.c @@ -1378,7 +1378,7 @@ cxgbe_flow_validate(struct rte_eth_dev *dev, }
/* - * @ret : > 0 filter destroyed succsesfully + * @ret : > 0 filter destroyed successfully * < 0 error destroying filter * == 1 filter not active / not found */ diff --git a/drivers/net/cxgbe/cxgbevf_main.c b/drivers/net/cxgbe/cxgbevf_main.c index f639612ae4..d0c93f8ac3 100644 --- a/drivers/net/cxgbe/cxgbevf_main.c +++ b/drivers/net/cxgbe/cxgbevf_main.c @@ -44,7 +44,7 @@ static void size_nports_qsets(struct adapter *adapter) */ pmask_nports = hweight32(adapter->params.vfres.pmask); if (pmask_nports < adapter->params.nports) { - dev_warn(adapter->pdev_dev, "only using %d of %d provissioned" + dev_warn(adapter->pdev_dev, "only using %d of %d provisioned" " virtual interfaces; limited by Port Access Rights" " mask %#x\n", pmask_nports, adapter->params.nports, adapter->params.vfres.pmask); diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c index f623f3e684..1c76b8e4d0 100644 --- a/drivers/net/cxgbe/sge.c +++ b/drivers/net/cxgbe/sge.c @@ -211,7 +211,7 @@ static inline unsigned int fl_cap(const struct sge_fl *fl) * @fl: the Free List * * Tests specified Free List to see whether the number of buffers - * available to the hardware has falled below our "starvation" + * available to the hardware has fallen below our "starvation" * threshold. */ static inline bool fl_starving(const struct adapter *adapter, @@ -678,7 +678,7 @@ static void write_sgl(struct rte_mbuf *mbuf, struct sge_txq *q, * @q: the Tx queue * @n: number of new descriptors to give to HW * - * Ring the doorbel for a Tx queue. + * Ring the doorbell for a Tx queue. */ static inline void ring_tx_db(struct adapter *adap, struct sge_txq *q) { @@ -877,7 +877,7 @@ static inline void ship_tx_pkt_coalesce_wr(struct adapter *adap, }
/** - * should_tx_packet_coalesce - decides wether to coalesce an mbuf or not + * should_tx_packet_coalesce - decides whether to coalesce an mbuf or not * @txq: tx queue where the mbuf is sent * @mbuf: mbuf to be sent * @nflits: return value for number of flits needed @@ -1846,7 +1846,7 @@ int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq, * for its status page) along with the associated software * descriptor ring. The free list size needs to be a multiple * of the Egress Queue Unit and at least 2 Egress Units larger - * than the SGE's Egress Congrestion Threshold + * than the SGE's Egress Congestion Threshold * (fl_starve_thres - 1). */ if (fl->size < s->fl_starve_thres - 1 + 2 * 8) diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index e49f765434..2c2c4e4ebb 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -1030,7 +1030,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, QM_FQCTRL_CTXASTASHING | QM_FQCTRL_PREFERINCACHE; opts.fqd.context_a.stashing.exclusive = 0; - /* In muticore scenario stashing becomes a bottleneck on LS1046. + /* In multicore scenario stashing becomes a bottleneck on LS1046. * So do not enable stashing in this case */ if (dpaa_svr_family != SVR_LS1046A_FAMILY) @@ -1866,7 +1866,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->name = dpaa_device->name;
- /* save fman_if & cfg in the interface struture */ + /* save fman_if & cfg in the interface structure */ eth_dev->process_private = fman_intf; dpaa_intf->ifid = dev_id; dpaa_intf->cfg = cfg; @@ -2169,7 +2169,7 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv, if (dpaa_svr_family == SVR_LS1043A_FAMILY) dpaa_push_mode_max_queue = 0;
- /* if push mode queues to be enabled. Currenly we are allowing + /* if push mode queues to be enabled. Currently we are allowing * only one queue per thread. */ if (getenv("DPAA_PUSH_QUEUES_NUMBER")) { diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c index ffac6ce3e2..956fe946fa 100644 --- a/drivers/net/dpaa/dpaa_rxtx.c +++ b/drivers/net/dpaa/dpaa_rxtx.c @@ -600,8 +600,8 @@ void dpaa_rx_cb_prepare(struct qm_dqrr_entry *dq, void **bufs) void *ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dq->fd));
/* In case of LS1046, annotation stashing is disabled due to L2 cache - * being bottleneck in case of multicore scanario for this platform. - * So we prefetch the annoation beforehand, so that it is available + * being a bottleneck in case of multicore scenario for this platform. + * So we prefetch the annotation beforehand, so that it is available * in cache when accessed. */ rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF)); diff --git a/drivers/net/dpaa/fmlib/fm_ext.h b/drivers/net/dpaa/fmlib/fm_ext.h index 27c9fb471e..8e7153bdaf 100644 --- a/drivers/net/dpaa/fmlib/fm_ext.h +++ b/drivers/net/dpaa/fmlib/fm_ext.h @@ -176,7 +176,7 @@ typedef struct t_fm_prs_result { #define FM_FD_ERR_PRS_HDR_ERR 0x00000020 /**< Header error was identified during parsing */ #define FM_FD_ERR_BLOCK_LIMIT_EXCEEDED 0x00000008 - /**< Frame parsed beyind 256 first bytes */ + /**< Frame parsed beyond 256 first bytes */
#define FM_FD_TX_STATUS_ERR_MASK (FM_FD_ERR_UNSUPPORTED_FORMAT | \ FM_FD_ERR_LENGTH | \ diff --git a/drivers/net/dpaa/fmlib/fm_pcd_ext.h b/drivers/net/dpaa/fmlib/fm_pcd_ext.h index 8be3885fbc..3802b42916 100644 --- a/drivers/net/dpaa/fmlib/fm_pcd_ext.h +++ b/drivers/net/dpaa/fmlib/fm_pcd_ext.h @@ -276,7 +276,7 @@ typedef struct ioc_fm_pcd_counters_params_t { } ioc_fm_pcd_counters_params_t;
/* - * @Description structure for FM exception definitios + * @Description structure for FM exception definitions */ typedef struct ioc_fm_pcd_exception_params_t { ioc_fm_pcd_exceptions exception; /**< The requested exception */ @@ -883,7 +883,7 @@ typedef enum ioc_fm_pcd_manip_hdr_rmv_specific_l2 { e_IOC_FM_PCD_MANIP_HDR_RMV_ETHERNET, /**< Ethernet/802.3 MAC */ e_IOC_FM_PCD_MANIP_HDR_RMV_STACKED_QTAGS, /**< stacked QTags */ e_IOC_FM_PCD_MANIP_HDR_RMV_ETHERNET_AND_MPLS, - /**< MPLS and Ethernet/802.3 MAC header unitl the header + /**< MPLS and Ethernet/802.3 MAC header until the header * which follows the MPLS header */ e_IOC_FM_PCD_MANIP_HDR_RMV_MPLS @@ -3293,7 +3293,7 @@ typedef struct ioc_fm_pcd_cc_tbl_get_stats_t { /* * @Function fm_pcd_net_env_characteristics_delete * - * @Description Deletes a set of Network Environment Charecteristics. + * @Description Deletes a set of Network Environment Characteristics. * * @Param[in] ioc_fm_obj_t The id of a Network Environment object. * @@ -3493,7 +3493,7 @@ typedef struct ioc_fm_pcd_cc_tbl_get_stats_t { * @Return 0 on success; Error code otherwise. * * @Cautions Allowed only following fm_pcd_match_table_set() not only of - * the relevnt node but also the node that points to this node. + * the relevant node but also the node that points to this node. 
*/ #define FM_PCD_IOC_MATCH_TABLE_MODIFY_KEY_AND_NEXT_ENGINE \ _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(35), \ diff --git a/drivers/net/dpaa/fmlib/fm_port_ext.h b/drivers/net/dpaa/fmlib/fm_port_ext.h index 6f5479fbe1..bb2e00222e 100644 --- a/drivers/net/dpaa/fmlib/fm_port_ext.h +++ b/drivers/net/dpaa/fmlib/fm_port_ext.h @@ -498,7 +498,7 @@ typedef struct ioc_fm_port_pcd_prs_params_t { /**< Number of bytes from beginning of packet to start parsing */ ioc_net_header_type first_prs_hdr; - /**< The type of the first header axpected at 'parsing_offset' + /**< The type of the first header expected at 'parsing_offset' */ bool include_in_prs_statistics; /**< TRUE to include this port in the parser statistics */ @@ -524,7 +524,7 @@ typedef struct ioc_fm_port_pcd_prs_params_t { } ioc_fm_port_pcd_prs_params_t;
/* - * @Description A structure for defining coarse alassification parameters + * @Description A structure for defining coarse classification parameters * (Must match t_fm_portPcdCcParams defined in fm_port_ext.h) */ typedef struct ioc_fm_port_pcd_cc_params_t { @@ -602,7 +602,7 @@ typedef struct ioc_fm_pcd_prs_start_t { /**< Number of bytes from beginning of packet to start parsing */ ioc_net_header_type first_prs_hdr; - /**< The type of the first header axpected at 'parsing_offset' + /**< The type of the first header expected at 'parsing_offset' */ } ioc_fm_pcd_prs_start_t;
@@ -1356,7 +1356,7 @@ typedef uint32_t fm_port_frame_err_select_t; #define FM_PORT_FRM_ERR_PRS_HDR_ERR FM_FD_ERR_PRS_HDR_ERR /**< Header error was identified during parsing */ #define FM_PORT_FRM_ERR_BLOCK_LIMIT_EXCEEDED FM_FD_ERR_BLOCK_LIMIT_EXCEEDED - /**< Frame parsed beyind 256 first bytes */ + /**< Frame parsed beyond 256 first bytes */ #define FM_PORT_FRM_ERR_PROCESS_TIMEOUT 0x00000001 /**< FPM Frame Processing Timeout Exceeded */ /* @} */ @@ -1390,7 +1390,7 @@ typedef void (t_fm_port_exception_callback) (t_handle h_app, * @Param[in] length length of received data * @Param[in] status receive status and errors * @Param[in] position position of buffer in frame - * @Param[in] h_buf_context A handle of the user acossiated with this buffer + * @Param[in] h_buf_context A handle of the user associated with this buffer * * @Retval e_RX_STORE_RESPONSE_CONTINUE * order the driver to continue Rx operation for all ready data. @@ -1414,7 +1414,7 @@ typedef e_rx_store_response(t_fm_port_im_rx_store_callback) (t_handle h_app, * @Param[in] p_data A pointer to data received * @Param[in] status transmit status and errors * @Param[in] last_buffer is last buffer in frame - * @Param[in] h_buf_context A handle of the user acossiated with this buffer + * @Param[in] h_buf_context A handle of the user associated with this buffer */ typedef void (t_fm_port_im_tx_conf_callback) (t_handle h_app, uint8_t *p_data, @@ -2585,7 +2585,7 @@ typedef struct t_fm_port_congestion_grps { bool pfc_prio_enable[FM_NUM_CONG_GRPS][FM_MAX_PFC_PRIO]; /**< a matrix that represents the map between the CG ids * defined in 'congestion_grps_to_consider' to the - * priorties mapping array. + * priorities mapping array. */ } t_fm_port_congestion_grps;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index a3706439d5..2b04f14168 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -143,7 +143,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask) PMD_INIT_FUNC_TRACE();
if (mask & RTE_ETH_VLAN_FILTER_MASK) { - /* VLAN Filter not avaialble */ + /* VLAN Filter not available */ if (!priv->max_vlan_filters) { DPAA2_PMD_INFO("VLAN filter not available"); return -ENOTSUP; @@ -916,7 +916,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev, cong_notif_cfg.units = DPNI_CONGESTION_UNIT_FRAMES; cong_notif_cfg.threshold_entry = nb_tx_desc; /* Notify that the queue is not congested when the data in - * the queue is below this thershold.(90% of value) + * the queue is below this threshold.(90% of value) */ cong_notif_cfg.threshold_exit = (nb_tx_desc * 9) / 10; cong_notif_cfg.message_ctx = 0; @@ -1058,7 +1058,7 @@ dpaa2_supported_ptypes_get(struct rte_eth_dev *dev) * Dpaa2 link Interrupt handler * * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -2236,7 +2236,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev, ocfg.oa = 1; /* Late arrival window size disabled */ ocfg.olws = 0; - /* ORL resource exhaustaion advance NESN disabled */ + /* ORL resource exhaustion advance NESN disabled */ ocfg.oeane = 0; /* Loose ordering enabled */ ocfg.oloe = 1; @@ -2720,13 +2720,13 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) } eth_dev->tx_pkt_burst = dpaa2_dev_tx;
- /*Init fields w.r.t. classficaition*/ + /* Init fields w.r.t. classification */ memset(&priv->extract.qos_key_extract, 0, sizeof(struct dpaa2_key_extract)); priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64); if (!priv->extract.qos_extract_param) { DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow " - " classificaiton ", ret); + " classification ", ret); goto init_err; } priv->extract.qos_key_extract.key_info.ipv4_src_offset = @@ -2744,7 +2744,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) priv->extract.tc_extract_param[i] = (size_t)rte_malloc(NULL, 256, 64); if (!priv->extract.tc_extract_param[i]) { - DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classificaiton", + DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classification", ret); goto init_err; } diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index c5e9267bf0..e27239e256 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -117,7 +117,7 @@ extern int dpaa2_timestamp_dynfield_offset;
#define DPAA2_FLOW_MAX_KEY_SIZE 16
-/*Externaly defined*/ +/* Externally defined */ extern const struct rte_flow_ops dpaa2_flow_ops;
extern const struct rte_tm_ops dpaa2_tm_ops; diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 84fe37a7c0..bf55eb70a3 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -1451,7 +1451,7 @@ dpaa2_configure_flow_generic_ip( flow, pattern, &local_cfg, device_configured, group); if (ret) { - DPAA2_PMD_ERR("IP discrimation failed!"); + DPAA2_PMD_ERR("IP discrimination failed!"); return -1; }
@@ -3349,7 +3349,7 @@ dpaa2_flow_verify_action( (actions[j].conf); if (rss_conf->queue_num > priv->dist_queues) { DPAA2_PMD_ERR( - "RSS number exceeds the distrbution size"); + "RSS number exceeds the distribution size"); return -ENOTSUP; } for (i = 0; i < (int)rss_conf->queue_num; i++) { @@ -3596,7 +3596,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, qos_cfg.keep_entries = true; qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param; - /* QoS table is effecitive for multiple TCs.*/ + /* QoS table is effective for multiple TCs. */ if (priv->num_rx_tc > 1) { ret = dpni_set_qos_table(dpni, CMD_PRI_LOW, priv->token, &qos_cfg); @@ -3655,7 +3655,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, 0, 0); if (ret < 0) { DPAA2_PMD_ERR( - "Error in addnig entry to QoS table(%d)", ret); + "Error in adding entry to QoS table(%d)", ret); return ret; } } diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c index d347f4df51..cd2f7b8aa5 100644 --- a/drivers/net/dpaa2/dpaa2_mux.c +++ b/drivers/net/dpaa2/dpaa2_mux.c @@ -95,7 +95,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id, mask_iova = (void *)((size_t)key_iova + DIST_PARAM_IOVA_SIZE);
/* Currently taking only IP protocol as an extract type. - * This can be exended to other fields using pattern->type. + * This can be extended to other fields using pattern->type. */ memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c index c65589a5f3..90b971b4bf 100644 --- a/drivers/net/dpaa2/dpaa2_rxtx.c +++ b/drivers/net/dpaa2/dpaa2_rxtx.c @@ -714,7 +714,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_prefetch0((void *)(size_t)(dq_storage + 1));
/* Prepare next pull descriptor. This will give space for the - * prefething done on DQRR entries + * prefetching done on DQRR entries */ q_storage->toggle ^= 1; dq_storage1 = q_storage->dq_storage[q_storage->toggle]; @@ -1510,7 +1510,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if (*dpaa2_seqn(*bufs)) { /* Use only queue 0 for Tx in case of atomic/ * ordered packets as packets can get unordered - * when being tranmitted out from the interface + * when being transmitted out from the interface */ dpaa2_set_enqueue_descriptor(order_sendq, (*bufs), @@ -1738,7 +1738,7 @@ dpaa2_dev_loopback_rx(void *queue, rte_prefetch0((void *)(size_t)(dq_storage + 1));
/* Prepare next pull descriptor. This will give space for the - * prefething done on DQRR entries + * prefetching done on DQRR entries */ q_storage->toggle ^= 1; dq_storage1 = q_storage->dq_storage[q_storage->toggle]; diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h index 469ab9b3d4..3b9bffeed7 100644 --- a/drivers/net/dpaa2/mc/fsl_dpni.h +++ b/drivers/net/dpaa2/mc/fsl_dpni.h @@ -93,7 +93,7 @@ struct fsl_mc_io; */ #define DPNI_OPT_OPR_PER_TC 0x000080 /** - * All Tx traffic classes will use a single sender (ignore num_queueus for tx) + * All Tx traffic classes will use a single sender (ignore num_queues for tx) */ #define DPNI_OPT_SINGLE_SENDER 0x000100 /** @@ -617,7 +617,7 @@ int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io, * @page_3.ceetm_reject_bytes: Cumulative count of the number of bytes in all * frames whose enqueue was rejected * @page_3.ceetm_reject_frames: Cumulative count of all frame enqueues rejected - * @page_4: congestion point drops for seleted TC + * @page_4: congestion point drops for selected TC * @page_4.cgr_reject_frames: number of rejected frames due to congestion point * @page_4.cgr_reject_bytes: number of rejected bytes due to congestion point * @page_5: policer statistics per TC @@ -1417,7 +1417,7 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io, * dpkg_prepare_key_cfg() * @discard_on_miss: Set to '1' to discard frames in case of no match (miss); * '0' to use the 'default_tc' in such cases - * @keep_entries: if set to one will not delele existing table entries. This + * @keep_entries: if set to one will not delete existing table entries. This * option will work properly only for dpni objects created with * DPNI_OPT_HAS_KEY_MASKING option. All previous QoS entries must * be compatible with new key composition rule. @@ -1516,7 +1516,7 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io, * @flow_id: Identifies the Rx queue used for matching traffic. Supported * values are in range 0 to num_queue-1. 
* @redirect_obj_token: token that identifies the object where frame is - * redirected when this rule is hit. This paraneter is used only when one of the + * redirected when this rule is hit. This parameter is used only when one of the * flags DPNI_FS_OPT_REDIRECT_TO_DPNI_RX or DPNI_FS_OPT_REDIRECT_TO_DPNI_TX is * set. * The token is obtained using dpni_open() API call. The object must stay @@ -1797,7 +1797,7 @@ int dpni_load_sw_sequence(struct fsl_mc_io *mc_io, struct dpni_load_ss_cfg *cfg);
/** - * dpni_eanble_sw_sequence() - Enables a software sequence in the parser + * dpni_enable_sw_sequence() - Enables a software sequence in the parser * profile * corresponding to the ingress or egress of the DPNI. * @mc_io: Pointer to MC portal's I/O object diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h index a548ae2ccb..718a9746ed 100644 --- a/drivers/net/e1000/e1000_ethdev.h +++ b/drivers/net/e1000/e1000_ethdev.h @@ -103,7 +103,7 @@ * Maximum number of Ring Descriptors. * * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring - * desscriptors should meet the following condition: + * descriptors should meet the following condition: * (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0 */ #define E1000_MIN_RING_DESC 32 @@ -252,7 +252,7 @@ struct igb_rte_flow_rss_conf { };
/* - * Structure to store filters'info. + * Structure to store filters' info. */ struct e1000_filter_info { uint8_t ethertype_mask; /* Bit mask for every used ethertype filter */ diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index 31c4870086..794496abfc 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -1058,8 +1058,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
/* * Starting with 631xESB hw supports 2 TX/RX queues per port. - * Unfortunatelly, all these nics have just one TX context. - * So we have few choises for TX: + * Unfortunately, all these nics have just one TX context. + * So we have few choices for TX: * - Use just one TX queue. * - Allow cksum offload only for one TX queue. * - Don't allow TX cksum offload at all. @@ -1068,7 +1068,7 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) * (Multiple Receive Queues are mutually exclusive with UDP * fragmentation and are not supported when a legacy receive * descriptor format is used). - * Which means separate RX routinies - as legacy nics (82540, 82545) + * Which means separate RX routines - as legacy nics (82540, 82545) * don't support extended RXD. * To avoid it we support just one RX queue for now (no RSS). */ @@ -1558,7 +1558,7 @@ eth_em_interrupt_get_status(struct rte_eth_dev *dev) }
/* - * It executes link_update after knowing an interrupt is prsent. + * It executes link_update after knowing an interrupt is present. * * @param dev * Pointer to struct rte_eth_dev. @@ -1616,7 +1616,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev, * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c index 39262502bb..cea5b490ba 100644 --- a/drivers/net/e1000/em_rxtx.c +++ b/drivers/net/e1000/em_rxtx.c @@ -141,7 +141,7 @@ union em_vlan_macip { struct em_ctx_info { uint64_t flags; /**< ol_flags related to context build. */ uint32_t cmp_mask; /**< compare mask */ - union em_vlan_macip hdrlen; /**< L2 and L3 header lenghts */ + union em_vlan_macip hdrlen; /**< L2 and L3 header lengths */ };
/** @@ -829,7 +829,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); @@ -1074,7 +1074,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c index 3ee16c15fe..a9c18b27e8 100644 --- a/drivers/net/e1000/igb_ethdev.c +++ b/drivers/net/e1000/igb_ethdev.c @@ -1149,7 +1149,7 @@ eth_igb_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
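The "full ring situation" comment fixed above describes a real invariant of the e1000 receive path: the tail register (RDT) is only ever written to one slot behind the last processed descriptor, so it can never equal the head register (RDH) and make the hardware see a falsely full ring. A minimal stand-alone sketch of that hold-and-update pattern (`rdt_update`, `NB_DESC`, and `RX_FREE_THRESH` are illustrative names, not the driver's actual code):

```c
#include <assert.h>
#include <stdint.h>

#define NB_DESC 512u		/* illustrative ring size */
#define RX_FREE_THRESH 32u	/* illustrative free threshold */

/*
 * Model of the tail-pointer update the comment describes: processed
 * descriptors accumulate in *nb_hold, and only once the threshold is
 * crossed is the tail written, to (last processed index - 1) modulo the
 * ring size, so RDT can never catch up with RDH.
 */
uint16_t rdt_update(uint16_t rx_tail, uint16_t *nb_hold, uint16_t processed)
{
	*nb_hold = (uint16_t)(*nb_hold + processed);
	if (*nb_hold > RX_FREE_THRESH) {
		uint16_t rdt = rx_tail ? (uint16_t)(rx_tail - 1)
				       : (uint16_t)(NB_DESC - 1);
		*nb_hold = 0;	/* hardware now owns the refilled slots */
		return rdt;	/* value that would be written to RDT */
	}
	return UINT16_MAX;	/* no register write this burst */
}
```

Both `eth_em_recv_pkts` and the scattered variant follow this shape; the sketch captures only the index arithmetic, not the descriptor refill.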
- /* multipe queue mode checking */ + /* multiple queue mode checking */ ret = igb_check_mq_mode(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "igb_check_mq_mode fails with %d.", @@ -1265,7 +1265,7 @@ eth_igb_start(struct rte_eth_dev *dev) } }
- /* confiugre msix for rx interrupt */ + /* configure MSI-X for Rx interrupt */ eth_igb_configure_msix_intr(dev);
/* Configure for OS presence */ @@ -2819,7 +2819,7 @@ eth_igb_interrupt_get_status(struct rte_eth_dev *dev) }
/* - * It executes link_update after knowing an interrupt is prsent. + * It executes link_update after knowing an interrupt is present. * * @param dev * Pointer to struct rte_eth_dev. @@ -2889,7 +2889,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev, * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -3787,7 +3787,7 @@ igb_inject_2uple_filter(struct rte_eth_dev *dev, * * @param * dev: Pointer to struct rte_eth_dev. - * ntuple_filter: ponter to the filter that will be added. + * ntuple_filter: pointer to the filter that will be added. * * @return * - On success, zero. @@ -3868,7 +3868,7 @@ igb_delete_2tuple_filter(struct rte_eth_dev *dev, * * @param * dev: Pointer to struct rte_eth_dev. - * ntuple_filter: ponter to the filter that will be removed. + * ntuple_filter: pointer to the filter that will be removed. * * @return * - On success, zero. @@ -4226,7 +4226,7 @@ igb_inject_5tuple_filter_82576(struct rte_eth_dev *dev, * * @param * dev: Pointer to struct rte_eth_dev. - * ntuple_filter: ponter to the filter that will be added. + * ntuple_filter: pointer to the filter that will be added. * * @return * - On success, zero. @@ -4313,7 +4313,7 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev, * * @param * dev: Pointer to struct rte_eth_dev. - * ntuple_filter: ponter to the filter that will be removed. + * ntuple_filter: pointer to the filter that will be removed. * * @return * - On success, zero. @@ -4831,7 +4831,7 @@ igb_timesync_disable(struct rte_eth_dev *dev) /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */ E1000_WRITE_REG(hw, E1000_ETQF(E1000_ETQF_FILTER_1588), 0);
- /* Stop incrementating the System Time registers. */ + /* Stop incrementing the System Time registers. */ E1000_WRITE_REG(hw, E1000_TIMINCA, 0);
return 0; diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c index e72376f69c..e46697b6a1 100644 --- a/drivers/net/e1000/igb_flow.c +++ b/drivers/net/e1000/igb_flow.c @@ -57,7 +57,7 @@ struct igb_flex_filter_list igb_filter_flex_list; struct igb_rss_filter_list igb_filter_rss_list;
/** - * Please aware there's an asumption for all the parsers. + * Please be aware there's an assumption for all the parsers. * rte_flow_item is using big endian, rte_flow_attr and * rte_flow_action are using CPU order. * Because the pattern is used to describe the packets, @@ -1608,7 +1608,7 @@ igb_flow_create(struct rte_eth_dev *dev,
/** * Check if the flow rule is supported by igb. - * It only checkes the format. Don't guarantee the rule can be programmed into + * It only checks the format. Don't guarantee the rule can be programmed into * the HW. Because there can be no enough room for the rule. */ static int diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c index fe355ef6b3..3f3fd0d61e 100644 --- a/drivers/net/e1000/igb_pf.c +++ b/drivers/net/e1000/igb_pf.c @@ -155,7 +155,7 @@ int igb_pf_host_configure(struct rte_eth_dev *eth_dev) else E1000_WRITE_REG(hw, E1000_DTXSWC, E1000_DTXSWC_VMDQ_LOOPBACK_EN);
- /* clear VMDq map to perment rar 0 */ + /* clear VMDq map to permanent rar 0 */ rah = E1000_READ_REG(hw, E1000_RAH(0)); rah &= ~ (0xFF << E1000_RAH_POOLSEL_SHIFT); E1000_WRITE_REG(hw, E1000_RAH(0), rah); diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c index 4a311a7b18..f32dee46df 100644 --- a/drivers/net/e1000/igb_rxtx.c +++ b/drivers/net/e1000/igb_rxtx.c @@ -150,7 +150,7 @@ union igb_tx_offload { (TX_MACIP_LEN_CMP_MASK | TX_TCP_LEN_CMP_MASK | TX_TSO_MSS_CMP_MASK)
/** - * Strucutre to check if new context need be built + * Structure to check if new context need be built */ struct igb_advctx_info { uint64_t flags; /**< ol_flags related to context build. */ @@ -967,7 +967,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); @@ -1229,7 +1229,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); @@ -1252,7 +1252,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * Maximum number of Ring Descriptors. * * Since RDLEN/TDLEN should be multiple of 128bytes, the number of ring - * desscriptors should meet the following condition: + * descriptors should meet the following condition: * (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0 */
@@ -1350,7 +1350,7 @@ igb_tx_done_cleanup(struct igb_tx_queue *txq, uint32_t free_cnt) sw_ring[tx_id].last_id = tx_id; }
- /* Move to next segemnt. */ + /* Move to next segment. */ tx_id = sw_ring[tx_id].next_id;
} while (tx_id != tx_next); @@ -1383,7 +1383,7 @@ igb_tx_done_cleanup(struct igb_tx_queue *txq, uint32_t free_cnt)
/* Walk the list and find the next mbuf, if any. */ do { - /* Move to next segemnt. */ + /* Move to next segment. */ tx_id = sw_ring[tx_id].next_id;
if (sw_ring[tx_id].mbuf) @@ -2146,7 +2146,7 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
igb_rss_disable(dev);
- /* RCTL: eanble VLAN filter */ + /* RCTL: enable VLAN filter */ rctl = E1000_READ_REG(hw, E1000_RCTL); rctl |= E1000_RCTL_VFE; E1000_WRITE_REG(hw, E1000_RCTL, rctl); diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 634c97acf6..dce26cfa48 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -1408,7 +1408,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) ++rxq->rx_stats.refill_partial; }
- /* When we submitted free recources to device... */ + /* When we submitted free resources to device... */ if (likely(i > 0)) { /* ...let HW know that it can fill buffers with data. */ ena_com_write_sq_doorbell(rxq->ena_com_io_sq); diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 865e1241e0..f99e4f3984 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -42,7 +42,7 @@
/* While processing submitted and completed descriptors (rx and tx path * respectively) in a loop it is desired to: - * - perform batch submissions while populating sumbissmion queue + * - perform batch submissions while populating submission queue * - avoid blocking transmission of other packets during cleanup phase * Hence the utilization ratio of 1/8 of a queue size or max value if the size * of the ring is very big - like 8k Rx rings. diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h index a300c6f8bc..c9400957f8 100644 --- a/drivers/net/enetfec/enet_regs.h +++ b/drivers/net/enetfec/enet_regs.h @@ -12,7 +12,7 @@ #define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */ #define RX_BD_SH ((ushort)0x0008) /* Reserved */ #define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */ -#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length voilation */ +#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */ #define RX_BD_FIRST ((ushort)0x0400) /* Reserved */ #define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */ #define RX_BD_INT 0x00800000 diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index 33147169ba..cf51793cfe 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -405,7 +405,7 @@ enic_copy_item_ipv4_v1(struct copy_item_args *arg) return ENOTSUP; }
- /* check that the suppied mask exactly matches capabilty */ + /* check that the supplied mask exactly matches capability */ if (!mask_exact_match((const uint8_t *)&supported_mask, (const uint8_t *)item->mask, sizeof(*mask))) { ENICPMD_LOG(ERR, "IPv4 exact match mask"); @@ -443,7 +443,7 @@ enic_copy_item_udp_v1(struct copy_item_args *arg) return ENOTSUP; }
- /* check that the suppied mask exactly matches capabilty */ + /* check that the supplied mask exactly matches capability */ if (!mask_exact_match((const uint8_t *)&supported_mask, (const uint8_t *)item->mask, sizeof(*mask))) { ENICPMD_LOG(ERR, "UDP exact match mask"); @@ -482,7 +482,7 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg) return ENOTSUP; }
- /* check that the suppied mask exactly matches capabilty */ + /* check that the supplied mask exactly matches capability */ if (!mask_exact_match((const uint8_t *)&supported_mask, (const uint8_t *)item->mask, sizeof(*mask))) { ENICPMD_LOG(ERR, "TCP exact match mask"); @@ -1044,14 +1044,14 @@ fixup_l5_layer(struct enic *enic, struct filter_generic_1 *gp, }
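The three corrected comments above all state the same rule: the version-1 enic filter path rejects a pattern unless the supplied mask is byte-for-byte identical to the capability mask the NIC supports, with no subset matching. An illustrative re-implementation of that check (not the enic PMD's actual `mask_exact_match`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Returns 1 only when every byte of the application's mask equals the
 * corresponding byte of the mask the hardware can match on; any
 * narrower or wider mask is rejected, as the fixed comments describe.
 */
int mask_matches_capability(const uint8_t *supported, const uint8_t *supplied,
			    size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		if (supported[i] != supplied[i])
			return 0;
	return 1;
}
```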
/** - * Build the intenal enic filter structure from the provided pattern. The + * Build the internal enic filter structure from the provided pattern. The * pattern is validated as the items are copied. * * @param pattern[in] * @param items_info[in] * Info about this NICs item support, like valid previous items. * @param enic_filter[out] - * NIC specfilc filters derived from the pattern. + * NIC specific filters derived from the pattern. * @param error[out] */ static int @@ -1123,12 +1123,12 @@ enic_copy_filter(const struct rte_flow_item pattern[], }
/** - * Build the intenal version 1 NIC action structure from the provided pattern. + * Build the internal version 1 NIC action structure from the provided pattern. * The pattern is validated as the items are copied. * * @param actions[in] * @param enic_action[out] - * NIC specfilc actions derived from the actions. + * NIC specific actions derived from the actions. * @param error[out] */ static int @@ -1170,12 +1170,12 @@ enic_copy_action_v1(__rte_unused struct enic *enic, }
/** - * Build the intenal version 2 NIC action structure from the provided pattern. + * Build the internal version 2 NIC action structure from the provided pattern. * The pattern is validated as the items are copied. * * @param actions[in] * @param enic_action[out] - * NIC specfilc actions derived from the actions. + * NIC specific actions derived from the actions. * @param error[out] */ static int diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c index ae43f36bc0..bef842d460 100644 --- a/drivers/net/enic/enic_fm_flow.c +++ b/drivers/net/enic/enic_fm_flow.c @@ -721,7 +721,7 @@ enic_fm_copy_item_gtp(struct copy_item_args *arg) }
/* NIC does not support GTP tunnels. No Items are allowed after this. - * This prevents the specificaiton of further items. + * This prevents the specification of further items. */ arg->header_level = 0;
@@ -733,7 +733,7 @@ enic_fm_copy_item_gtp(struct copy_item_args *arg)
/* * Use the raw L4 buffer to match GTP as fm_header_set does not have - * GTP header. UDP dst port must be specifiec. Using the raw buffer + * GTP header. UDP dst port must be specified. Using the raw buffer * does not affect such UDP item, since we skip UDP in the raw buffer. */ fm_data->fk_header_select |= FKH_L4RAW; @@ -1846,7 +1846,7 @@ enic_fm_dump_tcam_actions(const struct fm_action *fm_action) /* Remove trailing comma */ if (buf[0]) *(bp - 1) = '\0'; - ENICPMD_LOG(DEBUG, " Acions: %s", buf); + ENICPMD_LOG(DEBUG, " Actions: %s", buf); }
static int @@ -2364,7 +2364,7 @@ enic_action_handle_get(struct enic_flowman *fm, struct fm_action *action_in, if (ret < 0 && ret != -ENOENT) return rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "enic: rte_hash_lookup(aciton)"); + NULL, "enic: rte_hash_lookup(action)");
if (ret == -ENOENT) { /* Allocate a new action on the NIC. */ @@ -2435,7 +2435,7 @@ __enic_fm_flow_add_entry(struct enic_flowman *fm,
ENICPMD_FUNC_TRACE();
- /* Get or create an aciton handle. */ + /* Get or create an action handle. */ ret = enic_action_handle_get(fm, action_in, error, &ah); if (ret) return ret; diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c index 7f84b5f935..97d97ea793 100644 --- a/drivers/net/enic/enic_main.c +++ b/drivers/net/enic/enic_main.c @@ -1137,7 +1137,7 @@ int enic_disable(struct enic *enic) }
/* If we were using interrupts, set the interrupt vector to -1 - * to disable interrupts. We are not disabling link notifcations, + * to disable interrupts. We are not disabling link notifications, * though, as we want the polling of link status to continue working. */ if (enic->rte_dev->data->dev_conf.intr_conf.lsc) diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c index c44715bfd0..33e96b480e 100644 --- a/drivers/net/enic/enic_rxtx.c +++ b/drivers/net/enic/enic_rxtx.c @@ -653,7 +653,7 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts, * The app should not send oversized * packets. tx_pkt_prepare includes a check as * well. But some apps ignore the device max size and - * tx_pkt_prepare. Oversized packets cause WQ errrors + * tx_pkt_prepare. Oversized packets cause WQ errors * and the NIC ends up disabling the whole WQ. So * truncate packets.. */ diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h index 7cfa29faa8..17a7056c45 100644 --- a/drivers/net/fm10k/fm10k.h +++ b/drivers/net/fm10k/fm10k.h @@ -44,7 +44,7 @@ #define FM10K_TX_MAX_MTU_SEG UINT8_MAX
/* - * byte aligment for HW RX data buffer + * byte alignment for HW RX data buffer * Datasheet requires RX buffer addresses shall either be 512-byte aligned or * be 8-byte aligned but without crossing host memory pages (4KB alignment * boundaries). Satisfy first option. diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c index 43e1d13431..8bbd8b445d 100644 --- a/drivers/net/fm10k/fm10k_ethdev.c +++ b/drivers/net/fm10k/fm10k_ethdev.c @@ -290,7 +290,7 @@ rx_queue_free(struct fm10k_rx_queue *q) }
/* - * disable RX queue, wait unitl HW finished necessary flush operation + * disable RX queue, wait until HW finished necessary flush operation */ static inline int rx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) @@ -379,7 +379,7 @@ tx_queue_free(struct fm10k_tx_queue *q) }
/* - * disable TX queue, wait unitl HW finished necessary flush operation + * disable TX queue, wait until HW finished necessary flush operation */ static inline int tx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) @@ -453,7 +453,7 @@ fm10k_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- /* multipe queue mode checking */ + /* multiple queue mode checking */ ret = fm10k_check_mq_mode(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "fm10k_check_mq_mode fails with %d.", @@ -2553,7 +2553,7 @@ fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -2676,7 +2676,7 @@ fm10k_dev_interrupt_handler_pf(void *param) * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -3034,7 +3034,7 @@ fm10k_params_init(struct rte_eth_dev *dev) struct fm10k_dev_info *info = FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
- /* Inialize bus info. Normally we would call fm10k_get_bus_info(), but + /* Initialize bus info. Normally we would call fm10k_get_bus_info(), but * there is no way to get link status without reading BAR4. Until this * works, assume we have maximum bandwidth. * @todo - fix bus info diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c index 1269250e23..10ce5a7582 100644 --- a/drivers/net/fm10k/fm10k_rxtx_vec.c +++ b/drivers/net/fm10k/fm10k_rxtx_vec.c @@ -212,7 +212,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev) struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
#ifndef RTE_FM10K_RX_OLFLAGS_ENABLE - /* whithout rx ol_flags, no VP flag report */ + /* without rx ol_flags, no VP flag report */ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) return -1; #endif @@ -239,7 +239,7 @@ fm10k_rxq_vec_setup(struct fm10k_rx_queue *rxq) struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
mb_def.nb_segs = 1; - /* data_off will be ajusted after new mbuf allocated for 512-byte + /* data_off will be adjusted after new mbuf allocated for 512-byte * alignment. */ mb_def.data_off = RTE_PKTMBUF_HEADROOM; @@ -410,7 +410,7 @@ fm10k_recv_raw_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, if (!(rxdp->d.staterr & FM10K_RXD_STATUS_DD)) return 0;
- /* Vecotr RX will process 4 packets at a time, strip the unaligned + /* Vector RX will process 4 packets at a time, strip the unaligned * tails in case it's not multiple of 4. */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_FM10K_DESCS_PER_LOOP); @@ -481,7 +481,7 @@ fm10k_recv_raw_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, _mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
#if defined(RTE_ARCH_X86_64) - /* B.1 load 2 64 bit mbuf poitns */ + /* B.1 load 2 64 bit mbuf pointers */ mbp2 = _mm_loadu_si128((__m128i *)&mbufp[pos+2]); #endif
@@ -573,7 +573,7 @@ fm10k_recv_raw_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
fm10k_desc_to_pktype_v(descs0, &rx_pkts[pos]);
- /* C.4 calc avaialbe number of desc */ + /* C.4 calc available number of desc */ var = __builtin_popcountll(_mm_cvtsi128_si64(staterr)); nb_pkts_recd += var; if (likely(var != RTE_FM10K_DESCS_PER_LOOP)) diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c index 1853511c3b..e8d9aaba84 100644 --- a/drivers/net/hinic/hinic_pmd_ethdev.c +++ b/drivers/net/hinic/hinic_pmd_ethdev.c @@ -255,7 +255,7 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask); * Interrupt handler triggered by NIC for handling * specific event. * - * @param: The address of parameter (struct rte_eth_dev *) regsitered before. + * @param: The address of parameter (struct rte_eth_dev *) registered before. */ static void hinic_dev_interrupt_handler(void *param) { @@ -336,7 +336,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev) return err; }
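The vector Rx comments fixed above note that the SSE path processes four descriptors per loop, so the requested burst is floored to a multiple of four before the loop starts. That is the usual power-of-two `RTE_ALIGN_FLOOR` bit trick, sketched here for clarity (names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define DESCS_PER_LOOP 4u	/* descriptors handled per vector iteration */

/*
 * Floor nb_pkts to a multiple of DESCS_PER_LOOP by clearing the low
 * bits, valid because DESCS_PER_LOOP is a power of two.
 */
uint16_t burst_align_floor(uint16_t nb_pkts)
{
	return (uint16_t)(nb_pkts & ~(DESCS_PER_LOOP - 1));
}
```

The stripped tail packets are not lost; they are simply left for the next call of the receive burst.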
- /* init vlan offoad */ + /* init VLAN offload */ err = hinic_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK); if (err) { diff --git a/drivers/net/hinic/hinic_pmd_ethdev.h b/drivers/net/hinic/hinic_pmd_ethdev.h index 5eca8b10b9..8e6251f69f 100644 --- a/drivers/net/hinic/hinic_pmd_ethdev.h +++ b/drivers/net/hinic/hinic_pmd_ethdev.h @@ -170,7 +170,7 @@ struct tag_tcam_key_mem { /* * tunnel packet, mask must be 0xff, spec value is 1; * normal packet, mask must be 0, spec value is 0; - * if tunnal packet, ucode use + * if tunnel packet, ucode use * sip/dip/protocol/src_port/dst_dport from inner packet */ u32 tunnel_flag:8; diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c index d71a42afbd..2cf24ebcf6 100644 --- a/drivers/net/hinic/hinic_pmd_flow.c +++ b/drivers/net/hinic/hinic_pmd_flow.c @@ -734,7 +734,7 @@ static int hinic_check_ntuple_item_ele(const struct rte_flow_item *item, * END * other members in mask and spec should set to 0x00. * item->last should be NULL. - * Please aware there's an asumption for all the parsers. + * Please be aware there's an assumption for all the parsers. * rte_flow_item is using big endian, rte_flow_attr and * rte_flow_action are using CPU order. * Because the pattern is used to describe the packets, @@ -1630,7 +1630,7 @@ static int hinic_parse_fdir_filter(struct rte_eth_dev *dev,
/** * Check if the flow rule is supported by nic. - * It only checkes the format. Don't guarantee the rule can be programmed into + * It only checks the format. Don't guarantee the rule can be programmed into * the HW. Because there can be no enough room for the rule. */ static int hinic_flow_validate(struct rte_eth_dev *dev, diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c index 2688817f37..f09b1a6e1e 100644 --- a/drivers/net/hinic/hinic_pmd_tx.c +++ b/drivers/net/hinic/hinic_pmd_tx.c @@ -1144,7 +1144,7 @@ u16 hinic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) mbuf_pkt = *tx_pkts++; queue_info = 0;
- /* 1. parse sge and tx offlod info from mbuf */ + /* 1. parse sge and tx offload info from mbuf */ if (unlikely(!hinic_get_sge_txoff_info(mbuf_pkt, &sqe_info, &off_info))) { txq->txq_stats.off_errs++; diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c index 50cb3eabb1..bdfc85f934 100644 --- a/drivers/net/hns3/hns3_cmd.c +++ b/drivers/net/hns3/hns3_cmd.c @@ -462,7 +462,7 @@ hns3_mask_capability(struct hns3_hw *hw, for (i = 0; i < MAX_CAPS_BIT; i++) { if (!(caps_masked & BIT_ULL(i))) continue; - hns3_info(hw, "mask capabiliy: id-%u, name-%s.", + hns3_info(hw, "mask capability: id-%u, name-%s.", i, hns3_get_caps_name(i)); } } @@ -699,7 +699,7 @@ hns3_cmd_init(struct hns3_hw *hw) return 0;
/* - * Requiring firmware to enable some features, firber port can still + * Requiring firmware to enable some features, fiber port can still * work without it, but copper port can't work because the firmware * fails to take over the PHY. */ diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c index 716cebbcec..2ebcf5695b 100644 --- a/drivers/net/hns3/hns3_common.c +++ b/drivers/net/hns3/hns3_common.c @@ -663,7 +663,7 @@ hns3_init_ring_with_vector(struct hns3_hw *hw) hw->intr_tqps_num = RTE_MIN(vec, hw->tqps_num); for (i = 0; i < hw->intr_tqps_num; i++) { /* - * Set gap limiter/rate limiter/quanity limiter algorithm + * Set gap limiter/rate limiter/quantity limiter algorithm * configuration for interrupt coalesce of queue's interrupt. */ hns3_set_queue_intr_gl(hw, i, HNS3_RING_GL_RX, diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c index b22f618e0a..af045b22f7 100644 --- a/drivers/net/hns3/hns3_dcb.c +++ b/drivers/net/hns3/hns3_dcb.c @@ -25,7 +25,7 @@ * IR(Mbps) = ------------------------- * CLOCK(1000Mbps) * Tick * (2 ^ IR_s) * - * @return: 0: calculate sucessful, negative: fail + * @return: 0: calculate successful, negative: fail */ static int hns3_shaper_para_calc(struct hns3_hw *hw, uint32_t ir, uint8_t shaper_level, @@ -36,8 +36,8 @@ hns3_shaper_para_calc(struct hns3_hw *hw, uint32_t ir, uint8_t shaper_level, #define DIVISOR_IR_B_126 (126 * DIVISOR_CLK)
const uint16_t tick_array[HNS3_SHAPER_LVL_CNT] = { - 6 * 256, /* Prioriy level */ - 6 * 32, /* Prioriy group level */ + 6 * 256, /* Priority level */ + 6 * 32, /* Priority group level */ 6 * 8, /* Port level */ 6 * 256 /* Qset level */ }; @@ -1521,7 +1521,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
ret = hns3_dcb_schd_setup_hw(hw); if (ret) { - hns3_err(hw, "dcb schdule configure failed! ret = %d", ret); + hns3_err(hw, "dcb schedule configure failed! ret = %d", ret); return ret; }
@@ -1726,7 +1726,7 @@ hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode) * hns3_dcb_pfc_enable - Enable priority flow control * @dev: pointer to ethernet device * - * Configures the pfc settings for one porority. + * Configures the pfc settings for one priority. */ int hns3_dcb_pfc_enable(struct rte_eth_dev *dev, struct rte_eth_pfc_conf *pfc_conf) diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index f83cff4d98..25f9c9fab1 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -568,7 +568,7 @@ hns3_set_vlan_rx_offload_cfg(struct hns3_adapter *hns, hns3_set_bit(req->vport_vlan_cfg, HNS3_SHOW_TAG2_EN_B, vcfg->vlan2_vlan_prionly ? 1 : 0);
- /* firmwall will ignore this configuration for PCI_REVISION_ID_HIP08 */ + /* firmware will ignore this configuration for PCI_REVISION_ID_HIP08 */ hns3_set_bit(req->vport_vlan_cfg, HNS3_DISCARD_TAG1_EN_B, vcfg->strip_tag1_discard_en ? 1 : 0); hns3_set_bit(req->vport_vlan_cfg, HNS3_DISCARD_TAG2_EN_B, @@ -763,7 +763,7 @@ hns3_set_vlan_tx_offload_cfg(struct hns3_adapter *hns, vcfg->insert_tag2_en ? 1 : 0); hns3_set_bit(req->vport_vlan_cfg, HNS3_CFG_NIC_ROCE_SEL_B, 0);
- /* firmwall will ignore this configuration for PCI_REVISION_ID_HIP08 */ + /* firmware will ignore this configuration for PCI_REVISION_ID_HIP08 */ hns3_set_bit(req->vport_vlan_cfg, HNS3_TAG_SHIFT_MODE_EN_B, vcfg->tag_shift_mode_en ? 1 : 0);
@@ -3385,7 +3385,7 @@ hns3_only_alloc_priv_buff(struct hns3_hw *hw, * hns3_rx_buffer_calc: calculate the rx private buffer size for all TCs * @hw: pointer to struct hns3_hw * @buf_alloc: pointer to buffer calculation data - * @return: 0: calculate sucessful, negative: fail + * @return: 0: calculate successful, negative: fail */ static int hns3_rx_buffer_calc(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc) @@ -4518,14 +4518,14 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw) }
/* - * Validity of supported_speed for firber and copper media type can be + * Validity of supported_speed for fiber and copper media type can be * guaranteed by the following policy: * Copper: * Although the initialization of the phy in the firmware may not be * completed, the firmware can guarantees that the supported_speed is * an valid value. * Firber: - * If the version of firmware supports the acitive query way of the + * If the version of firmware supports the active query way of the * HNS3_OPC_GET_SFP_INFO opcode, the supported_speed can be obtained * through it. If unsupported, use the SFP's speed as the value of the * supported_speed. @@ -5285,7 +5285,7 @@ hns3_get_autoneg_fc_mode(struct hns3_hw *hw)
/* * Flow control auto-negotiation is not supported for fiber and - * backpalne media type. + * backplane media type. */ case HNS3_MEDIA_TYPE_FIBER: case HNS3_MEDIA_TYPE_BACKPLANE: @@ -6152,7 +6152,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa) }
/* - * FEC mode order defined in hns3 hardware is inconsistend with + * FEC mode order defined in hns3 hardware is inconsistent with * that defined in the ethdev library. So the sequence needs * to be converted. */ diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h index 4406611fe9..2457754b3d 100644 --- a/drivers/net/hns3/hns3_ethdev.h +++ b/drivers/net/hns3/hns3_ethdev.h @@ -125,7 +125,7 @@ struct hns3_tc_info { uint8_t tc_sch_mode; /* 0: sp; 1: dwrr */ uint8_t pgid; uint32_t bw_limit; - uint8_t up_to_tc_map; /* user priority maping on the TC */ + uint8_t up_to_tc_map; /* user priority mapping on the TC */ };
struct hns3_dcb_info { @@ -572,12 +572,12 @@ struct hns3_hw { /* * vlan mode. * value range: - * HNS3_SW_SHIFT_AND_DISCARD_MODE/HNS3_HW_SHFIT_AND_DISCARD_MODE + * HNS3_SW_SHIFT_AND_DISCARD_MODE/HNS3_HW_SHIFT_AND_DISCARD_MODE * * - HNS3_SW_SHIFT_AND_DISCARD_MODE * For some versions of hardware network engine, because of the * hardware limitation, PMD needs to detect the PVID status - * to work with haredware to implement PVID-related functions. + * to work with hardware to implement PVID-related functions. * For example, driver need discard the stripped PVID tag to ensure * the PVID will not report to mbuf and shift the inserted VLAN tag * to avoid port based VLAN covering it. @@ -725,7 +725,7 @@ enum hns3_mp_req_type { HNS3_MP_REQ_MAX };
-/* Pameters for IPC. */ +/* Parameters for IPC. */ struct hns3_mp_param { enum hns3_mp_req_type type; int port_id; diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 1022b02697..de44b07691 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -318,7 +318,7 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc, * 1. The promiscuous/allmulticast mode can be configured successfully * only based on the trusted VF device. If based on the non trusted * VF device, configuring promiscuous/allmulticast mode will fail. - * The hns3 VF device can be confiruged as trusted device by hns3 PF + * The hns3 VF device can be configured as trusted device by hns3 PF * kernel ethdev driver on the host by the following command: * "ip link set <eth num> vf <vf id> turst on" * 2. After the promiscuous mode is configured successfully, hns3 VF PMD @@ -330,7 +330,7 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc, * filter is still effective even in promiscuous mode. If upper * applications don't call rte_eth_dev_vlan_filter API function to * set vlan based on VF device, hns3 VF PMD will can't receive - * the packets with vlan tag in promiscuoue mode. + * the packets with vlan tag in promiscuous mode. */ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); req->msg[0] = HNS3_MBX_SET_PROMISC_MODE; diff --git a/drivers/net/hns3/hns3_fdir.h b/drivers/net/hns3/hns3_fdir.h index de2422e12f..ce70a534dc 100644 --- a/drivers/net/hns3/hns3_fdir.h +++ b/drivers/net/hns3/hns3_fdir.h @@ -144,7 +144,7 @@ struct hns3_fdir_rule { uint32_t flags; uint32_t fd_id; /* APP marked unique value for this rule. */ uint8_t action; - /* VF id, avaiblable when flags with HNS3_RULE_FLAG_VF_ID. */ + /* VF id, available when flags with HNS3_RULE_FLAG_VF_ID. */ uint8_t vf_id; /* * equal 0 when action is drop. 
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 1aee965e4a..a2c1589c39 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -369,7 +369,7 @@ hns3_handle_action_indirect(struct rte_eth_dev *dev,
 *
 * @param actions[in]
 * @param rule[out]
- *   NIC specfilc actions derived from the actions.
+ *   NIC specific actions derived from the actions.
 * @param error[out]
 */
 static int
@@ -400,7 +400,7 @@ hns3_handle_actions(struct rte_eth_dev *dev,
		 * Queue region is implemented by FDIR + RSS in hns3 hardware,
		 * the FDIR's action is one queue region (start_queue_id and
		 * queue_num), then RSS spread packets to the queue region by
-		 * RSS algorigthm.
+		 * RSS algorithm.
		 */
	case RTE_FLOW_ACTION_TYPE_RSS:
		ret = hns3_handle_action_queue_region(dev, actions,
@@ -978,7 +978,7 @@ hns3_parse_nvgre(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
	if (nvgre_mask->protocol || nvgre_mask->c_k_s_rsvd0_ver)
		return rte_flow_error_set(error, EINVAL,
					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
-					  "Ver/protocal is not supported in NVGRE");
+					  "Ver/protocol is not supported in NVGRE");

	/* TNI must be totally masked or not. */
	if (memcmp(nvgre_mask->tni, full_mask, VNI_OR_TNI_LEN) &&
@@ -1023,7 +1023,7 @@ hns3_parse_geneve(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
	if (geneve_mask->ver_opt_len_o_c_rsvd0 || geneve_mask->protocol)
		return rte_flow_error_set(error, EINVAL,
					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
-					  "Ver/protocal is not supported in GENEVE");
+					  "Ver/protocol is not supported in GENEVE");
	/* VNI must be totally masked or not. */
	if (memcmp(geneve_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
	    memcmp(geneve_mask->vni, zero_mask, VNI_OR_TNI_LEN))
@@ -1354,7 +1354,7 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
 }
 /*
- * This function is used to parse rss action validatation.
+ * This function is used to parse rss action validation.
 */
 static int
 hns3_parse_rss_filter(struct rte_eth_dev *dev,
@@ -1722,7 +1722,7 @@ hns3_flow_args_check(const struct rte_flow_attr *attr,

 /*
 * Check if the flow rule is supported by hns3.
- * It only checkes the format. Don't guarantee the rule can be programmed into
+ * It only checks the format. Don't guarantee the rule can be programmed into
 * the HW. Because there can be no enough room for the rule.
 */
 static int
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index 9a05f0d1ee..8e0a58aa02 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -78,14 +78,14 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
	mbx_time_limit = (uint32_t)hns->mbx_time_limit_ms * US_PER_MS;
	while (wait_time < mbx_time_limit) {
		if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) {
-			hns3_err(hw, "Don't wait for mbx respone because of "
+			hns3_err(hw, "Don't wait for mbx response because of "
				 "disable_cmd");
			return -EBUSY;
		}
		if (is_reset_pending(hns)) {
			hw->mbx_resp.req_msg_data = 0;
-			hns3_err(hw, "Don't wait for mbx respone because of "
+			hns3_err(hw, "Don't wait for mbx response because of "
				 "reset pending");
			return -EIO;
		}
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
index c71f43238c..c378783c6c 100644
--- a/drivers/net/hns3/hns3_mbx.h
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -26,7 +26,7 @@ enum HNS3_MBX_OPCODE {
	HNS3_MBX_GET_RETA,          /* (VF -> PF) get RETA */
	HNS3_MBX_GET_RSS_KEY,       /* (VF -> PF) get RSS key */
	HNS3_MBX_GET_MAC_ADDR,      /* (VF -> PF) get MAC addr */
-	HNS3_MBX_PF_VF_RESP,        /* (PF -> VF) generate respone to VF */
+	HNS3_MBX_PF_VF_RESP,        /* (PF -> VF) generate response to VF */
	HNS3_MBX_GET_BDNUM,         /* (VF -> PF) get BD num */
	HNS3_MBX_GET_BUFSIZE,       /* (VF -> PF) get buffer size */
	HNS3_MBX_GET_STREAMID,      /* (VF -> PF) get stream id */
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 9471e7039d..8e8b056f4e 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -40,7 +40,7 @@ struct hns3_rss_conf {
	/* RSS parameters :algorithm, flow_types, key, queue */
	struct rte_flow_action_rss conf;
-	uint8_t hash_algo; /* hash function type definited by hardware */
+	uint8_t hash_algo; /* hash function type defined by hardware */
	uint8_t key[HNS3_RSS_KEY_SIZE];  /* Hash key */
	uint16_t rss_indirection_tbl[HNS3_RSS_IND_TBL_SIZE_MAX];
	uint16_t queue[HNS3_RSS_QUEUES_BUFFER_NUM]; /* Queues indices to use */
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3c02fd54e1..9a597e032e 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1897,7 +1897,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
	 * For hns3 VF device, whether it needs to process PVID depends
	 * on the configuration of PF kernel mode netdevice driver. And the
	 * related PF configuration is delivered through the mailbox and finally
-	 * reflectd in port_base_vlan_cfg.
+	 * reflected in port_base_vlan_cfg.
	 */
	if (hns->is_vf || hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
		rxq->pvid_sw_discard_en = hw->port_base_vlan_cfg.state ==
@@ -3038,7 +3038,7 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
	 * For hns3 VF device, whether it needs to process PVID depends
	 * on the configuration of PF kernel mode netdev driver. And the
	 * related PF configuration is delivered through the mailbox and finally
-	 * reflectd in port_base_vlan_cfg.
+	 * reflected in port_base_vlan_cfg.
	 */
	if (hns->is_vf || hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
		txq->pvid_sw_shift_en = hw->port_base_vlan_cfg.state ==
@@ -3192,7 +3192,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
	 * in Tx direction based on hns3 network engine. So when the number of
	 * VLANs in the packets represented by rxm plus the number of VLAN
	 * offload by hardware such as PVID etc, exceeds two, the packets will
-	 * be discarded or the original VLAN of the packets will be overwitted
+	 * be discarded or the original VLAN of the packets will be overwritten
	 * by hardware. When the PF PVID is enabled by calling the API function
	 * named rte_eth_dev_set_vlan_pvid or the VF PVID is enabled by the hns3
	 * PF kernel ether driver, the outer VLAN tag will always be the PVID.
@@ -3377,7 +3377,7 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
		/*
		 * The inner l2 length of mbuf is the sum of outer l4 length,
		 * tunneling header length and inner l2 length for a tunnel
-		 * packect. But in hns3 tx descriptor, the tunneling header
+		 * packet. But in hns3 tx descriptor, the tunneling header
		 * length is contained in the field of outer L4 length.
		 * Therefore, driver need to calculate the outer L4 length and
		 * inner L2 length.
@@ -3393,7 +3393,7 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
		tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
					HNS3_TXD_TUNTYPE_S, HNS3_TUN_NVGRE);
		/*
-		 * For NVGRE tunnel packect, the outer L4 is empty. So only
+		 * For NVGRE tunnel packet, the outer L4 is empty. So only
		 * fill the NVGRE header length to the outer L4 field.
		 */
		tmp_outer |= hns3_gen_field_val(HNS3_TXD_L4LEN_M,
@@ -3436,7 +3436,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
	 * mbuf, but for hns3 descriptor, it is contained in the outer L4. So,
	 * there is a need that switching between them. To avoid multiple
	 * calculations, the length of the L2 header include the outer and
-	 * inner, will be filled during the parsing of tunnel packects.
+	 * inner, will be filled during the parsing of tunnel packets.
	 */
	if (!(ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
		/*
@@ -3616,7 +3616,7 @@ hns3_outer_ipv4_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
		struct rte_udp_hdr *udp_hdr;
		/*
-		 * If OUTER_UDP_CKSUM is support, HW can caclulate the pseudo
+		 * If OUTER_UDP_CKSUM is support, HW can calculate the pseudo
		 * header for TSO packets
		 */
		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
@@ -3641,7 +3641,7 @@ hns3_outer_ipv6_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
		struct rte_udp_hdr *udp_hdr;
		/*
-		 * If OUTER_UDP_CKSUM is support, HW can caclulate the pseudo
+		 * If OUTER_UDP_CKSUM is support, HW can calculate the pseudo
		 * header for TSO packets
		 */
		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index e633b336b1..ea1a805491 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -618,7 +618,7 @@ hns3_handle_bdinfo(struct hns3_rx_queue *rxq, struct rte_mbuf *rxm,
	/*
	 * If packet len bigger than mtu when recv with no-scattered algorithm,
-	 * the first n bd will without FE bit, we need process this sisution.
+	 * the first n bd will without FE bit, we need process this situation.
	 * Note: we don't need add statistic counter because latest BD which
	 * with FE bit will mark HNS3_RXD_L2E_B bit.
	 */
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index 552ae9d30c..bad65fcbed 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -1286,7 +1286,7 @@ hns3_dev_xstats_get_names(struct rte_eth_dev *dev,
 *   A pointer to an ids array passed by application. This tells which
 *   statistics values function should retrieve. This parameter
 *   can be set to NULL if size is 0. In this case function will retrieve
- *   all avalible statistics.
+ *   all available statistics.
 * @param values
 *   A pointer to a table to be filled with device statistics values.
 * @param size
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index c0bfff43ee..1d417dbf8a 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2483,7 +2483,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
		if (ret != I40E_SUCCESS)
			PMD_DRV_LOG(WARNING, "Fail to set phy mask");

-		/* Call get_link_info aq commond to enable/disable LSE */
+		/* Call get_link_info aq command to enable/disable LSE */
		i40e_dev_link_update(dev, 0);
	}
@@ -3555,7 +3555,7 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
		count++;
	}

-	/* Get individiual stats from i40e_hw_port struct */
+	/* Get individual stats from i40e_hw_port struct */
	for (i = 0; i < I40E_NB_HW_PORT_XSTATS; i++) {
		strlcpy(xstats_names[count].name,
			rte_i40e_hw_port_strings[i].name,
@@ -3613,7 +3613,7 @@ i40e_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
		count++;
	}

-	/* Get individiual stats from i40e_hw_port struct */
+	/* Get individual stats from i40e_hw_port struct */
	for (i = 0; i < I40E_NB_HW_PORT_XSTATS; i++) {
		xstats[count].value = *(uint64_t *)(((char *)hw_stats) +
			rte_i40e_hw_port_strings[i].offset);
@@ -5544,7 +5544,7 @@ i40e_vsi_get_bw_config(struct i40e_vsi *vsi)
					&ets_sla_config, NULL);
	if (ret != I40E_SUCCESS) {
		PMD_DRV_LOG(ERR,
-			"VSI failed to get TC bandwdith configuration %u",
+			"VSI failed to get TC bandwidth configuration %u",
			hw->aq.asq_last_status);
		return ret;
	}
@@ -6822,7 +6822,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
 * @param handle
 *   Pointer to interrupt handle.
 * @param param
- *   The address of parameter (struct rte_eth_dev *) regsitered before.
+ *   The address of parameter (struct rte_eth_dev *) registered before.
 *
 * @return
 *   void
@@ -9719,7 +9719,7 @@ i40e_ethertype_filter_convert(const struct rte_eth_ethertype_filter *input,
	return 0;
 }
-/* Check if there exists the ehtertype filter */
+/* Check if there exists the ethertype filter */
 struct i40e_ethertype_filter *
 i40e_sw_ethertype_filter_lookup(struct i40e_ethertype_rule *ethertype_rule,
			const struct i40e_ethertype_filter_input *input)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 2d182f8000..a1ebdc093c 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -897,7 +897,7 @@ struct i40e_tunnel_filter {
	TAILQ_ENTRY(i40e_tunnel_filter) rules;
	struct i40e_tunnel_filter_input input;
	uint8_t is_to_vf; /* 0 - to PF, 1 - to VF */
-	uint16_t vf_id;   /* VF id, avaiblable when is_to_vf is 1. */
+	uint16_t vf_id;   /* VF id, available when is_to_vf is 1. */
	uint16_t queue;   /* Queue assigned to when match */
 };

@@ -966,7 +966,7 @@ struct i40e_tunnel_filter_conf {
	uint32_t tenant_id;     /**< Tenant ID to match. VNI, GRE key... */
	uint16_t queue_id;      /**< Queue assigned to if match. */
	uint8_t is_to_vf;       /**< 0 - to PF, 1 - to VF */
-	uint16_t vf_id;         /**< VF id, avaiblable when is_to_vf is 1. */
+	uint16_t vf_id;         /**< VF id, available when is_to_vf is 1. */
 };

 TAILQ_HEAD(i40e_flow_list, rte_flow);
@@ -1100,7 +1100,7 @@ struct i40e_vf_msg_cfg {
	/*
	 * If message statistics from a VF exceed the maximal limitation,
	 * the PF will ignore any new message from that VF for
-	 * 'ignor_second' time.
+	 * 'ignore_second' time.
	 */
	uint32_t ignore_second;
 };
@@ -1257,7 +1257,7 @@ struct i40e_adapter {
 };

 /**
- * Strucute to store private data for each VF representor instance
+ * Structure to store private data for each VF representor instance
 */
 struct i40e_vf_representor {
	uint16_t switch_domain_id;
@@ -1265,7 +1265,7 @@ struct i40e_vf_representor {
	uint16_t vf_id;
	/**< Virtual Function ID */
	struct i40e_adapter *adapter;
-	/**< Private data store of assocaiated physical function */
+	/**< Private data store of associated physical function */
	struct i40e_eth_stats stats_offset;
	/**< Zero-point of VF statistics*/
 };
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index df2a5aaecc..8caedea14e 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -142,7 +142,7 @@ i40e_fdir_rx_queue_init(struct i40e_rx_queue *rxq)
			I40E_QRX_TAIL(rxq->vsi->base_queue);
	rte_wmb();
-	/* Init the RX tail regieter. */
+	/* Init the RX tail register. */
	I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);

	return err;
@@ -430,7 +430,7 @@ i40e_check_fdir_flex_payload(const struct rte_eth_flex_payload_cfg *flex_cfg)

	for (i = 0; i < I40E_FDIR_MAX_FLEX_LEN; i++) {
		if (flex_cfg->src_offset[i] >= I40E_MAX_FLX_SOURCE_OFF) {
-			PMD_DRV_LOG(ERR, "exceeds maxmial payload limit.");
+			PMD_DRV_LOG(ERR, "exceeds maximal payload limit.");
			return -EINVAL;
		}
	}
@@ -438,7 +438,7 @@ i40e_check_fdir_flex_payload(const struct rte_eth_flex_payload_cfg *flex_cfg)
	memset(flex_pit, 0, sizeof(flex_pit));
	num = i40e_srcoff_to_flx_pit(flex_cfg->src_offset, flex_pit);
	if (num > I40E_MAX_FLXPLD_FIED) {
-		PMD_DRV_LOG(ERR, "exceeds maxmial number of flex fields.");
+		PMD_DRV_LOG(ERR, "exceeds maximal number of flex fields.");
		return -EINVAL;
	}
	for (i = 0; i < num; i++) {
@@ -948,7 +948,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
	uint8_t pctype = fdir_input->pctype;
	struct i40e_customized_pctype *cus_pctype;

-	/* raw pcket template - just copy contents of the raw packet */
+	/* raw packet template - just copy contents of the raw packet */
	if (fdir_input->flow_ext.pkt_template) {
		memcpy(raw_pkt, fdir_input->flow.raw_flow.packet,
		       fdir_input->flow.raw_flow.length);
@@ -1831,7 +1831,7 @@ i40e_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
					&check_filter.fdir.input);
		if (!node) {
			PMD_DRV_LOG(ERR,
-				    "There's no corresponding flow firector filter!");
+				    "There's no corresponding flow director filter!");
			return -EINVAL;
		}

diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index c9676caab5..e0cf996200 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -3043,7 +3043,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
				rte_flow_error_set(error, EINVAL,
						   RTE_FLOW_ERROR_TYPE_ITEM,
						   item,
-						   "Exceeds maxmial payload limit.");
+						   "Exceeds maximal payload limit.");
				return -rte_errno;
			}
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index ccb3924a5f..2435a8a070 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -343,7 +343,7 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
	vf->request_caps = *(uint32_t *)msg;

	/* enable all RSS by default,
-	 * doesn't support hena setting by virtchnnl yet.
+	 * doesn't support hena setting by virtchnl yet.
	 */
	if (vf->request_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(0, vf->vf_idx),
@@ -725,7 +725,7 @@ i40e_pf_host_process_cmd_config_irq_map(struct i40e_pf_vf *vf,
	if ((map->rxq_map < qbit_max) && (map->txq_map < qbit_max)) {
		i40e_pf_config_irq_link_list(vf, map);
	} else {
-		/* configured queue size excceed limit */
+		/* configured queue size exceed limit */
		ret = I40E_ERR_PARAM;
		goto send_msg;
	}
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index e4cb33dc3c..9a00a9b71e 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -609,7 +609,7 @@ i40e_rx_alloc_bufs(struct i40e_rx_queue *rxq)
		rxdp[i].read.pkt_addr = dma_addr;
	}
-	/* Update rx tail regsiter */
+	/* Update rx tail register */
	I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);

	rxq->rx_free_trigger =
@@ -995,7 +995,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
	 * register. Update the RDT with the value of the last processed RX
	 * descriptor minus 1, to guarantee that the RDT register is never
-	 * equal to the RDH register, which creates a "full" ring situtation
+	 * equal to the RDH register, which creates a "full" ring situation
	 * from the hardware point of view.
	 */
	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
@@ -1467,7 +1467,7 @@ tx_xmit_pkts(struct i40e_tx_queue *txq,
		i40e_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));

-	/* Determin if RS bit needs to be set */
+	/* Determine if RS bit needs to be set */
	if (txq->tx_tail > txq->tx_next_rs) {
		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
			rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
@@ -1697,7 +1697,7 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
	}

	if (rxq->rx_deferred_start)
-		PMD_DRV_LOG(WARNING, "RX queue %u is deferrd start",
+		PMD_DRV_LOG(WARNING, "RX queue %u is deferred start",
			    rx_queue_id);
	err = i40e_alloc_rx_queue_mbufs(rxq);
@@ -1706,7 +1706,7 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
		return err;
	}

-	/* Init the RX tail regieter. */
+	/* Init the RX tail register. */
	I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);

	err = i40e_switch_rx_queue(hw, rxq->reg_idx, TRUE);
@@ -1771,7 +1771,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
	}

	if (txq->tx_deferred_start)
-		PMD_DRV_LOG(WARNING, "TX queue %u is deferrd start",
+		PMD_DRV_LOG(WARNING, "TX queue %u is deferred start",
			    tx_queue_id);

	/*
@@ -1930,7 +1930,7 @@ i40e_dev_rx_queue_setup_runtime(struct rte_eth_dev *dev,
		PMD_DRV_LOG(ERR, "Can't use default burst.");
		return -EINVAL;
	}
-	/* check scatterred conflict */
+	/* check scattered conflict */
	if (!dev->data->scattered_rx && use_scattered_rx) {
		PMD_DRV_LOG(ERR, "Scattered rx is required.");
		return -EINVAL;
@@ -2014,7 +2014,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
	rxq->offloads = offloads;
-	/* Allocate the maximun number of RX ring hardware descriptor. */
+	/* Allocate the maximum number of RX ring hardware descriptor. */
	len = I40E_MAX_RING_DESC;

	/**
@@ -2322,7 +2322,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
	 */
	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
		tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
-	/* force tx_rs_thresh to adapt an aggresive tx_free_thresh */
+	/* force tx_rs_thresh to adapt an aggressive tx_free_thresh */
	tx_rs_thresh = (DEFAULT_TX_RS_THRESH + tx_free_thresh > nb_desc) ?
		nb_desc - tx_free_thresh : DEFAULT_TX_RS_THRESH;
	if (tx_conf->tx_rs_thresh > 0)
@@ -2991,7 +2991,7 @@ i40e_rx_queue_init(struct i40e_rx_queue *rxq)
	if (rxq->max_pkt_len > buf_size)
		dev_data->scattered_rx = 1;

-	/* Init the RX tail regieter. */
+	/* Init the RX tail register. */
	I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);

	return 0;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index d0bf86dfba..f78ba994f7 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -430,7 +430,7 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
		desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
		desc_to_olflags_v(descs, &rx_pkts[pos]);
-		/* C.4 calc avaialbe number of desc */
+		/* C.4 calc available number of desc */
		var = __builtin_popcountll((vec_ld(0,
			(vector unsigned long *)&staterr)[0]));
		nb_pkts_recd += var;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index b951ea2dc3..507468531f 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -151,7 +151,7 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, uint64x2_t descs[4],
			vreinterpretq_u8_u32(l3_l4e)));
	/* then we shift left 1 bit */
	l3_l4e = vshlq_n_u32(l3_l4e, 1);
-	/* we need to mask out the reduntant bits */
+	/* we need to mask out the redundant bits */
	l3_l4e = vandq_u32(l3_l4e, cksum_mask);

	vlan0 = vorrq_u32(vlan0, rss);
@@ -416,7 +416,7 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
				 I40E_UINT16_BIT - 1));
		stat = ~vgetq_lane_u64(vreinterpretq_u64_u16(staterr), 0);

-		/* C.4 calc avaialbe number of desc */
+		/* C.4 calc available number of desc */
		if (unlikely(stat == 0)) {
			nb_pkts_recd += RTE_I40E_DESCS_PER_LOOP;
		} else {
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 497b2404c6..3782e8052f 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -282,7 +282,7 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, volatile union i40e_rx_desc *rxdp,
	l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
	/* then we shift left 1 bit */
	l3_l4e = _mm_slli_epi32(l3_l4e, 1);
-	/* we need to mask out the reduntant bits */
+	/* we need to mask out the redundant bits */
	l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);

	vlan0 = _mm_or_si128(vlan0, rss);
@@ -297,7 +297,7 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, volatile union i40e_rx_desc *rxdp,
		__m128i v_fdir_ol_flags = descs_to_fdir_16b(desc_fltstat,
							    descs, rx_pkts);
#endif
-		/* OR in ol_flag bits after descriptor speicific extraction */
+		/* OR in ol_flag bits after descriptor specific extraction */
		vlan0 = _mm_or_si128(vlan0, v_fdir_ol_flags);
	}

@@ -577,7 +577,7 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
				 pkt_mb1);
		desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
-		/* C.4 calc avaialbe number of desc */
+		/* C.4 calc available number of desc */
		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
		nb_pkts_recd += var;
		if (likely(var != RTE_I40E_DESCS_PER_LOOP))
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index a492959b75..35829a1eea 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -1427,7 +1427,7 @@ rte_pmd_i40e_set_tc_strict_prio(uint16_t port, uint8_t tc_map)
	/* Get all TCs' bandwidth. */
	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
		if (veb->enabled_tc & BIT_ULL(i)) {
-			/* For rubust, if bandwidth is 0, use 1 instead. */
+			/* For robust, if bandwidth is 0, use 1 instead. */
			if (veb->bw_info.bw_ets_share_credits[i])
				ets_data.tc_bw_share_credits[i] =
					veb->bw_info.bw_ets_share_credits[i];
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 377d7bc7a6..79397f15e5 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -516,7 +516,7 @@ iavf_init_rss(struct iavf_adapter *adapter)
				j = 0;
			vf->rss_lut[i] = j;
		}
-		/* send virtchnnl ops to configure rss*/
+		/* send virtchnl ops to configure RSS */
		ret = iavf_configure_rss_lut(adapter);
		if (ret)
			return ret;
@@ -831,7 +831,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
				    "vector %u are mapping to all Rx queues",
				    vf->msix_base);
	} else {
-		/* If Rx interrupt is reuquired, and we can use
+		/* If Rx interrupt is required, and we can use
		 * multi interrupts, then the vec is from 1
		 */
		vf->nb_msix =
@@ -1420,7 +1420,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
	}
	rte_memcpy(vf->rss_lut, lut, reta_size);
-	/* send virtchnnl ops to configure rss*/
+	/* send virtchnl ops to configure RSS */
	ret = iavf_configure_rss_lut(adapter);
	if (ret) /* revert back */
		rte_memcpy(vf->rss_lut, lut, reta_size);
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index 884169e061..adf101ab8a 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -69,7 +69,7 @@ struct iavf_security_session {
 *  16B - 3
 *
 * but we also need the IV Length for TSO to correctly calculate the total
- * header length so placing it in the upper 6-bits here for easier reterival.
+ * header length so placing it in the upper 6-bits here for easier retrieval.
 */
 static inline uint8_t
 calc_ipsec_desc_iv_len_field(uint16_t iv_sz)
@@ -448,7 +448,7 @@ sa_add_set_auth_params(struct virtchnl_ipsec_crypto_cfg_item *cfg,
 /**
 * Send SA add virtual channel request to Inline IPsec driver.
 *
- * Inline IPsec driver expects SPI and destination IP adderss to be in host
+ * Inline IPsec driver expects SPI and destination IP address to be in host
 * order, but DPDK APIs are network order, therefore we need to do a htonl
 * conversion of these parameters.
 */
@@ -726,7 +726,7 @@ iavf_ipsec_crypto_action_valid(struct rte_eth_dev *ethdev,
 /**
 * Send virtual channel security policy add request to IES driver.
 *
- * IES driver expects SPI and destination IP adderss to be in host
+ * IES driver expects SPI and destination IP address to be in host
 * order, but DPDK APIs are network order, therefore we need to do a htonl
 * conversion of these parameters.
 */
@@ -994,7 +994,7 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
	request->req_id = (uint16_t)0xDEADBEEF;

	/**
-	 * SA delete supports deletetion of 1-8 specified SA's or if the flag
+	 * SA delete supports deletion of 1-8 specified SA's or if the flag
	 * field is zero, all SA's associated with VF will be deleted.
	 */
	if (sess) {
@@ -1147,7 +1147,7 @@ iavf_ipsec_crypto_pkt_metadata_set(void *device,
	md = RTE_MBUF_DYNFIELD(m, iavf_sctx->pkt_md_offset,
		struct iavf_ipsec_crypto_pkt_metadata *);

-	/* Set immutatable metadata values from session template */
+	/* Set immutable metadata values from session template */
	memcpy(md, &iavf_sess->pkt_metadata_template,
		sizeof(struct iavf_ipsec_crypto_pkt_metadata));
@@ -1355,7 +1355,7 @@ iavf_ipsec_crypto_set_security_capabililites(struct iavf_security_ctx
	capabilities[number_of_capabilities].op = RTE_CRYPTO_OP_TYPE_UNDEFINED;

	/**
-	 * Iterate over each virtchl crypto capability by crypto type and
+	 * Iterate over each virtchnl crypto capability by crypto type and
	 * algorithm.
	 */
	for (i = 0; i < VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM; i++) {
@@ -1454,7 +1454,7 @@ iavf_ipsec_crypto_capabilities_get(void *device)
	/**
	 * Update the security capabilities struct with the runtime discovered
	 * crypto capabilities, except for last element of the array which is
-	 * the null terminatation
+	 * the null termination
	 */
	for (i = 0; i < ((sizeof(iavf_security_capabilities) /
			sizeof(iavf_security_capabilities[0])) - 1); i++) {
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.h b/drivers/net/iavf/iavf_ipsec_crypto.h
index 4e4c8798ec..687541077a 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.h
+++ b/drivers/net/iavf/iavf_ipsec_crypto.h
@@ -73,7 +73,7 @@ enum iavf_ipsec_iv_len {
 };

-/* IPsec Crypto Packet Metaday offload flags */
+/* IPsec Crypto Packet Metadata offload flags */
 #define IAVF_IPSEC_CRYPTO_OL_FLAGS_IS_TUN		(0x1 << 0)
 #define IAVF_IPSEC_CRYPTO_OL_FLAGS_ESN			(0x1 << 1)
 #define IAVF_IPSEC_CRYPTO_OL_FLAGS_IPV6_EXT_HDRS	(0x1 << 2)
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 154472c50f..59623ac820 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -648,8 +648,8 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
		return -ENOMEM;
	}

-	/* Allocate the maximun number of RX ring hardware descriptor with
-	 * a liitle more to support bulk allocate.
+	/* Allocate the maximum number of RX ring hardware descriptor with
+	 * a little more to support bulk allocate.
	 */
	len = IAVF_MAX_RING_DESC + IAVF_RX_MAX_BURST;
	ring_size = RTE_ALIGN(len * sizeof(union iavf_rx_desc),
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 1bac59bf0e..d582a36326 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -159,7 +159,7 @@ desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
	l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
	/* then we shift left 1 bit */
	l3_l4e = _mm_slli_epi32(l3_l4e, 1);
-	/* we need to mask out the reduntant bits */
+	/* we need to mask out the redundant bits */
	l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);
	vlan0 = _mm_or_si128(vlan0, rss);
@@ -613,7 +613,7 @@ _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
				 pkt_mb1);
		desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
-		/* C.4 calc avaialbe number of desc */
+		/* C.4 calc available number of desc */
		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
		nb_pkts_recd += var;
		if (likely(var != IAVF_VPMD_DESCS_PER_LOOP))
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 145b059837..7602691649 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -461,7 +461,7 @@ iavf_check_api_version(struct iavf_adapter *adapter)
	    (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR_START &&
	     vf->virtchnl_version.minor < VIRTCHNL_VERSION_MINOR_START)) {
		PMD_INIT_LOG(ERR, "VIRTCHNL API version should not be lower"
-			" than (%u.%u) to support Adapative VF",
+			" than (%u.%u) to support Adaptive VF",
			VIRTCHNL_VERSION_MAJOR_START,
			VIRTCHNL_VERSION_MAJOR_START);
		return -1;
@@ -1487,7 +1487,7 @@ iavf_fdir_check(struct iavf_adapter *adapter,

	err = iavf_execute_vf_cmd(adapter, &args, 0);
	if (err) {
-		PMD_DRV_LOG(ERR, "fail to check flow direcotor rule");
+		PMD_DRV_LOG(ERR, "fail to check flow director rule");
		return err;
	}
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index cca1d7bf46..7f0c074b01 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -864,7 +864,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
			j = 0;
		hw->rss_lut[i] = j;
	}
-	/* send virtchnnl ops to configure rss*/
+	/* send virtchnl ops to configure RSS */
	ret = ice_dcf_configure_rss_lut(hw);
	if (ret)
		return ret;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 28f7f7fb72..164d834a18 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -203,7 +203,7 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
				    "vector %u are mapping to all Rx queues",
				    hw->msix_base);
	} else {
-		/* If Rx interrupt is reuquired, and we can use
+		/* If Rx interrupt is required, and we can use
		 * multi interrupts, then the vec is from 1
		 */
		hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13a7a9702a..c9fd3de2bd 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1264,7 +1264,7 @@ ice_handle_aq_msg(struct rte_eth_dev *dev)
 * @param handle
 *   Pointer to interrupt handle.
 * @param param
- *   The address of parameter (struct rte_eth_dev *) regsitered before.
+ *   The address of parameter (struct rte_eth_dev *) registered before.
 *
 * @return
 *   void
@@ -1627,7 +1627,7 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
	}

	/* At the beginning, only TC0. */
-	/* What we need here is the maximam number of the TX queues.
+	/* What we need here is the maximum number of the TX queues.
	 * Currently vsi->nb_qps means it.
	 * Correct it if any change.
	 */
@@ -3576,7 +3576,7 @@ ice_dev_start(struct rte_eth_dev *dev)
		goto rx_err;
	}

-	/* enable Rx interrput and mapping Rx queue to interrupt vector */
+	/* enable Rx interrupt and mapping Rx queue to interrupt vector */
	if (ice_rxq_intr_setup(dev))
		return -EIO;
@@ -3603,7 +3603,7 @@ ice_dev_start(struct rte_eth_dev *dev)
ice_dev_set_link_up(dev);
- /* Call get_link_info aq commond to enable/disable LSE */ + /* Call get_link_info aq command to enable/disable LSE */ ice_link_update(dev, 0);
pf->adapter_stopped = false; @@ -5395,7 +5395,7 @@ ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, count++; }
- /* Get individiual stats from ice_hw_port struct */ + /* Get individual stats from ice_hw_port struct */ for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) { xstats[count].value = *(uint64_t *)((char *)hw_stats + @@ -5426,7 +5426,7 @@ static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev, count++; }
- /* Get individiual stats from ice_hw_port struct */ + /* Get individual stats from ice_hw_port struct */ for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) { strlcpy(xstats_names[count].name, ice_hw_port_strings[i].name, sizeof(xstats_names[count].name)); diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index f6d8564ab8..c80d86915e 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -1118,7 +1118,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, rxq->proto_xtr = pf->proto_xtr != NULL ? pf->proto_xtr[queue_idx] : PROTO_XTR_NONE;
- /* Allocate the maximun number of RX ring hardware descriptor. */ + /* Allocate the maximum number of RX ring hardware descriptor. */ len = ICE_MAX_RING_DESC;
/** @@ -1248,7 +1248,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ? tx_conf->tx_free_thresh : ICE_DEFAULT_TX_FREE_THRESH); - /* force tx_rs_thresh to adapt an aggresive tx_free_thresh */ + /* force tx_rs_thresh to adapt an aggressive tx_free_thresh */ tx_rs_thresh = (ICE_DEFAULT_TX_RSBIT_THRESH + tx_free_thresh > nb_desc) ? nb_desc - tx_free_thresh : ICE_DEFAULT_TX_RSBIT_THRESH; @@ -1714,7 +1714,7 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) rxdp[i].read.pkt_addr = dma_addr; }
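The `tx_rs_thresh` clamp in the `ice_tx_queue_setup` hunk above shrinks the default RS threshold whenever it plus `tx_free_thresh` would overrun the ring. A self-contained sketch of that arithmetic (the default value and helper name are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define DEFAULT_TX_RS_THRESH 32	/* stand-in for ICE_DEFAULT_TX_RSBIT_THRESH */

/* If the default RS threshold plus tx_free_thresh exceeds the ring
 * size, fall back to whatever room is left after tx_free_thresh;
 * otherwise keep the default. */
static uint16_t
clamp_tx_rs_thresh(uint16_t tx_free_thresh, uint16_t nb_desc)
{
	return (uint16_t)((DEFAULT_TX_RS_THRESH + tx_free_thresh > nb_desc)
			      ? nb_desc - tx_free_thresh
			      : DEFAULT_TX_RS_THRESH);
}
```

For example, on a 512-descriptor ring an aggressive `tx_free_thresh` of 500 forces the RS threshold down to 12.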
- /* Update rx tail regsiter */ + /* Update Rx tail register */ ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
rxq->rx_free_trigger = @@ -1976,7 +1976,7 @@ ice_recv_scattered_pkts(void *rx_queue, * threshold of the queue, advance the Receive Descriptor Tail (RDT) * register. Update the RDT with the value of the last processed RX * descriptor minus 1, to guarantee that the RDT register is never - * equal to the RDH register, which creates a "full" ring situtation + * equal to the RDH register, which creates a "full" ring situation * from the hardware point of view. */ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold); @@ -3117,7 +3117,7 @@ tx_xmit_pkts(struct ice_tx_queue *txq, ice_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n)); txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
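The RDT/RDH comment fixed above encodes a simple invariant: the tail written back is the last processed descriptor minus one (with wraparound), so RDT can never equal RDH and the hardware never sees a "full" ring. A minimal sketch of that write-back rule, outside the patch itself (the helper name is made up, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Compute the value written to the Rx tail register: one behind the
 * last processed descriptor index, wrapping at the ring size. */
static uint16_t
rx_tail_writeback(uint16_t last_processed, uint16_t nb_desc)
{
	return (uint16_t)((last_processed == 0) ? nb_desc - 1
						: last_processed - 1);
}
```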
- /* Determin if RS bit needs to be set */ + /* Determine if RS bit needs to be set */ if (txq->tx_tail > txq->tx_next_rs) { txr[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) << diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index 6cd44c5847..fd94cedde3 100644 --- a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -202,7 +202,7 @@ ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4], __m128i l3_l4_mask = _mm_set_epi32(~0x6, ~0x6, ~0x6, ~0x6); __m128i l3_l4_flags = _mm_and_si128(flags, l3_l4_mask); flags = _mm_or_si128(l3_l4_flags, l4_outer_flags); - /* we need to mask out the reduntant bits introduced by RSS or + /* we need to mask out the redundant bits introduced by RSS or * VLAN fields. */ flags = _mm_and_si128(flags, cksum_mask); @@ -566,7 +566,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts, _mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1, pkt_mb0); ice_rx_desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl); - /* C.4 calc avaialbe number of desc */ + /* C.4 calc available number of desc */ var = __builtin_popcountll(_mm_cvtsi128_si64(staterr)); nb_pkts_recd += var; if (likely(var != ICE_DESCS_PER_LOOP)) diff --git a/drivers/net/igc/igc_filter.c b/drivers/net/igc/igc_filter.c index 51fcabfb59..bff98df200 100644 --- a/drivers/net/igc/igc_filter.c +++ b/drivers/net/igc/igc_filter.c @@ -167,7 +167,7 @@ igc_tuple_filter_lookup(const struct igc_adapter *igc, /* search the filter array */ for (; i < IGC_MAX_NTUPLE_FILTERS; i++) { if (igc->ntuple_filters[i].hash_val) { - /* compare the hase value */ + /* compare the hash value */ if (ntuple->hash_val == igc->ntuple_filters[i].hash_val) /* filter be found, return index */ diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c index 339b0c9aa1..e48d5df11a 100644 --- a/drivers/net/igc/igc_txrx.c +++ b/drivers/net/igc/igc_txrx.c @@ -2099,7 
+2099,7 @@ eth_igc_tx_done_cleanup(void *txqueue, uint32_t free_cnt) sw_ring[tx_id].mbuf = NULL; sw_ring[tx_id].last_id = tx_id;
- /* Move to next segemnt. */ + /* Move to next segment. */ tx_id = sw_ring[tx_id].next_id; } while (tx_id != tx_next);
@@ -2133,7 +2133,7 @@ eth_igc_tx_done_cleanup(void *txqueue, uint32_t free_cnt) * Walk the list and find the next mbuf, if any. */ do { - /* Move to next segemnt. */ + /* Move to next segment. */ tx_id = sw_ring[tx_id].next_id;
if (sw_ring[tx_id].mbuf) diff --git a/drivers/net/ionic/ionic_if.h b/drivers/net/ionic/ionic_if.h index 693b44d764..45bad9b040 100644 --- a/drivers/net/ionic/ionic_if.h +++ b/drivers/net/ionic/ionic_if.h @@ -2068,7 +2068,7 @@ typedef struct ionic_admin_comp ionic_fw_download_comp; * enum ionic_fw_control_oper - FW control operations * @IONIC_FW_RESET: Reset firmware * @IONIC_FW_INSTALL: Install firmware - * @IONIC_FW_ACTIVATE: Acticate firmware + * @IONIC_FW_ACTIVATE: Activate firmware */ enum ionic_fw_control_oper { IONIC_FW_RESET = 0, @@ -2091,7 +2091,7 @@ struct ionic_fw_control_cmd { };
/** - * struct ionic_fw_control_comp - Firmware control copletion + * struct ionic_fw_control_comp - Firmware control completion * @status: Status of the command (enum ionic_status_code) * @comp_index: Index in the descriptor ring for which this is the completion * @slot: Slot where the firmware was installed @@ -2878,7 +2878,7 @@ struct ionic_doorbell { * and @identity->intr_coal_div to convert from * usecs to device units: * - * coal_init = coal_usecs * coal_mutl / coal_div + * coal_init = coal_usecs * coal_mult / coal_div * * When an interrupt is sent the interrupt * coalescing timer current value diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.c b/drivers/net/ipn3ke/ipn3ke_ethdev.c index 964506c6db..014e438dd5 100644 --- a/drivers/net/ipn3ke/ipn3ke_ethdev.c +++ b/drivers/net/ipn3ke/ipn3ke_ethdev.c @@ -483,7 +483,7 @@ static int ipn3ke_vswitch_probe(struct rte_afu_device *afu_dev) RTE_CACHE_LINE_SIZE, afu_dev->device.numa_node); if (!hw) { - IPN3KE_AFU_PMD_ERR("failed to allocate hardwart data"); + IPN3KE_AFU_PMD_ERR("failed to allocate hardware data"); retval = -ENOMEM; return -ENOMEM; } diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.h b/drivers/net/ipn3ke/ipn3ke_ethdev.h index 041f13d9c3..58fcc50c57 100644 --- a/drivers/net/ipn3ke/ipn3ke_ethdev.h +++ b/drivers/net/ipn3ke/ipn3ke_ethdev.h @@ -223,7 +223,7 @@ struct ipn3ke_hw_cap { };
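The corrected coalescing comment above gives the conversion `coal_init = coal_usecs * coal_mult / coal_div`. As a quick illustration of the units conversion (the multiplier and divisor values below are made-up examples, not real ionic identity data):

```c
#include <assert.h>
#include <stdint.h>

/* Convert an interrupt-coalescing interval from microseconds to
 * device timer units, per the formula in the comment. */
static uint32_t
coal_usecs_to_units(uint32_t usecs, uint32_t mult, uint32_t div)
{
	return usecs * mult / div;
}
```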
/** - * Strucute to store private data for each representor instance + * Structure to store private data for each representor instance */ struct ipn3ke_rpst { TAILQ_ENTRY(ipn3ke_rpst) next; /**< Next in device list. */ @@ -237,7 +237,7 @@ struct ipn3ke_rpst { uint16_t i40e_pf_eth_port_id; struct rte_eth_link ori_linfo; struct ipn3ke_tm_internals tm; - /**< Private data store of assocaiated physical function */ + /**< Private data store of associated physical function */ struct rte_ether_addr mac_addr; };
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c index f5867ca055..66ae31a5a9 100644 --- a/drivers/net/ipn3ke/ipn3ke_flow.c +++ b/drivers/net/ipn3ke/ipn3ke_flow.c @@ -1299,7 +1299,7 @@ int ipn3ke_flow_init(void *dev) IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_LKUP_ENABLE: %x\n", data);
- /* configure rx parse config, settings associatied with VxLAN */ + /* configure rx parse config, settings associated with VxLAN */ IPN3KE_MASK_WRITE_REG(hw, IPN3KE_CLF_RX_PARSE_CFG, 0, diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c index 8139e13a23..abbecfdf2e 100644 --- a/drivers/net/ipn3ke/ipn3ke_representor.c +++ b/drivers/net/ipn3ke/ipn3ke_representor.c @@ -2279,7 +2279,7 @@ ipn3ke_rpst_xstats_get count++; }
- /* Get individiual stats from ipn3ke_rpst_hw_port */ + /* Get individual stats from ipn3ke_rpst_hw_port */ for (i = 0; i < IPN3KE_RPST_HW_PORT_XSTATS_CNT; i++) { xstats[count].value = *(uint64_t *)(((char *)(&hw_stats)) + ipn3ke_rpst_hw_port_strings[i].offset); @@ -2287,7 +2287,7 @@ ipn3ke_rpst_xstats_get count++; }
- /* Get individiual stats from ipn3ke_rpst_rxq_pri */ + /* Get individual stats from ipn3ke_rpst_rxq_pri */ for (i = 0; i < IPN3KE_RPST_RXQ_PRIO_XSTATS_CNT; i++) { for (prio = 0; prio < IPN3KE_RPST_PRIO_XSTATS_CNT; prio++) { xstats[count].value = @@ -2299,7 +2299,7 @@ ipn3ke_rpst_xstats_get } }
- /* Get individiual stats from ipn3ke_rpst_txq_prio */ + /* Get individual stats from ipn3ke_rpst_txq_prio */ for (i = 0; i < IPN3KE_RPST_TXQ_PRIO_XSTATS_CNT; i++) { for (prio = 0; prio < IPN3KE_RPST_PRIO_XSTATS_CNT; prio++) { xstats[count].value = @@ -2337,7 +2337,7 @@ __rte_unused unsigned int limit) count++; }
- /* Get individiual stats from ipn3ke_rpst_hw_port */ + /* Get individual stats from ipn3ke_rpst_hw_port */ for (i = 0; i < IPN3KE_RPST_HW_PORT_XSTATS_CNT; i++) { snprintf(xstats_names[count].name, sizeof(xstats_names[count].name), @@ -2346,7 +2346,7 @@ __rte_unused unsigned int limit) count++; }
- /* Get individiual stats from ipn3ke_rpst_rxq_pri */ + /* Get individual stats from ipn3ke_rpst_rxq_pri */ for (i = 0; i < IPN3KE_RPST_RXQ_PRIO_XSTATS_CNT; i++) { for (prio = 0; prio < 8; prio++) { snprintf(xstats_names[count].name, @@ -2358,7 +2358,7 @@ __rte_unused unsigned int limit) } }
- /* Get individiual stats from ipn3ke_rpst_txq_prio */ + /* Get individual stats from ipn3ke_rpst_txq_prio */ for (i = 0; i < IPN3KE_RPST_TXQ_PRIO_XSTATS_CNT; i++) { for (prio = 0; prio < 8; prio++) { snprintf(xstats_names[count].name, diff --git a/drivers/net/ipn3ke/meson.build b/drivers/net/ipn3ke/meson.build index 4bf739809e..104d2f58e5 100644 --- a/drivers/net/ipn3ke/meson.build +++ b/drivers/net/ipn3ke/meson.build @@ -8,7 +8,7 @@ if is_windows endif
# -# Add the experimenatal APIs called from this PMD +# Add the experimental APIs called from this PMD # rte_eth_switch_domain_alloc() # rte_eth_dev_create() # rte_eth_dev_destroy() diff --git a/drivers/net/ixgbe/ixgbe_bypass.c b/drivers/net/ixgbe/ixgbe_bypass.c index 67ced6c723..94f34a2996 100644 --- a/drivers/net/ixgbe/ixgbe_bypass.c +++ b/drivers/net/ixgbe/ixgbe_bypass.c @@ -11,7 +11,7 @@
#define BYPASS_STATUS_OFF_MASK 3
-/* Macros to check for invlaid function pointers. */ +/* Macros to check for invalid function pointers. */ #define FUNC_PTR_OR_ERR_RET(func, retval) do { \ if ((func) == NULL) { \ PMD_DRV_LOG(ERR, "%s:%d function not supported", \ diff --git a/drivers/net/ixgbe/ixgbe_bypass_api.h b/drivers/net/ixgbe/ixgbe_bypass_api.h index 8eb773391b..6ef965dbb6 100644 --- a/drivers/net/ixgbe/ixgbe_bypass_api.h +++ b/drivers/net/ixgbe/ixgbe_bypass_api.h @@ -135,7 +135,7 @@ static s32 ixgbe_bypass_rw_generic(struct ixgbe_hw *hw, u32 cmd, u32 *status) * ixgbe_bypass_valid_rd_generic - Verify valid return from bit-bang. * * If we send a write we can't be sure it took until we can read back - * that same register. It can be a problem as some of the feilds may + * that same register. It can be a problem as some of the fields may * for valid reasons change between the time wrote the register and * we read it again to verify. So this function check everything we * can check and then assumes it worked. @@ -189,7 +189,7 @@ static bool ixgbe_bypass_valid_rd_generic(u32 in_reg, u32 out_reg) }
/** - * ixgbe_bypass_set_generic - Set a bypass field in the FW CTRL Regiter. + * ixgbe_bypass_set_generic - Set a bypass field in the FW CTRL Register. * * @hw: pointer to hardware structure * @cmd: The control word we are setting. diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index fe61dba81d..c8f0460440 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -2375,7 +2375,7 @@ ixgbe_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
- /* multipe queue mode checking */ + /* multiple queue mode checking */ ret = ixgbe_check_mq_mode(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "ixgbe_check_mq_mode fails with %d.", @@ -2603,7 +2603,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev) } }
- /* confiugre msix for sleep until rx interrupt */ + /* configure MSI-X for sleep until Rx interrupt */ ixgbe_configure_msix(dev);
/* initialize transmission unit */ @@ -2907,7 +2907,7 @@ ixgbe_dev_set_link_up(struct rte_eth_dev *dev) if (hw->mac.type == ixgbe_mac_82599EB) { #ifdef RTE_LIBRTE_IXGBE_BYPASS if (hw->device_id == IXGBE_DEV_ID_82599_BYPASS) { - /* Not suported in bypass mode */ + /* Not supported in bypass mode */ PMD_INIT_LOG(ERR, "Set link up is not supported " "by device id 0x%x", hw->device_id); return -ENOTSUP; @@ -2938,7 +2938,7 @@ ixgbe_dev_set_link_down(struct rte_eth_dev *dev) if (hw->mac.type == ixgbe_mac_82599EB) { #ifdef RTE_LIBRTE_IXGBE_BYPASS if (hw->device_id == IXGBE_DEV_ID_82599_BYPASS) { - /* Not suported in bypass mode */ + /* Not supported in bypass mode */ PMD_INIT_LOG(ERR, "Set link down is not supported " "by device id 0x%x", hw->device_id); return -ENOTSUP; @@ -4603,7 +4603,7 @@ ixgbe_dev_interrupt_action(struct rte_eth_dev *dev) * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -4659,7 +4659,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param) * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -5921,7 +5921,7 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev) /* Configure all RX queues of VF */ for (q_idx = 0; q_idx < dev->data->nb_rx_queues; q_idx++) { /* Force all queue use vector 0, - * as IXGBE_VF_MAXMSIVECOTR = 1 + * as IXGBE_VF_MAXMSIVECTOR = 1 */ ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx); rte_intr_vec_list_index_set(intr_handle, q_idx, @@ -6256,7 +6256,7 @@ ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev, * @param * dev: Pointer to struct rte_eth_dev. * index: the index the filter allocates. - * filter: ponter to the filter that will be added. + * filter: pointer to the filter that will be added. 
* rx_queue: the queue id the filter assigned to. * * @return @@ -6872,7 +6872,7 @@ ixgbe_timesync_disable(struct rte_eth_dev *dev) /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */ IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_1588), 0);
- /* Stop incrementating the System Time registers. */ + /* Stop incrementing the System Time registers. */ IXGBE_WRITE_REG(hw, IXGBE_TIMINCA, 0);
return 0; diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h index 83e8b5e56a..69e0e82a5b 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.h +++ b/drivers/net/ixgbe/ixgbe_ethdev.h @@ -68,7 +68,7 @@ #define IXGBE_LPBK_NONE 0x0 /* Default value. Loopback is disabled. */ #define IXGBE_LPBK_TX_RX 0x1 /* Tx->Rx loopback operation is enabled. */ /* X540-X550 specific loopback operations */ -#define IXGBE_MII_AUTONEG_ENABLE 0x1000 /* Auto-negociation enable (default = 1) */ +#define IXGBE_MII_AUTONEG_ENABLE 0x1000 /* Auto-negotiation enable (default = 1) */
#define IXGBE_MAX_JUMBO_FRAME_SIZE 0x2600 /* Maximum Jumbo frame size. */
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c index 7894047829..834c1b3f51 100644 --- a/drivers/net/ixgbe/ixgbe_fdir.c +++ b/drivers/net/ixgbe/ixgbe_fdir.c @@ -390,7 +390,7 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
switch (info->mask.tunnel_type_mask) { case 0: - /* Mask turnnel type */ + /* Mask tunnel type */ fdiripv6m |= IXGBE_FDIRIP6M_TUNNEL_TYPE; break; case 1: diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c index bdc9d4796c..368342872a 100644 --- a/drivers/net/ixgbe/ixgbe_flow.c +++ b/drivers/net/ixgbe/ixgbe_flow.c @@ -135,7 +135,7 @@ const struct rte_flow_action *next_no_void_action( }
/** - * Please aware there's an asumption for all the parsers. + * Please be aware there's an assumption for all the parsers. * rte_flow_item is using big endian, rte_flow_attr and * rte_flow_action are using CPU order. * Because the pattern is used to describe the packets, @@ -3261,7 +3261,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
/** * Check if the flow rule is supported by ixgbe. - * It only checkes the format. Don't guarantee the rule can be programmed into + * It only checks the format. Don't guarantee the rule can be programmed into * the HW. Because there can be no enough room for the rule. */ static int diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c index 944c9f2380..c353ae33b4 100644 --- a/drivers/net/ixgbe/ixgbe_ipsec.c +++ b/drivers/net/ixgbe/ixgbe_ipsec.c @@ -310,7 +310,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev, return -1; }
- /* Disable and clear Rx SPI and key table table entryes*/ + /* Disable and clear Rx SPI and key table entries */ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3); IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0); IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0); diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c index 9f1bd0a62b..c73833b7ae 100644 --- a/drivers/net/ixgbe/ixgbe_pf.c +++ b/drivers/net/ixgbe/ixgbe_pf.c @@ -242,7 +242,7 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev) /* PFDMA Tx General Switch Control Enables VMDQ loopback */ IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, IXGBE_PFDTXGSWC_VT_LBEN);
- /* clear VMDq map to perment rar 0 */ + /* clear VMDq map to permanent rar 0 */ hw->mac.ops.clear_vmdq(hw, 0, IXGBE_CLEAR_VMDQ_ALL);
/* clear VMDq map to scan rar 127 */ diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index d7c80d4242..99e928a2a9 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -1954,7 +1954,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); @@ -2303,7 +2303,7 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ if (!bulk_alloc && nb_hold > rxq->rx_free_thresh) { @@ -2666,7 +2666,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, */ tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ? tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); - /* force tx_rs_thresh to adapt an aggresive tx_free_thresh */ + /* force tx_rs_thresh to adapt an aggressive tx_free_thresh */ tx_rs_thresh = (DEFAULT_TX_RS_THRESH + tx_free_thresh > nb_desc) ? 
nb_desc - tx_free_thresh : DEFAULT_TX_RS_THRESH; if (tx_conf->tx_rs_thresh > 0) @@ -4831,7 +4831,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev) dev->data->port_id); dev->rx_pkt_burst = ixgbe_recv_pkts_lro_bulk_alloc; } else { - PMD_INIT_LOG(DEBUG, "Using Regualr (non-vector, " + PMD_INIT_LOG(DEBUG, "Using Regular (non-vector, " "single allocation) " "Scattered Rx callback " "(port=%d).", @@ -5170,7 +5170,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev) /* * Setup the Checksum Register. * Disable Full-Packet Checksum which is mutually exclusive with RSS. - * Enable IP/L4 checkum computation by hardware if requested to do so. + * Enable IP/L4 checksum computation by hardware if requested to do so. */ rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM); rxcsum |= IXGBE_RXCSUM_PCSD; diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index 1eed949495..c56f76b368 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -562,7 +562,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_ptype_v(descs, rxq->pkt_type_mask, &rx_pkts[pos]);
- /* C.4 calc avaialbe number of desc */ + /* C.4 calc available number of desc */ var = __builtin_popcountll(_mm_cvtsi128_si64(staterr)); nb_pkts_recd += var; if (likely(var != RTE_IXGBE_DESCS_PER_LOOP)) diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c index 079cf01269..42f48a68a1 100644 --- a/drivers/net/memif/memif_socket.c +++ b/drivers/net/memif/memif_socket.c @@ -726,7 +726,7 @@ memif_msg_receive(struct memif_control_channel *cc) break; case MEMIF_MSG_TYPE_INIT: /* - * This cc does not have an interface asociated with it. + * This cc does not have an interface associated with it. * If suitable interface is found it will be assigned here. */ ret = memif_msg_receive_init(cc, &msg); diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c index e3d523af57..59cb5a82a2 100644 --- a/drivers/net/memif/rte_eth_memif.c +++ b/drivers/net/memif/rte_eth_memif.c @@ -1026,7 +1026,7 @@ memif_regions_init(struct rte_eth_dev *dev) if (ret < 0) return ret; } else { - /* create one memory region contaning rings and buffers */ + /* create one memory region containing rings and buffers */ ret = memif_region_init_shm(dev, /* has buffers */ 1); if (ret < 0) return ret; diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h index 2d0c512f79..4023a47602 100644 --- a/drivers/net/mlx4/mlx4.h +++ b/drivers/net/mlx4/mlx4.h @@ -74,7 +74,7 @@ enum mlx4_mp_req_type { MLX4_MP_REQ_STOP_RXTX, };
-/* Pameters for IPC. */ +/* Parameters for IPC. */ struct mlx4_mp_param { enum mlx4_mp_req_type type; int port_id; diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c index d606ec8ca7..ce74c51ce2 100644 --- a/drivers/net/mlx4/mlx4_ethdev.c +++ b/drivers/net/mlx4/mlx4_ethdev.c @@ -752,7 +752,7 @@ mlx4_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) * Pointer to Ethernet device structure. * * @return - * alwasy 0 on success + * always 0 on success */ int mlx4_stats_reset(struct rte_eth_dev *dev) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index c29fe3d92b..36f0fbf04a 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -112,7 +112,7 @@ static struct mlx5_indexed_pool_config icfg[] = { * Pointer to RQ channel object, which includes the channel fd * * @param[out] fd - * The file descriptor (representing the intetrrupt) used in this channel. + * The file descriptor (representing the interrupt) used in this channel. * * @return * 0 on successfully setting the fd to non-blocking, non-zero otherwise. @@ -1743,7 +1743,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, priv->drop_queue.hrxq = mlx5_drop_action_create(eth_dev); if (!priv->drop_queue.hrxq) goto error; - /* Port representor shares the same max prioirity with pf port. */ + /* Port representor shares the same max priority with pf port. */ if (!priv->sh->flow_priority_check_flag) { /* Supported Verbs flow priority number detection. */ err = mlx5_flow_discover_priorities(eth_dev); @@ -2300,7 +2300,7 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev, /* * Force standalone bonding * device for ROCE LAG - * confgiurations. + * configurations. 
*/ list[ns].info.master = 0; list[ns].info.representor = 0; @@ -2637,7 +2637,7 @@ mlx5_os_pci_probe(struct mlx5_common_device *cdev) } if (ret) { DRV_LOG(ERR, "Probe of PCI device " PCI_PRI_FMT " " - "aborted due to proding failure of PF %u", + "aborted due to probing failure of PF %u", pci_dev->addr.domain, pci_dev->addr.bus, pci_dev->addr.devid, pci_dev->addr.function, eth_da.ports[p]); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index aa5f313c1a..8b4387d6b4 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1642,7 +1642,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) /* * Free the shared context in last turn, because the cleanup * routines above may use some shared fields, like - * mlx5_os_mac_addr_flush() uses ibdev_path for retrieveing + * mlx5_os_mac_addr_flush() uses ibdev_path for retrieving * ifindex if Netlink fails. */ mlx5_free_shared_dev_ctx(priv->sh); @@ -1962,7 +1962,7 @@ mlx5_args_check(const char *key, const char *val, void *opaque) if (tmp != MLX5_RCM_NONE && tmp != MLX5_RCM_LIGHT && tmp != MLX5_RCM_AGGR) { - DRV_LOG(ERR, "Unrecognize %s: \"%s\"", key, val); + DRV_LOG(ERR, "Unrecognized %s: \"%s\"", key, val); rte_errno = EINVAL; return -rte_errno; } @@ -2177,17 +2177,17 @@ mlx5_set_metadata_mask(struct rte_eth_dev *dev) break; } if (sh->dv_mark_mask && sh->dv_mark_mask != mark) - DRV_LOG(WARNING, "metadata MARK mask mismatche %08X:%08X", + DRV_LOG(WARNING, "metadata MARK mask mismatch %08X:%08X", sh->dv_mark_mask, mark); else sh->dv_mark_mask = mark; if (sh->dv_meta_mask && sh->dv_meta_mask != meta) - DRV_LOG(WARNING, "metadata META mask mismatche %08X:%08X", + DRV_LOG(WARNING, "metadata META mask mismatch %08X:%08X", sh->dv_meta_mask, meta); else sh->dv_meta_mask = meta; if (sh->dv_regc0_mask && sh->dv_regc0_mask != reg_c0) - DRV_LOG(WARNING, "metadata reg_c0 mask mismatche %08X:%08X", + DRV_LOG(WARNING, "metadata reg_c0 mask mismatch %08X:%08X", sh->dv_meta_mask, reg_c0); else sh->dv_regc0_mask = reg_c0; diff
--git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 8466531060..b55f5816af 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -977,7 +977,7 @@ struct mlx5_flow_id_pool { uint32_t base_index; /**< The next index that can be used without any free elements. */ uint32_t *curr; /**< Pointer to the index to pop. */ - uint32_t *last; /**< Pointer to the last element in the empty arrray. */ + uint32_t *last; /**< Pointer to the last element in the empty array. */ uint32_t max_id; /**< Maximum id can be allocated from the pool. */ };
@@ -1014,7 +1014,7 @@ struct mlx5_dev_txpp { void *pp; /* Packet pacing context. */ uint16_t pp_id; /* Packet pacing context index. */ uint16_t ts_n; /* Number of captured timestamps. */ - uint16_t ts_p; /* Pointer to statisticks timestamp. */ + uint16_t ts_p; /* Pointer to statistics timestamp. */ struct mlx5_txpp_ts *tsa; /* Timestamps sliding window stats. */ struct mlx5_txpp_ts ts; /* Cached completion id/timestamp. */ uint32_t sync_lost:1; /* ci/timestamp synchronization lost. */ @@ -1118,7 +1118,7 @@ struct mlx5_flex_parser_devx { uint32_t sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; };
-/* Pattern field dscriptor - how to translate flex pattern into samples. */ +/* Pattern field descriptor - how to translate flex pattern into samples. */ __extension__ struct mlx5_flex_pattern_field { uint16_t width:6; @@ -1169,7 +1169,7 @@ struct mlx5_dev_ctx_shared { /* Shared DV/DR flow data section. */ uint32_t dv_meta_mask; /* flow META metadata supported mask. */ uint32_t dv_mark_mask; /* flow MARK metadata supported mask. */ - uint32_t dv_regc0_mask; /* available bits of metatada reg_c[0]. */ + uint32_t dv_regc0_mask; /* available bits of metadata reg_c[0]. */ void *fdb_domain; /* FDB Direct Rules name space handle. */ void *rx_domain; /* RX Direct Rules name space handle. */ void *tx_domain; /* TX Direct Rules name space handle. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index f34e4b88aa..b7cf4143d5 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1206,7 +1206,7 @@ flow_rxq_tunnel_ptype_update(struct mlx5_rxq_ctrl *rxq_ctrl) }
/** - * Set the Rx queue flags (Mark/Flag and Tunnel Ptypes) according to the devive + * Set the Rx queue flags (Mark/Flag and Tunnel Ptypes) according to the device * flow. * * @param[in] dev @@ -3008,7 +3008,7 @@ mlx5_flow_validate_item_geneve_opt(const struct rte_flow_item *item, if ((uint32_t)spec->option_len > MLX5_GENEVE_OPTLEN_MASK) return rte_flow_error_set (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, - "Geneve TLV opt length exceeeds the limit (31)"); + "Geneve TLV opt length exceeds the limit (31)"); /* Check if class type and length masks are full. */ if (full_mask.option_class != mask->option_class || full_mask.option_type != mask->option_type || @@ -3957,7 +3957,7 @@ find_graph_root(uint32_t rss_level) * subflow. * * @param[in] dev_flow - * Pointer the created preifx subflow. + * Pointer the created prefix subflow. * * @return * The layers get from prefix subflow. @@ -4284,7 +4284,7 @@ flow_dv_mreg_create_cb(void *tool_ctx, void *cb_ctx) [3] = { .type = RTE_FLOW_ACTION_TYPE_END, }, };
- /* Fill the register fileds in the flow. */ + /* Fill the register fields in the flow. */ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error); if (ret < 0) return NULL; @@ -4353,7 +4353,7 @@ flow_dv_mreg_create_cb(void *tool_ctx, void *cb_ctx) /* * The copy Flows are not included in any list. There * ones are referenced from other Flows and can not - * be applied, removed, deleted in ardbitrary order + * be applied, removed, deleted in arbitrary order * by list traversing. */ mcp_res->rix_flow = flow_list_create(dev, MLX5_FLOW_TYPE_MCP, @@ -4810,7 +4810,7 @@ flow_create_split_inner(struct rte_eth_dev *dev, /* * If dev_flow is as one of the suffix flow, some actions in suffix * flow may need some user defined item layer flags, and pass the - * Metadate rxq mark flag to suffix flow as well. + * Metadata rxq mark flag to suffix flow as well. */ if (flow_split_info->prefix_layers) dev_flow->handle->layers = flow_split_info->prefix_layers; @@ -5359,7 +5359,7 @@ flow_mreg_split_qrss_prep(struct rte_eth_dev *dev, * @param[out] error * Perform verbose error reporting if not NULL. * @param[in] encap_idx - * The encap action inndex. + * The encap action index. * * @return * 0 on success, negative value otherwise @@ -6884,7 +6884,7 @@ flow_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type, * @param type * Flow type to be flushed. * @param active - * If flushing is called avtively. + * If flushing is called actively. */ void mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type, @@ -8531,7 +8531,7 @@ mlx5_flow_dev_dump_sh_all(struct rte_eth_dev *dev, * Perform verbose error reporting if not NULL. PMDs initialize this * structure in case of error only. * @return - * 0 on success, a nagative value otherwise. + * 0 on success, a negative value otherwise. */ int mlx5_flow_dev_dump(struct rte_eth_dev *dev, struct rte_flow *flow_idx, @@ -9009,7 +9009,7 @@ mlx5_get_tof(const struct rte_flow_item *item, }
/** - * tunnel offload functionalilty is defined for DV environment only + * tunnel offload functionality is defined for DV environment only */ #ifdef HAVE_IBV_FLOW_DV_SUPPORT __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 1f54649c69..8c131d61ae 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -598,7 +598,7 @@ struct mlx5_flow_tbl_data_entry { const struct mlx5_flow_tunnel *tunnel; uint32_t group_id; uint32_t external:1; - uint32_t tunnel_offload:1; /* Tunnel offlod table or not. */ + uint32_t tunnel_offload:1; /* Tunnel offload table or not. */ uint32_t is_egress:1; /**< Egress table. */ uint32_t is_transfer:1; /**< Transfer table. */ uint32_t dummy:1; /**< DR table. */ @@ -696,8 +696,8 @@ struct mlx5_flow_handle { /**< Bit-fields of present layers, see MLX5_FLOW_LAYER_*. */ void *drv_flow; /**< pointer to driver flow object. */ uint32_t split_flow_id:27; /**< Sub flow unique match flow id. */ - uint32_t is_meter_flow_id:1; /**< Indate if flow_id is for meter. */ - uint32_t mark:1; /**< Metadate rxq mark flag. */ + uint32_t is_meter_flow_id:1; /**< Indicate if flow_id is for meter. */ + uint32_t mark:1; /**< Metadata rxq mark flag. */ uint32_t fate_action:3; /**< Fate action type. */ uint32_t flex_item; /**< referenced Flex Item bitmask. 
*/ union { diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 3da122cbb9..8022d7d11f 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2032,7 +2032,7 @@ flow_dv_validate_item_meta(struct rte_eth_dev *dev __rte_unused, if (reg == REG_NON) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, - "unavalable extended metadata register"); + "unavailable extended metadata register"); if (reg == REG_B) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -3205,7 +3205,7 @@ flow_dv_validate_action_set_meta(struct rte_eth_dev *dev, if (reg == REG_NON) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, action, - "unavalable extended metadata register"); + "unavailable extended metadata register"); if (reg != REG_A && reg != REG_B) { struct mlx5_priv *priv = dev->data->dev_private;
@@ -5145,7 +5145,7 @@ flow_dv_modify_hdr_action_max(struct rte_eth_dev *dev __rte_unused, * Pointer to error structure. * * @return - * 0 on success, a negative errno value otherwise and rte_ernno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_validate_action_meter(struct rte_eth_dev *dev, @@ -7858,7 +7858,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, * - Explicit decap action is prohibited by the tunnel offload API. * - Drop action in tunnel steer rule is prohibited by the API. * - Application cannot use MARK action because it's value can mask - * tunnel default miss nitification. + * tunnel default miss notification. * - JUMP in tunnel match rule has no support in current PMD * implementation. * - TAG & META are reserved for future uses. @@ -9184,7 +9184,7 @@ flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, geneve_opt_v->option_type && geneve_opt_resource->length == geneve_opt_v->option_len) { - /* We already have GENVE TLV option obj allocated. */ + /* We already have GENEVE TLV option obj allocated. */ __atomic_fetch_add(&geneve_opt_resource->refcnt, 1, __ATOMIC_RELAXED); } else { @@ -10226,7 +10226,7 @@ __flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria) * Check flow matching criteria first, subtract misc5/4 length if flow * doesn't own misc5/4 parameters. In some old rdma-core releases, * misc5/4 are not supported, and matcher creation failure is expected - * w/o subtration. If misc5 is provided, misc4 must be counted in since + * w/o subtraction. If misc5 is provided, misc4 must be counted in since * misc5 is right after misc4. 
*/ if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) { @@ -11425,7 +11425,7 @@ flow_dv_dest_array_create_cb(void *tool_ctx __rte_unused, void *cb_ctx) goto error; } } - /* create a dest array actioin */ + /* create a dest array action */ ret = mlx5_os_flow_dr_create_flow_action_dest_array (domain, resource->num_of_dest, diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c index 64867dc9e2..9413d4d817 100644 --- a/drivers/net/mlx5/mlx5_flow_flex.c +++ b/drivers/net/mlx5/mlx5_flow_flex.c @@ -205,7 +205,7 @@ mlx5_flex_set_match_sample(void *misc4_m, void *misc4_v, * @param dev * Ethernet device to translate flex item on. * @param[in, out] matcher - * Flow matcher to confgiure + * Flow matcher to configure * @param[in, out] key * Flow matcher value. * @param[in] item @@ -457,7 +457,7 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr, if (field->offset_shift > 15 || field->offset_shift < 0) return rte_flow_error_set (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "header length field shift exceeeds limit"); + "header length field shift exceeds limit"); node->header_length_field_shift = field->offset_shift; node->header_length_field_offset = field->offset_base; } diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index f4a7b697e6..0e4e6ac3d5 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -251,7 +251,7 @@ mlx5_flow_meter_xir_man_exp_calc(int64_t xir, uint8_t *man, uint8_t *exp) uint8_t _exp = 0; uint64_t m, e;
- /* Special case xir == 0 ? both exp and matissa are 0. */ + /* Special case xir == 0 ? both exp and mantissa are 0. */ if (xir == 0) { *man = 0; *exp = 0; @@ -287,7 +287,7 @@ mlx5_flow_meter_xbs_man_exp_calc(uint64_t xbs, uint8_t *man, uint8_t *exp) int _exp; double _man;
- /* Special case xbs == 0 ? both exp and matissa are 0. */ + /* Special case xbs == 0 ? both exp and mantissa are 0. */ if (xbs == 0) { *man = 0; *exp = 0; @@ -305,7 +305,7 @@ mlx5_flow_meter_xbs_man_exp_calc(uint64_t xbs, uint8_t *man, uint8_t *exp) * Fill the prm meter parameter. * * @param[in,out] fmp - * Pointer to meter profie to be converted. + * Pointer to meter profile to be converted. * @param[out] error * Pointer to the error structure. * @@ -1101,7 +1101,7 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv, if (ret) return ret; } - /* Update succeedded modify meter parameters. */ + /* Update succeeded modify meter parameters. */ if (modify_bits & MLX5_FLOW_METER_OBJ_MODIFY_FIELD_ACTIVE) fm->active_state = !!active_state; } @@ -1615,7 +1615,7 @@ mlx5_flow_meter_profile_update(struct rte_eth_dev *dev, return -rte_mtr_error_set(error, -ret, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL, "Failed to update meter" - " parmeters in hardware."); + " parameters in hardware."); } old_fmp->ref_cnt--; fmp->ref_cnt++; diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index cc98cd1ea5..11214f6c41 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -178,7 +178,7 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id, * Pointer to the device structure. * * @param rx_queue_id - * Rx queue identificatior. + * Rx queue identification. * * @param mode * Pointer to the burts mode information. diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index f77d42dedf..be5f4da1e5 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -2152,7 +2152,7 @@ mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx) * Number of queues in the array. * * @return - * 1 if all queues in indirection table match 0 othrwise. + * 1 if all queues in indirection table match 0 otherwise. 
*/ static int mlx5_ind_table_obj_match_queues(const struct mlx5_ind_table_obj *ind_tbl, @@ -2586,7 +2586,7 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx, if (hrxq->standalone) { /* * Replacement of indirection table unsupported for - * stanalone hrxq objects (used by shared RSS). + * standalone hrxq objects (used by shared RSS). */ rte_errno = ENOTSUP; return -rte_errno; diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h index 423e229508..f6e434c165 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h +++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h @@ -1230,7 +1230,7 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq, uint32_t mask = rxq->flow_meta_port_mask; uint32_t metadata;
- /* This code is subject for futher optimization. */ + /* This code is subject for further optimization. */ metadata = rte_be_to_cpu_32 (cq[pos].flow_table_metadata) & mask; *RTE_MBUF_DYNFIELD(pkts[pos], offs, uint32_t *) = diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h index b1d16baa61..f7bbde4e0e 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h +++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h @@ -839,7 +839,7 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq, } } if (rxq->dynf_meta) { - /* This code is subject for futher optimization. */ + /* This code is subject for further optimization. */ int32_t offs = rxq->flow_meta_offset; uint32_t mask = rxq->flow_meta_port_mask;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h index f3d838389e..185d2695db 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h +++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h @@ -772,7 +772,7 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq, } } if (rxq->dynf_meta) { - /* This code is subject for futher optimization. */ + /* This code is subject for further optimization. */ int32_t offs = rxq->flow_meta_offset; uint32_t mask = rxq->flow_meta_port_mask;
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c index 5492d64cae..fd2cf20967 100644 --- a/drivers/net/mlx5/mlx5_tx.c +++ b/drivers/net/mlx5/mlx5_tx.c @@ -728,7 +728,7 @@ mlx5_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id, * Pointer to the device structure. * * @param tx_queue_id - * Tx queue identificatior. + * Tx queue identification. * * @param mode * Pointer to the burts mode information. diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index cf3db89403..e2dcbafc0a 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -55,7 +55,7 @@ extern int mlx5_logtype;
/* * For the case which data is linked with sequence increased index, the - * array table will be more efficiect than hash table once need to serarch + * array table will be more efficient than hash table once need to search * one data entry in large numbers of entries. Since the traditional hash * tables has fixed table size, when huge numbers of data saved to the hash * table, it also comes lots of hash conflict. diff --git a/drivers/net/mlx5/windows/mlx5_flow_os.c b/drivers/net/mlx5/windows/mlx5_flow_os.c index c4d5790726..7bb4c4590a 100644 --- a/drivers/net/mlx5/windows/mlx5_flow_os.c +++ b/drivers/net/mlx5/windows/mlx5_flow_os.c @@ -400,7 +400,7 @@ mlx5_flow_os_set_specific_workspace(struct mlx5_flow_workspace *data) /* * set_specific_workspace when current value is NULL * can happen only once per thread, mark this thread in - * linked list to be able to release reasorces later on. + * linked list to be able to release resources later on. */ err = mlx5_add_workspace_to_list(data); if (err) { diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index dec4b923d0..f143724990 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -226,7 +226,7 @@ mlx5_os_free_shared_dr(struct mlx5_priv *priv) * Pointer to RQ channel object, which includes the channel fd * * @param[out] fd - * The file descriptor (representing the intetrrupt) used in this channel. + * The file descriptor (representing the interrupt) used in this channel. * * @return * 0 on successfully setting the fd to non-blocking, non-zero otherwise. 
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c index 10fe6d828c..eef016aa0b 100644 --- a/drivers/net/mvneta/mvneta_ethdev.c +++ b/drivers/net/mvneta/mvneta_ethdev.c @@ -247,7 +247,7 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) (mru + MRVL_NETA_PKT_OFFS > mbuf_data_size)) { mru = mbuf_data_size - MRVL_NETA_PKT_OFFS; mtu = MRVL_NETA_MRU_TO_MTU(mru); - MVNETA_LOG(WARNING, "MTU too big, max MTU possible limitted by" + MVNETA_LOG(WARNING, "MTU too big, max MTU possible limited by" " current mbuf size: %u. Set MTU to %u, MRU to %u", mbuf_data_size, mtu, mru); } diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c index 2a8fb6cbce..735efb6cfc 100644 --- a/drivers/net/mvpp2/mrvl_ethdev.c +++ b/drivers/net/mvpp2/mrvl_ethdev.c @@ -579,7 +579,7 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (mru - RTE_ETHER_CRC_LEN + MRVL_PKT_OFFS > mbuf_data_size) { mru = mbuf_data_size + RTE_ETHER_CRC_LEN - MRVL_PKT_OFFS; mtu = MRVL_PP2_MRU_TO_MTU(mru); - MRVL_LOG(WARNING, "MTU too big, max MTU possible limitted " + MRVL_LOG(WARNING, "MTU too big, max MTU possible limited " "by current mbuf size: %u. Set MTU to %u, MRU to %u", mbuf_data_size, mtu, mru); } diff --git a/drivers/net/mvpp2/mrvl_qos.c b/drivers/net/mvpp2/mrvl_qos.c index dbfc3b5d20..99f0ee56d1 100644 --- a/drivers/net/mvpp2/mrvl_qos.c +++ b/drivers/net/mvpp2/mrvl_qos.c @@ -301,7 +301,7 @@ get_entry_values(const char *entry, uint8_t *tab, }
/** - * Parse Traffic Class'es mapping configuration. + * Parse Traffic Classes mapping configuration. * * @param file Config file handle. * @param port Which port to look for. @@ -736,7 +736,7 @@ mrvl_get_cfg(const char *key __rte_unused, const char *path, void *extra_args)
/* MRVL_TOK_START_HDR replaces MRVL_TOK_DSA_MODE parameter. * MRVL_TOK_DSA_MODE will be supported for backward - * compatibillity. + * compatibility. */ entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_START_HDR); diff --git a/drivers/net/netvsc/hn_nvs.c b/drivers/net/netvsc/hn_nvs.c index 89dbba6cd9..a29ac18ff4 100644 --- a/drivers/net/netvsc/hn_nvs.c +++ b/drivers/net/netvsc/hn_nvs.c @@ -229,7 +229,7 @@ hn_nvs_conn_rxbuf(struct hn_data *hv) hv->rxbuf_section_cnt = resp.nvs_sect[0].slotcnt;
/* - * Pimary queue's rxbuf_info is not allocated at creation time. + * Primary queue's rxbuf_info is not allocated at creation time. * Now we can allocate it after we figure out the slotcnt. */ hv->primary->rxbuf_info = rte_calloc("HN_RXBUF_INFO", diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c index 028f176c7e..50ca1710ef 100644 --- a/drivers/net/netvsc/hn_rxtx.c +++ b/drivers/net/netvsc/hn_rxtx.c @@ -578,7 +578,7 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb, rte_iova_t iova;
/* - * Build an external mbuf that points to recveive area. + * Build an external mbuf that points to receive area. * Use refcount to handle multiple packets in same * receive buffer section. */ @@ -1031,7 +1031,7 @@ hn_dev_rx_queue_count(void *rx_queue) * returns: * - -EINVAL - offset outside of ring * - RTE_ETH_RX_DESC_AVAIL - no data available yet - * - RTE_ETH_RX_DESC_DONE - data is waiting in stagin ring + * - RTE_ETH_RX_DESC_DONE - data is waiting in staging ring */ int hn_dev_rx_queue_status(void *arg, uint16_t offset) { diff --git a/drivers/net/netvsc/hn_vf.c b/drivers/net/netvsc/hn_vf.c index fead8eba5d..ebb9c60147 100644 --- a/drivers/net/netvsc/hn_vf.c +++ b/drivers/net/netvsc/hn_vf.c @@ -103,7 +103,7 @@ static void hn_remove_delayed(void *args) struct rte_device *dev = rte_eth_devices[port_id].device; int ret;
- /* Tell VSP to switch data path to synthentic */ + /* Tell VSP to switch data path to synthetic */ hn_vf_remove(hv);
PMD_DRV_LOG(NOTICE, "Start to remove port %d", port_id); diff --git a/drivers/net/nfp/nfpcore/nfp-common/nfp_resid.h b/drivers/net/nfp/nfpcore/nfp-common/nfp_resid.h index 0e03948ec7..394a7628e0 100644 --- a/drivers/net/nfp/nfpcore/nfp-common/nfp_resid.h +++ b/drivers/net/nfp/nfpcore/nfp-common/nfp_resid.h @@ -63,7 +63,7 @@ * Wildcard indicating a CPP read or write action * * The action used will be either read or write depending on whether a read or - * write instruction/call is performed on the NFP_CPP_ID. It is recomended that + * write instruction/call is performed on the NFP_CPP_ID. It is recommended that * the RW action is used even if all actions to be performed on a NFP_CPP_ID are * known to be only reads or writes. Doing so will in many cases save NFP CPP * internal software resources. @@ -405,7 +405,7 @@ int nfp_idstr2meid(int chip_family, const char *s, const char **endptr); * @param chip_family Chip family ID * @param s A string of format "iX.anything" or "iX" * @param endptr If non-NULL, *endptr will point to the trailing - * striong after the ME ID part of the string, which + * string after the ME ID part of the string, which * is either an empty string or the first character * after the separating period. * @return The island ID on succes, -1 on error. @@ -425,7 +425,7 @@ int nfp_idstr2island(int chip_family, const char *s, const char **endptr); * @param chip_family Chip family ID * @param s A string of format "meX.anything" or "meX" * @param endptr If non-NULL, *endptr will point to the trailing - * striong after the ME ID part of the string, which + * string after the ME ID part of the string, which * is either an empty string or the first character * after the separating period. * @return The ME number on succes, -1 on error. 
diff --git a/drivers/net/nfp/nfpcore/nfp_cppcore.c b/drivers/net/nfp/nfpcore/nfp_cppcore.c index f91049383e..37799af558 100644 --- a/drivers/net/nfp/nfpcore/nfp_cppcore.c +++ b/drivers/net/nfp/nfpcore/nfp_cppcore.c @@ -202,7 +202,7 @@ nfp_cpp_area_alloc(struct nfp_cpp *cpp, uint32_t dest, * @address: start address on CPP target * @size: size of area * - * Allocate and initilizae a CPP area structure, and lock it down so + * Allocate and initialize a CPP area structure, and lock it down so * that it can be accessed directly. * * NOTE: @address and @size must be 32-bit aligned values. diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.h b/drivers/net/nfp/nfpcore/nfp_nsp.h index c9c7b0d0fb..e74cdeb191 100644 --- a/drivers/net/nfp/nfpcore/nfp_nsp.h +++ b/drivers/net/nfp/nfpcore/nfp_nsp.h @@ -272,7 +272,7 @@ int __nfp_eth_set_split(struct nfp_nsp *nsp, unsigned int lanes); * @br_primary: branch id of primary bootloader * @br_secondary: branch id of secondary bootloader * @br_nsp: branch id of NSP - * @primary: version of primarary bootloader + * @primary: version of primary bootloader * @secondary: version id of secondary bootloader * @nsp: version id of NSP * @sensor_mask: mask of present sensors available on NIC diff --git a/drivers/net/nfp/nfpcore/nfp_resource.c b/drivers/net/nfp/nfpcore/nfp_resource.c index dd41fa4de4..7b5630fd86 100644 --- a/drivers/net/nfp/nfpcore/nfp_resource.c +++ b/drivers/net/nfp/nfpcore/nfp_resource.c @@ -207,7 +207,7 @@ nfp_resource_acquire(struct nfp_cpp *cpp, const char *name) * nfp_resource_release() - Release a NFP Resource handle * @res: NFP Resource handle * - * NOTE: This function implictly unlocks the resource handle + * NOTE: This function implicitly unlocks the resource handle */ void nfp_resource_release(struct nfp_resource *res) diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.c b/drivers/net/nfp/nfpcore/nfp_rtsym.c index cb7d83db51..2feca2ed81 100644 --- a/drivers/net/nfp/nfpcore/nfp_rtsym.c +++ b/drivers/net/nfp/nfpcore/nfp_rtsym.c @@ 
-236,7 +236,7 @@ nfp_rtsym_lookup(struct nfp_rtsym_table *rtbl, const char *name) * nfp_rtsym_read_le() - Read a simple unsigned scalar value from symbol * @rtbl: NFP RTsym table * @name: Symbol name - * @error: Poniter to error code (optional) + * @error: Pointer to error code (optional) * * Lookup a symbol, map, read it and return it's value. Value of the symbol * will be interpreted as a simple little-endian unsigned value. Symbol can diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 981592f7f4..0d66c32551 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -983,7 +983,7 @@ ngbe_dev_start(struct rte_eth_dev *dev) } }
- /* confiugre MSI-X for sleep until Rx interrupt */ + /* configure MSI-X for sleep until Rx interrupt */ ngbe_configure_msix(dev);
/* initialize transmission unit */ @@ -2641,7 +2641,7 @@ ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction, wr32(hw, NGBE_IVARMISC, tmp); } else { /* rx or tx causes */ - /* Workround for ICR lost */ + /* Workaround for ICR lost */ idx = ((16 * (queue & 1)) + (8 * direction)); tmp = rd32(hw, NGBE_IVAR(queue >> 1)); tmp &= ~(0xFF << idx); @@ -2893,7 +2893,7 @@ ngbe_timesync_disable(struct rte_eth_dev *dev) /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */ wr32(hw, NGBE_ETFLT(NGBE_ETF_ID_1588), 0);
- /* Stop incrementating the System Time registers. */ + /* Stop incrementing the System Time registers. */ wr32(hw, NGBE_TSTIMEINC, 0);
return 0; diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c index 7f9c04fb0e..12a18de31d 100644 --- a/drivers/net/ngbe/ngbe_pf.c +++ b/drivers/net/ngbe/ngbe_pf.c @@ -163,7 +163,7 @@ int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev)
wr32(hw, NGBE_PSRCTL, NGBE_PSRCTL_LBENA);
- /* clear VMDq map to perment rar 0 */ + /* clear VMDq map to permanent rar 0 */ hw->mac.clear_vmdq(hw, 0, BIT_MASK32);
/* clear VMDq map to scan rar 31 */ diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c index 4f1e368c61..b47472ebbd 100644 --- a/drivers/net/octeontx/octeontx_ethdev.c +++ b/drivers/net/octeontx/octeontx_ethdev.c @@ -1090,7 +1090,7 @@ octeontx_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
/* Verify queue index */ if (qidx >= dev->data->nb_rx_queues) { - octeontx_log_err("QID %d not supporteded (0 - %d available)\n", + octeontx_log_err("QID %d not supported (0 - %d available)\n", qidx, (dev->data->nb_rx_queues - 1)); return -ENOTSUP; } diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index cc573bb2e8..f56d5b2a38 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -369,7 +369,7 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev) "rc=%d", rc); return rc; } - /* VFIO vector zero is resereved for misc interrupt so + /* VFIO vector zero is reserved for misc interrupt so * doing required adjustment. (b13bfab4cd) */ if (rte_intr_vec_list_index_set(handle, q, diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c index abb2130587..974018f97e 100644 --- a/drivers/net/octeontx2/otx2_ptp.c +++ b/drivers/net/octeontx2/otx2_ptp.c @@ -440,7 +440,7 @@ otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *clock) /* This API returns the raw PTP HI clock value. Since LFs doesn't * have direct access to PTP registers and it requires mbox msg * to AF for this value. In fastpath reading this value for every - * packet (which involes mbox call) becomes very expensive, hence + * packet (which involves mbox call) becomes very expensive, hence * we should be able to derive PTP HI clock value from tsc by * using freq_mult and clk_delta calculated during configure stage. */ diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h index 4bbd5a390f..a2fb7ce3cb 100644 --- a/drivers/net/octeontx2/otx2_tx.h +++ b/drivers/net/octeontx2/otx2_tx.h @@ -61,7 +61,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, /* Retrieving the default desc values */ cmd[off] = send_mem_desc[6];
- /* Using compiler barier to avoid voilation of C + /* Using compiler barrier to avoid violation of C * aliasing rules. */ rte_compiler_barrier(); @@ -70,7 +70,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, /* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp * should not be recorded, hence changing the alg type to * NIX_SENDMEMALG_SET and also changing send mem addr field to - * next 8 bytes as it corrpt the actual tx tstamp registered + * next 8 bytes as it corrupts the actual tx tstamp registered * address. */ send_mem->alg = NIX_SENDMEMALG_SETTSTMP - (is_ol_tstamp); diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index cce643b7b5..359680de5c 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -953,7 +953,7 @@ static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev) struct vlan_entry *entry; int rc;
- /* VLAN filters can't be set without setting filtern on */ + /* VLAN filters can't be set without setting filters on */ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true); if (rc) { otx2_err("Failed to reinstall vlan filters"); diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c index 0716beb9b1..85e14a998f 100644 --- a/drivers/net/octeontx_ep/otx2_ep_vf.c +++ b/drivers/net/octeontx_ep/otx2_ep_vf.c @@ -104,7 +104,7 @@ otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no) iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr + SDP_VF_R_IN_CNTS(iq_no);
- otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p", + otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p", iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
do { diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c index c9b91fef9e..96366b2a7f 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.c +++ b/drivers/net/octeontx_ep/otx_ep_vf.c @@ -117,7 +117,7 @@ otx_ep_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no) iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr + OTX_EP_R_IN_CNTS(iq_no);
- otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n", + otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p\n", iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
do { diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c index 047010e15e..ebb5d1ae0e 100644 --- a/drivers/net/pfe/pfe_ethdev.c +++ b/drivers/net/pfe/pfe_ethdev.c @@ -769,7 +769,7 @@ pfe_eth_init(struct rte_vdev_device *vdev, struct pfe *pfe, int id) if (eth_dev == NULL) return -ENOMEM;
- /* Extract pltform data */ + /* Extract platform data */ pfe_info = (struct ls1012a_pfe_platform_data *)&pfe->platform_data; if (!pfe_info) { PFE_PMD_ERR("pfe missing additional platform data"); diff --git a/drivers/net/pfe/pfe_hal.c b/drivers/net/pfe/pfe_hal.c index 41d783dbff..6431dec47e 100644 --- a/drivers/net/pfe/pfe_hal.c +++ b/drivers/net/pfe/pfe_hal.c @@ -187,7 +187,7 @@ gemac_set_mode(void *base, __rte_unused int mode) { u32 val = readl(base + EMAC_RCNTRL_REG);
- /*Remove loopbank*/ + /* Remove loopback */ val &= ~EMAC_RCNTRL_LOOP;
/*Enable flow control and MII mode*/ diff --git a/drivers/net/pfe/pfe_hif.c b/drivers/net/pfe/pfe_hif.c index c4a7154ba7..69b1d0edde 100644 --- a/drivers/net/pfe/pfe_hif.c +++ b/drivers/net/pfe/pfe_hif.c @@ -114,9 +114,9 @@ pfe_hif_init_buffers(struct pfe_hif *hif) * results, eth id, queue id from PFE block along with data. * so we have to provide additional memory for each packet to * HIF rx rings so that PFE block can write its headers. - * so, we are giving the data pointor to HIF rings whose + * so, we are giving the data pointer to HIF rings whose * calculation is as below: - * mbuf->data_pointor - Required_header_size + * mbuf->data_pointer - Required_header_size * * We are utilizing the HEADROOM area to receive the PFE * block headers. On packet reception, HIF driver will use diff --git a/drivers/net/pfe/pfe_hif.h b/drivers/net/pfe/pfe_hif.h index 6aaf904bb1..e8d5ba10e1 100644 --- a/drivers/net/pfe/pfe_hif.h +++ b/drivers/net/pfe/pfe_hif.h @@ -8,7 +8,7 @@ #define HIF_CLIENT_QUEUES_MAX 16 #define HIF_RX_PKT_MIN_SIZE RTE_CACHE_LINE_SIZE /* - * HIF_TX_DESC_NT value should be always greter than 4, + * HIF_TX_DESC_NT value should be always greater than 4, * Otherwise HIF_TX_POLL_MARK will become zero. */ #define HIF_RX_DESC_NT 64 diff --git a/drivers/net/pfe/pfe_hif_lib.c b/drivers/net/pfe/pfe_hif_lib.c index 799050dce3..6fe6d33d23 100644 --- a/drivers/net/pfe/pfe_hif_lib.c +++ b/drivers/net/pfe/pfe_hif_lib.c @@ -38,7 +38,7 @@ pfe_hif_shm_clean(struct hif_shm *hif_shm) * This function should be called before initializing HIF driver. 
* * @param[in] hif_shm Shared memory address location in DDR - * @rerurn 0 - on succes, <0 on fail to initialize + * @return 0 - on success, <0 on fail to initialize */ int pfe_hif_shm_init(struct hif_shm *hif_shm, struct rte_mempool *mb_pool) @@ -109,9 +109,9 @@ hif_lib_client_release_rx_buffers(struct hif_client_s *client) for (ii = 0; ii < client->rx_q[qno].size; ii++) { buf = (void *)desc->data; if (buf) { - /* Data pointor to mbuf pointor calculation: + /* Data pointer to mbuf pointer calculation: * "Data - User private data - headroom - mbufsize" - * Actual data pointor given to HIF BDs was + * Actual data pointer given to HIF BDs was * "mbuf->data_offset - PFE_PKT_HEADER_SZ" */ buf = buf + PFE_PKT_HEADER_SZ @@ -477,7 +477,7 @@ hif_hdr_write(struct hif_hdr *pkt_hdr, unsigned int client_id, unsigned int qno, u32 client_ctrl) { - /* Optimize the write since the destinaton may be non-cacheable */ + /* Optimize the write since the destination may be non-cacheable */ if (!((unsigned long)pkt_hdr & 0x3)) { ((u32 *)pkt_hdr)[0] = (client_ctrl << 16) | (qno << 8) | client_id; diff --git a/drivers/net/qede/qede_debug.c b/drivers/net/qede/qede_debug.c index 2297d245c4..af86bcc692 100644 --- a/drivers/net/qede/qede_debug.c +++ b/drivers/net/qede/qede_debug.c @@ -5983,7 +5983,7 @@ static char *qed_get_buf_ptr(void *buf, u32 offset) /* Reads a param from the specified buffer. Returns the number of dwords read. * If the returned str_param is NULL, the param is numeric and its value is * returned in num_param. - * Otheriwise, the param is a string and its pointer is returned in str_param. + * Otherwise, the param is a string and its pointer is returned in str_param. */ static u32 qed_read_param(u32 *dump_buf, const char **param_name, @@ -7558,7 +7558,7 @@ static enum dbg_status format_feature(struct ecore_hwfn *p_hwfn, text_buf[i] = '\n';
- /* Free the old dump_buf and point the dump_buf to the newly allocagted + /* Free the old dump_buf and point the dump_buf to the newly allocated * and formatted text buffer. */ OSAL_VFREE(p_hwfn, feature->dump_buf); diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c index 3e9aaeecd3..a1122a297e 100644 --- a/drivers/net/qede/qede_ethdev.c +++ b/drivers/net/qede/qede_ethdev.c @@ -2338,7 +2338,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) if (fp->rxq != NULL) { bufsz = (uint16_t)rte_pktmbuf_data_room_size( fp->rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; - /* cache align the mbuf size to simplfy rx_buf_size + /* cache align the mbuf size to simplify rx_buf_size * calculation */ bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz); diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c index c0eeea896e..7088c57b50 100644 --- a/drivers/net/qede/qede_rxtx.c +++ b/drivers/net/qede/qede_rxtx.c @@ -90,7 +90,7 @@ static inline int qede_alloc_rx_bulk_mbufs(struct qede_rx_queue *rxq, int count) * (MTU + Maximum L2 Header Size + 2) / ETH_RX_MAX_BUFF_PER_PKT * 3) In regular mode - minimum rx_buf_size should be * (MTU + Maximum L2 Header Size + 2) - * In above cases +2 corrosponds to 2 bytes padding in front of L2 + * In above cases +2 corresponds to 2 bytes padding in front of L2 * header. * 4) rx_buf_size should be cacheline-size aligned. So considering * criteria 1, we need to adjust the size to floor instead of ceil, @@ -106,7 +106,7 @@ qede_calc_rx_buf_size(struct rte_eth_dev *dev, uint16_t mbufsz,
if (dev->data->scattered_rx) { /* per HW limitation, only ETH_RX_MAX_BUFF_PER_PKT number of - * bufferes can be used for single packet. So need to make sure + * buffers can be used for single packet. So need to make sure * mbuf size is sufficient enough for this. */ if ((mbufsz * ETH_RX_MAX_BUFF_PER_PKT) < @@ -247,7 +247,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
/* Fix up RX buffer size */ bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM; - /* cache align the mbuf size to simplfy rx_buf_size calculation */ + /* cache align the mbuf size to simplify rx_buf_size calculation */ bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz); if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) || (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) { @@ -1745,7 +1745,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } }
- /* Request number of bufferes to be allocated in next loop */ + /* Request number of buffers to be allocated in next loop */ rxq->rx_alloc_count = rx_alloc_count;
rxq->rcv_pkts += rx_pkt; @@ -2042,7 +2042,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } }
- /* Request number of bufferes to be allocated in next loop */ + /* Request number of buffers to be allocated in next loop */ rxq->rx_alloc_count = rx_alloc_count;
rxq->rcv_pkts += rx_pkt; @@ -2506,7 +2506,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Inner L2 header size in two byte words */ inner_l2_hdr_size = (mbuf->l2_len - MPLSINUDP_HDR_SIZE) / 2; - /* Inner L4 header offset from the beggining + /* Inner L4 header offset from the beginning * of inner packet in two byte words */ inner_l4_hdr_offset = (mbuf->l2_len - diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h index 754efe793f..11ed1d9b9c 100644 --- a/drivers/net/qede/qede_rxtx.h +++ b/drivers/net/qede/qede_rxtx.h @@ -225,7 +225,7 @@ struct qede_fastpath { struct qede_tx_queue *txq; };
-/* This structure holds the inforation of fast path queues +/* This structure holds the information of fast path queues * belonging to individual engines in CMT mode. */ struct qede_fastpath_cmt { diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c index ed714fe02f..2cead4e045 100644 --- a/drivers/net/sfc/sfc.c +++ b/drivers/net/sfc/sfc.c @@ -371,7 +371,7 @@ sfc_set_drv_limits(struct sfc_adapter *sa)
/* * Limits are strict since take into account initial estimation. - * Resource allocation stategy is described in + * Resource allocation strategy is described in * sfc_estimate_resource_limits(). */ lim.edl_min_evq_count = lim.edl_max_evq_count = diff --git a/drivers/net/sfc/sfc_dp.c b/drivers/net/sfc/sfc_dp.c index d4cd162541..da2d1603cf 100644 --- a/drivers/net/sfc/sfc_dp.c +++ b/drivers/net/sfc/sfc_dp.c @@ -68,7 +68,7 @@ sfc_dp_register(struct sfc_dp_list *head, struct sfc_dp *entry) { if (sfc_dp_find_by_name(head, entry->type, entry->name) != NULL) { SFC_GENERIC_LOG(ERR, - "sfc %s dapapath '%s' already registered", + "sfc %s datapath '%s' already registered", entry->type == SFC_DP_RX ? "Rx" : entry->type == SFC_DP_TX ? "Tx" : "unknown", diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h index 760540ba22..246adbd87c 100644 --- a/drivers/net/sfc/sfc_dp_rx.h +++ b/drivers/net/sfc/sfc_dp_rx.h @@ -158,7 +158,7 @@ typedef int (sfc_dp_rx_qcreate_t)(uint16_t port_id, uint16_t queue_id, struct sfc_dp_rxq **dp_rxqp);
/** - * Free resources allocated for datapath recevie queue. + * Free resources allocated for datapath receive queue. */ typedef void (sfc_dp_rx_qdestroy_t)(struct sfc_dp_rxq *dp_rxq);
@@ -191,7 +191,7 @@ typedef bool (sfc_dp_rx_qrx_ps_ev_t)(struct sfc_dp_rxq *dp_rxq, /** * Receive queue purge function called after queue flush. * - * Should be used to free unused recevie buffers. + * Should be used to free unused receive buffers. */ typedef void (sfc_dp_rx_qpurge_t)(struct sfc_dp_rxq *dp_rxq);
diff --git a/drivers/net/sfc/sfc_ef100.h b/drivers/net/sfc/sfc_ef100.h index 5e2052d142..e81847e75a 100644 --- a/drivers/net/sfc/sfc_ef100.h +++ b/drivers/net/sfc/sfc_ef100.h @@ -19,7 +19,7 @@ extern "C" { * * @param evq_prime Global address of the prime register * @param evq_hw_index Event queue index - * @param evq_read_ptr Masked event qeueu read pointer + * @param evq_read_ptr Masked event queue read pointer */ static inline void sfc_ef100_evq_prime(volatile void *evq_prime, unsigned int evq_hw_index, diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c index 5d16bf281d..45253ed7dc 100644 --- a/drivers/net/sfc/sfc_ef100_rx.c +++ b/drivers/net/sfc/sfc_ef100_rx.c @@ -851,7 +851,7 @@ sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr, unsup_rx_prefix_fields = efx_rx_prefix_layout_check(pinfo, &sfc_ef100_rx_prefix_layout);
- /* LENGTH and CLASS filds must always be present */ + /* LENGTH and CLASS fields must always be present */ if ((unsup_rx_prefix_fields & ((1U << EFX_RX_PREFIX_FIELD_LENGTH) | (1U << EFX_RX_PREFIX_FIELD_CLASS))) != 0) diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c index 712c207617..78bd430363 100644 --- a/drivers/net/sfc/sfc_ef10_essb_rx.c +++ b/drivers/net/sfc/sfc_ef10_essb_rx.c @@ -630,7 +630,7 @@ sfc_ef10_essb_rx_qcreate(uint16_t port_id, uint16_t queue_id, rxq->block_size, rxq->buf_stride); sfc_ef10_essb_rx_info(&rxq->dp.dpq, "max fill level is %u descs (%u bufs), " - "refill threashold %u descs (%u bufs)", + "refill threshold %u descs (%u bufs)", rxq->max_fill_level, rxq->max_fill_level * rxq->block_size, rxq->refill_threshold, diff --git a/drivers/net/sfc/sfc_ef10_rx_ev.h b/drivers/net/sfc/sfc_ef10_rx_ev.h index 821e2227bb..412254e3d7 100644 --- a/drivers/net/sfc/sfc_ef10_rx_ev.h +++ b/drivers/net/sfc/sfc_ef10_rx_ev.h @@ -40,7 +40,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m, rte_cpu_to_le_64((1ull << ESF_DZ_RX_ECC_ERR_LBN) | (1ull << ESF_DZ_RX_ECRC_ERR_LBN) | (1ull << ESF_DZ_RX_PARSE_INCOMPLETE_LBN)))) { - /* Zero packet type is used as a marker to dicard bad packets */ + /* Zero packet type is used as a marker to discard bad packets */ goto done; }
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c index ab67aa9237..ddddefad7b 100644 --- a/drivers/net/sfc/sfc_intr.c +++ b/drivers/net/sfc/sfc_intr.c @@ -8,7 +8,7 @@ */
/* - * At the momemt of writing DPDK v16.07 has notion of two types of + * At the moment of writing DPDK v16.07 has notion of two types of * interrupts: LSC (link status change) and RXQ (receive indication). * It allows to register interrupt callback for entire device which is * not intended to be used for receive indication (i.e. link status diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c index 7104284106..cd58d60a36 100644 --- a/drivers/net/sfc/sfc_rx.c +++ b/drivers/net/sfc/sfc_rx.c @@ -1057,7 +1057,7 @@ sfc_rx_mb_pool_buf_size(struct sfc_adapter *sa, struct rte_mempool *mb_pool) /* Make sure that end padding does not write beyond the buffer */ if (buf_aligned < nic_align_end) { /* - * Estimate space which can be lost. If guarnteed buffer + * Estimate space which can be lost. If guaranteed buffer * size is odd, lost space is (nic_align_end - 1). More * accurate formula is below. */ @@ -1702,7 +1702,7 @@ sfc_rx_fini_queues(struct sfc_adapter *sa, unsigned int nb_rx_queues)
/* * Finalize only ethdev queues since other ones are finalized only - * on device close and they may require additional deinitializaton. + * on device close and they may require additional deinitialization. */ ethdev_qid = sas->ethdev_rxq_count; while (--ethdev_qid >= (int)nb_rx_queues) { @@ -1775,7 +1775,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
reconfigure = true;
- /* Do not ununitialize reserved queues */ + /* Do not uninitialize reserved queues */ if (nb_rx_queues < sas->ethdev_rxq_count) sfc_rx_fini_queues(sa, nb_rx_queues);
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c index 0dccf21f7c..cd927cf2f7 100644 --- a/drivers/net/sfc/sfc_tx.c +++ b/drivers/net/sfc/sfc_tx.c @@ -356,7 +356,7 @@ sfc_tx_fini_queues(struct sfc_adapter *sa, unsigned int nb_tx_queues)
/* * Finalize only ethdev queues since other ones are finalized only - * on device close and they may require additional deinitializaton. + * on device close and they may require additional deinitialization. */ ethdev_qid = sas->ethdev_txq_count; while (--ethdev_qid >= (int)nb_tx_queues) { diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c index ca70eab678..ad96288e7e 100644 --- a/drivers/net/softnic/rte_eth_softnic_flow.c +++ b/drivers/net/softnic/rte_eth_softnic_flow.c @@ -930,7 +930,7 @@ flow_rule_match_acl_get(struct pmd_internals *softnic __rte_unused, * Both *tmask* and *fmask* are byte arrays of size *tsize* and *fsize* * respectively. * They are located within a larger buffer at offsets *toffset* and *foffset* - * respectivelly. Both *tmask* and *fmask* represent bitmasks for the larger + * respectively. Both *tmask* and *fmask* represent bitmasks for the larger * buffer. * Question: are the two masks equivalent? * diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c index ddca630574..6567e5891b 100644 --- a/drivers/net/tap/rte_eth_tap.c +++ b/drivers/net/tap/rte_eth_tap.c @@ -525,7 +525,7 @@ tap_tx_l4_cksum(uint16_t *l4_cksum, uint16_t l4_phdr_cksum, } }
-/* Accumaulate L4 raw checksums */ +/* Accumulate L4 raw checksums */ static void tap_tx_l4_add_rcksum(char *l4_data, unsigned int l4_len, uint16_t *l4_cksum, uint32_t *l4_raw_cksum) diff --git a/drivers/net/tap/tap_bpf_api.c b/drivers/net/tap/tap_bpf_api.c index 98f6a76011..15283f8917 100644 --- a/drivers/net/tap/tap_bpf_api.c +++ b/drivers/net/tap/tap_bpf_api.c @@ -96,7 +96,7 @@ static inline int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr, * Load BPF instructions to kernel * * @param[in] type - * BPF program type: classifieir or action + * BPF program type: classifier or action * * @param[in] insns * Array of BPF instructions (equivalent to BPF instructions) @@ -104,7 +104,7 @@ static inline int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr, * @param[in] insns_cnt * Number of BPF instructions (size of array) * - * @param[in] lincense + * @param[in] license * License string that must be acknowledged by the kernel * * @return diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c index c4f60ce98e..7673823945 100644 --- a/drivers/net/tap/tap_flow.c +++ b/drivers/net/tap/tap_flow.c @@ -961,7 +961,7 @@ add_action(struct rte_flow *flow, size_t *act_index, struct action_data *adata) }
/** - * Helper function to send a serie of TC actions to the kernel + * Helper function to send a series of TC actions to the kernel * * @param[in] flow * Pointer to rte flow containing the netlink message @@ -2017,7 +2017,7 @@ static int bpf_rss_key(enum bpf_rss_key_e cmd, __u32 *key_idx) break;
/* - * Subtract offest to restore real key index + * Subtract offset to restore real key index * If a non RSS flow is falsely trying to release map * entry 0 - the offset subtraction will calculate the real * map index as an out-of-range value and the release operation diff --git a/drivers/net/thunderx/nicvf_svf.c b/drivers/net/thunderx/nicvf_svf.c index bccf290599..1bcf73d9fc 100644 --- a/drivers/net/thunderx/nicvf_svf.c +++ b/drivers/net/thunderx/nicvf_svf.c @@ -21,7 +21,7 @@ nicvf_svf_push(struct nicvf *vf)
entry = rte_zmalloc("nicvf", sizeof(*entry), RTE_CACHE_LINE_SIZE); if (entry == NULL) - rte_panic("Cannoc allocate memory for svf_entry\n"); + rte_panic("Cannot allocate memory for svf_entry\n");
entry->vf = vf;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c index 47d0e6ea40..ac4d4e08f4 100644 --- a/drivers/net/txgbe/txgbe_ethdev.c +++ b/drivers/net/txgbe/txgbe_ethdev.c @@ -1678,7 +1678,7 @@ txgbe_dev_start(struct rte_eth_dev *dev) return -ENOMEM; } } - /* confiugre msix for sleep until rx interrupt */ + /* configure msix for sleep until rx interrupt */ txgbe_configure_msix(dev);
/* initialize transmission unit */ @@ -3682,7 +3682,7 @@ txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction, wr32(hw, TXGBE_IVARMISC, tmp); } else { /* rx or tx causes */ - /* Workround for ICR lost */ + /* Workaround for ICR lost */ idx = ((16 * (queue & 1)) + (8 * direction)); tmp = rd32(hw, TXGBE_IVAR(queue >> 1)); tmp &= ~(0xFF << idx); @@ -4387,7 +4387,7 @@ txgbe_timesync_disable(struct rte_eth_dev *dev) /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */ wr32(hw, TXGBE_ETFLT(TXGBE_ETF_ID_1588), 0);
- /* Stop incrementating the System Time registers. */ + /* Stop incrementing the System Time registers. */ wr32(hw, TXGBE_TSTIMEINC, 0);
return 0; diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 84b960b8f9..f52cd8bc19 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -961,7 +961,7 @@ txgbevf_set_ivar_map(struct txgbe_hw *hw, int8_t direction, wr32(hw, TXGBE_VFIVARMISC, tmp); } else { /* rx or tx cause */ - /* Workround for ICR lost */ + /* Workaround for ICR lost */ idx = ((16 * (queue & 1)) + (8 * direction)); tmp = rd32(hw, TXGBE_VFIVAR(queue >> 1)); tmp &= ~(0xFF << idx); @@ -997,7 +997,7 @@ txgbevf_configure_msix(struct rte_eth_dev *dev) /* Configure all RX queues of VF */ for (q_idx = 0; q_idx < dev->data->nb_rx_queues; q_idx++) { /* Force all queue use vector 0, - * as TXGBE_VF_MAXMSIVECOTR = 1 + * as TXGBE_VF_MAXMSIVECTOR = 1 */ txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx); rte_intr_vec_list_index_set(intr_handle, q_idx, @@ -1288,7 +1288,7 @@ txgbevf_dev_interrupt_get_status(struct rte_eth_dev *dev)
/* only one misc vector supported - mailbox */ eicr &= TXGBE_VFICR_MASK; - /* Workround for ICR lost */ + /* Workaround for ICR lost */ intr->flags |= TXGBE_FLAG_MAILBOX;
/* To avoid compiler warnings set eicr to used. */ diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c index 445733f3ba..3ca3d85ed5 100644 --- a/drivers/net/txgbe/txgbe_ipsec.c +++ b/drivers/net/txgbe/txgbe_ipsec.c @@ -288,7 +288,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev, return -1; }
- /* Disable and clear Rx SPI and key table entryes*/ + /* Disable and clear Rx SPI and key table entries */ reg_val = TXGBE_IPSRXIDX_WRITE | TXGBE_IPSRXIDX_TB_SPI | (sa_index << 3); wr32(hw, TXGBE_IPSRXSPI, 0); diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c index 30be287330..67d92bfa56 100644 --- a/drivers/net/txgbe/txgbe_pf.c +++ b/drivers/net/txgbe/txgbe_pf.c @@ -236,7 +236,7 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
wr32(hw, TXGBE_PSRCTL, TXGBE_PSRCTL_LBENA);
- /* clear VMDq map to perment rar 0 */ + /* clear VMDq map to permanent rar 0 */ hw->mac.clear_vmdq(hw, 0, BIT_MASK32);
/* clear VMDq map to scan rar 127 */ diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index c2588369b2..b317649d7e 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -2657,7 +2657,7 @@ virtio_dev_configure(struct rte_eth_dev *dev) hw->has_rx_offload = rx_offload_enabled(hw);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) - /* Enable vector (0) for Link State Intrerrupt */ + /* Enable vector (0) for Link State Interrupt */ if (VIRTIO_OPS(hw)->set_config_irq(hw, 0) == VIRTIO_MSI_NO_VECTOR) { PMD_DRV_LOG(ERR, "failed to set config vector"); @@ -2775,7 +2775,7 @@ virtio_dev_start(struct rte_eth_dev *dev) } }
- /* Enable uio/vfio intr/eventfd mapping: althrough we already did that + /* Enable uio/vfio intr/eventfd mapping: although we already did that * in device configure, but it could be unmapped when device is * stopped. */ diff --git a/drivers/net/virtio/virtio_pci.c b/drivers/net/virtio/virtio_pci.c index 182cfc9eae..632451dcbe 100644 --- a/drivers/net/virtio/virtio_pci.c +++ b/drivers/net/virtio/virtio_pci.c @@ -235,7 +235,7 @@ legacy_get_isr(struct virtio_hw *hw) return dst; }
-/* Enable one vector (0) for Link State Intrerrupt */ +/* Enable one vector (0) for Link State Interrupt */ static uint16_t legacy_set_config_irq(struct virtio_hw *hw, uint16_t vec) { diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 2e115ded02..b39dd92d1b 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -962,7 +962,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr) return -EINVAL; }
- /* Update mss lengthes in mbuf */ + /* Update mss lengths in mbuf */ m->tso_segsz = hdr->gso_size; switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { case VIRTIO_NET_HDR_GSO_TCPV4: diff --git a/drivers/net/virtio/virtio_rxtx_packed_avx.h b/drivers/net/virtio/virtio_rxtx_packed_avx.h index 8cb71f3fe6..584ac72f95 100644 --- a/drivers/net/virtio/virtio_rxtx_packed_avx.h +++ b/drivers/net/virtio/virtio_rxtx_packed_avx.h @@ -192,7 +192,7 @@ virtqueue_dequeue_batch_packed_vec(struct virtnet_rx *rxvq,
/* * load len from desc, store into mbuf pkt_len and data_len - * len limiated by l6bit buf_len, pkt_len[16:31] can be ignored + * len limited by 16bit buf_len, pkt_len[16:31] can be ignored */ const __mmask16 mask = 0x6 | 0x6 << 4 | 0x6 << 8 | 0x6 << 12; __m512i values = _mm512_maskz_shuffle_epi32(mask, v_desc, 0xAA); diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c index 65bf792eb0..c98d696e62 100644 --- a/drivers/net/virtio/virtqueue.c +++ b/drivers/net/virtio/virtqueue.c @@ -13,7 +13,7 @@ /* * Two types of mbuf to be cleaned: * 1) mbuf that has been consumed by backend but not used by virtio. - * 2) mbuf that hasn't been consued by backend. + * 2) mbuf that hasn't been consumed by backend. */ struct rte_mbuf * virtqueue_detach_unused(struct virtqueue *vq) diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 855f57a956..99c68cf622 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -227,7 +227,7 @@ struct virtio_net_ctrl_rss { * Control link announce acknowledgement * * The command VIRTIO_NET_CTRL_ANNOUNCE_ACK is used to indicate that - * driver has recevied the notification; device would clear the + * driver has received the notification; device would clear the * VIRTIO_NET_S_ANNOUNCE bit in the status field after it receives * this command. */ @@ -312,7 +312,7 @@ struct virtqueue { struct vq_desc_extra vq_descx[0]; };
-/* If multiqueue is provided by host, then we suppport it. */ +/* If multiqueue is provided by host, then we support it. */ #define VIRTIO_NET_CTRL_MQ 4
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET 0 diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c index de26d2aef3..ebc2cd5d0d 100644 --- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c +++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c @@ -653,7 +653,7 @@ dpdmai_dev_dequeue_multijob_prefetch( rte_prefetch0((void *)(size_t)(dq_storage + 1));
/* Prepare next pull descriptor. This will give space for the - * prefething done on DQRR entries + * prefetching done on DQRR entries */ q_storage->toggle ^= 1; dq_storage1 = q_storage->dq_storage[q_storage->toggle]; diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/dpaa2_qdma.h index d6f6bb5522..1973d5d2b2 100644 --- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.h +++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.h @@ -82,7 +82,7 @@ struct qdma_device { /** total number of hw queues. */ uint16_t num_hw_queues; /** - * Maximum number of hw queues to be alocated per core. + * Maximum number of hw queues to be allocated per core. * This is limited by MAX_HW_QUEUE_PER_CORE */ uint16_t max_hw_queues_per_core; @@ -268,7 +268,7 @@ struct dpaa2_dpdmai_dev { struct fsl_mc_io dpdmai; /** HW ID for DPDMAI object */ uint32_t dpdmai_id; - /** Tocken of this device */ + /** Token of this device */ uint16_t token; /** Number of queue in this DPDMAI device */ uint8_t num_queues; diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c index 8d9db585a4..0eae0c9477 100644 --- a/drivers/raw/ifpga/ifpga_rawdev.c +++ b/drivers/raw/ifpga/ifpga_rawdev.c @@ -382,7 +382,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
if (HIGH_WARN(sensor, value) || LOW_WARN(sensor, value)) { - IFPGA_RAWDEV_PMD_INFO("%s reach theshold %d\n", + IFPGA_RAWDEV_PMD_INFO("%s reach threshold %d\n", sensor->name, value); *gsd_start = true; break; @@ -393,7 +393,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev, if (!strcmp(sensor->name, "12V AUX Voltage")) { if (value < AUX_VOLTAGE_WARN) { IFPGA_RAWDEV_PMD_INFO( - "%s reach theshold %d mV\n", + "%s reach threshold %d mV\n", sensor->name, value); *gsd_start = true; break; @@ -441,12 +441,12 @@ static int set_surprise_link_check_aer( pos = ifpga_pci_find_ext_capability(fd, RTE_PCI_EXT_CAP_ID_ERR); if (!pos) goto end; - /* save previout ECAP_AER+0x08 */ + /* save previous ECAP_AER+0x08 */ ret = pread(fd, &data, sizeof(data), pos+0x08); if (ret == -1) goto end; ifpga_rdev->aer_old[0] = data; - /* save previout ECAP_AER+0x14 */ + /* save previous ECAP_AER+0x14 */ ret = pread(fd, &data, sizeof(data), pos+0x14); if (ret == -1) goto end; @@ -531,7 +531,7 @@ ifpga_monitor_start_func(void) ifpga_rawdev_gsd_handle, NULL); if (ret != 0) { IFPGA_RAWDEV_PMD_ERR( - "Fail to create ifpga nonitor thread"); + "Fail to create ifpga monitor thread"); return -1; } ifpga_monitor_start = 1; diff --git a/drivers/raw/ntb/ntb.h b/drivers/raw/ntb/ntb.h index cdf7667d5d..c9ff33aa59 100644 --- a/drivers/raw/ntb/ntb.h +++ b/drivers/raw/ntb/ntb.h @@ -95,7 +95,7 @@ enum ntb_spad_idx { * @spad_write: Write val to local/peer spad register. * @db_read: Read doorbells status. * @db_clear: Clear local doorbells. - * @db_set_mask: Set bits in db mask, preventing db interrpts generated + * @db_set_mask: Set bits in db mask, preventing db interrupts generated * for those db bits. * @peer_db_set: Set doorbell bit to generate peer interrupt for that bit. * @vector_bind: Bind vector source [intr] to msix vector [msix]. 
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c index b1b9053bff..130d201a85 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c @@ -160,7 +160,7 @@ mlx5_vdpa_vhost_mem_regions_prepare(int vid, uint8_t *mode, uint64_t *mem_size, * The target here is to group all the physical memory regions of the * virtio device in one indirect mkey. * For KLM Fixed Buffer Size mode (HW find the translation entry in one - * read according to the guest phisical address): + * read according to the guest physical address): * All the sub-direct mkeys of it must be in the same size, hence, each * one of them should be in the GCD size of all the virtio memory * regions and the holes between them. diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index db971bad48..2f32aef67f 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -403,7 +403,7 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv) if (priv->features & (1ULL << VIRTIO_F_RING_PACKED)) { if (!(priv->caps.virtio_queue_type & (1 << MLX5_VIRTQ_TYPE_PACKED))) { - DRV_LOG(ERR, "Failed to configur PACKED mode for vdev " + DRV_LOG(ERR, "Failed to configure PACKED mode for vdev " "%d - it was not reported by HW/driver" " capability.", priv->vid); return -ENOTSUP; diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c index ecafc5e4f1..fc7e8b8174 100644 --- a/examples/bbdev_app/main.c +++ b/examples/bbdev_app/main.c @@ -372,7 +372,7 @@ add_awgn(struct rte_mbuf **mbufs, uint16_t num_pkts) /* Encoder output to Decoder input adapter. The Decoder accepts only soft input * so each bit of the encoder output must be translated into one byte of LLR. If * Sub-block Deinterleaver is bypassed, which is the case, the padding bytes - * must additionally be insterted at the end of each sub-block. + * must additionally be inserted at the end of each sub-block. 
*/ static inline void transform_enc_out_dec_in(struct rte_mbuf **mbufs, uint8_t *temp_buf, diff --git a/examples/bond/main.c b/examples/bond/main.c index 1087b0dad1..335bde5c8d 100644 --- a/examples/bond/main.c +++ b/examples/bond/main.c @@ -230,7 +230,7 @@ bond_port_init(struct rte_mempool *mbuf_pool) 0 /*SOCKET_ID_ANY*/); if (retval < 0) rte_exit(EXIT_FAILURE, - "Faled to create bond port\n"); + "Failed to create bond port\n");
BOND_PORT = retval;
@@ -405,7 +405,7 @@ static int lcore_main(__rte_unused void *arg1) struct rte_ether_hdr *); ether_type = eth_hdr->ether_type; if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) - printf("VLAN taged frame, offset:"); + printf("VLAN tagged frame, offset:"); offset = get_vlan_offset(eth_hdr, &ether_type); if (offset > 0) printf("%d\n", offset);
-/* hardare copy mode enabled by default. */ +/* hardware copy mode enabled by default. */ static copy_mode_t copy_mode = COPY_MODE_DMA_NUM;
/* size of descriptor ring for hardware copy mode or diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c index 86286d38a6..ffaad96498 100644 --- a/examples/ethtool/lib/rte_ethtool.c +++ b/examples/ethtool/lib/rte_ethtool.c @@ -402,7 +402,7 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id) #endif }
- /* Enable Rx vlan filter, VF unspport status is discard */ + /* Enable Rx vlan filter, VF unsupported status is discarded */ ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK); if (ret != 0) return ret; diff --git a/examples/ethtool/lib/rte_ethtool.h b/examples/ethtool/lib/rte_ethtool.h index f177096636..d27e0102b1 100644 --- a/examples/ethtool/lib/rte_ethtool.h +++ b/examples/ethtool/lib/rte_ethtool.h @@ -189,7 +189,7 @@ int rte_ethtool_get_module_eeprom(uint16_t port_id,
/** * Retrieve the Ethernet device pause frame configuration according to - * parameter attributes desribed by ethtool data structure, + * parameter attributes described by ethtool data structure, * ethtool_pauseparam. * * @param port_id @@ -209,7 +209,7 @@ int rte_ethtool_get_pauseparam(uint16_t port_id,
/** * Setting the Ethernet device pause frame configuration according to - * parameter attributes desribed by ethtool data structure, ethtool_pauseparam. + * parameter attributes described by ethtool data structure, ethtool_pauseparam. * * @param port_id * The port identifier of the Ethernet device. diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c index fb3cac3bd0..6e4c11c3c7 100644 --- a/examples/ip_reassembly/main.c +++ b/examples/ip_reassembly/main.c @@ -244,7 +244,7 @@ static struct rte_lpm6 *socket_lpm6[RTE_MAX_NUMA_NODES]; #endif /* RTE_LIBRTE_IP_FRAG_TBL_STAT */
/* - * If number of queued packets reached given threahold, then + * If number of queued packets reached given threshold, then * send burst of packets on an output interface. */ static inline uint32_t @@ -873,11 +873,11 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
/* * At any given moment up to <max_flow_num * (MAX_FRAG_NUM)> - * mbufs could be stored int the fragment table. + * mbufs could be stored in the fragment table. * Plus, each TX queue can hold up to <max_flow_num> packets. */
- /* mbufs stored int the gragment table. 8< */ + /* mbufs stored in the fragment table. 8< */ nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM; nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + BUF_SIZE - 1) / BUF_SIZE; @@ -895,7 +895,7 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue) "rte_pktmbuf_pool_create(%s) failed", buf); return -1; } - /* >8 End of mbufs stored int the fragmentation table. */ + /* >8 End of mbufs stored in the fragmentation table. */
return 0; } diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index e8600f5e90..24b210add4 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1353,7 +1353,7 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) for (i = 0; i < nb_rx_adapter; i++) { adapter = &(em_conf->rx_adapter[i]); sprintf(print_buf, - "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", + "\tRx adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", adapter->adapter_id, adapter->nb_connections, adapter->eventdev_id); diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index bf3dbf6b5c..96916cd3c5 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -265,7 +265,7 @@ struct socket_ctx socket_ctx[NB_SOCKETS]; /* * Determine is multi-segment support required: * - either frame buffer size is smaller then mtu - * - or reassmeble support is requested + * - or reassemble support is requested */ static int multi_seg_required(void) @@ -2050,7 +2050,7 @@ add_mapping(struct rte_hash *map, const char *str, uint16_t cdev_id,
ret = rte_hash_add_key_data(map, &key, (void *)i); if (ret < 0) { - printf("Faled to insert cdev mapping for (lcore %u, " + printf("Failed to insert cdev mapping for (lcore %u, " "cdev %u, qp %u), errno %d\n", key.lcore_id, ipsec_ctx->tbl[i].id, ipsec_ctx->tbl[i].qp, ret); @@ -2083,7 +2083,7 @@ add_cdev_mapping(struct rte_cryptodev_info *dev_info, uint16_t cdev_id, str = "Inbound"; }
- /* Required cryptodevs with operation chainning */ + /* Required cryptodevs with operation chaining */ if (!(dev_info->feature_flags & RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING)) return ret; @@ -2251,7 +2251,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads) "Error during getting device (port %u) info: %s\n", portid, strerror(-ret));
- /* limit allowed HW offloafs, as user requested */ + /* limit allowed HW offloads, as user requested */ dev_info.rx_offload_capa &= dev_rx_offload; dev_info.tx_offload_capa &= dev_tx_offload;
@@ -2298,7 +2298,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads) local_port_conf.rxmode.offloads) rte_exit(EXIT_FAILURE, "Error: port %u required RX offloads: 0x%" PRIx64 - ", avaialbe RX offloads: 0x%" PRIx64 "\n", + ", available RX offloads: 0x%" PRIx64 "\n", portid, local_port_conf.rxmode.offloads, dev_info.rx_offload_capa);
@@ -2306,7 +2306,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads) local_port_conf.txmode.offloads) rte_exit(EXIT_FAILURE, "Error: port %u required TX offloads: 0x%" PRIx64 - ", avaialbe TX offloads: 0x%" PRIx64 "\n", + ", available TX offloads: 0x%" PRIx64 "\n", portid, local_port_conf.txmode.offloads, dev_info.tx_offload_capa);
@@ -2317,7 +2317,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads) if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
- printf("port %u configurng rx_offloads=0x%" PRIx64 + printf("port %u configuring rx_offloads=0x%" PRIx64 ", tx_offloads=0x%" PRIx64 "\n", portid, local_port_conf.rxmode.offloads, local_port_conf.txmode.offloads); diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index 30bc693e06..1839ac71af 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -897,7 +897,7 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, continue; }
- /* unrecognizeable input */ + /* unrecognizable input */ APP_CHECK(0, status, "unrecognized input \"%s\"", tokens[ti]); return; @@ -1145,7 +1145,7 @@ get_spi_proto(uint32_t spi, enum rte_security_ipsec_sa_direction dir, if (rc4 >= 0) { if (rc6 >= 0) { RTE_LOG(ERR, IPSEC, - "%s: SPI %u used simultaeously by " + "%s: SPI %u used simultaneously by " "IPv4(%d) and IPv6 (%d) SP rules\n", __func__, spi, rc4, rc6); return -EINVAL; @@ -1550,7 +1550,7 @@ ipsec_sa_init(struct ipsec_sa *lsa, struct rte_ipsec_sa *sa, uint32_t sa_size) }
/* - * Allocate space and init rte_ipsec_sa strcutures, + * Allocate space and init rte_ipsec_sa structures, * one per session. */ static int diff --git a/examples/ipsec-secgw/sp4.c b/examples/ipsec-secgw/sp4.c index beddd7bc1d..fc4101a4a2 100644 --- a/examples/ipsec-secgw/sp4.c +++ b/examples/ipsec-secgw/sp4.c @@ -410,7 +410,7 @@ parse_sp4_tokens(char **tokens, uint32_t n_tokens, continue; }
- /* unrecognizeable input */ + /* unrecognizable input */ APP_CHECK(0, status, "unrecognized input \"%s\"", tokens[ti]); return; diff --git a/examples/ipsec-secgw/sp6.c b/examples/ipsec-secgw/sp6.c index 328e085288..cce4da7862 100644 --- a/examples/ipsec-secgw/sp6.c +++ b/examples/ipsec-secgw/sp6.c @@ -515,7 +515,7 @@ parse_sp6_tokens(char **tokens, uint32_t n_tokens, continue; }
- /* unrecognizeable input */ + /* unrecognizable input */ APP_CHECK(0, status, "unrecognized input \"%s\"", tokens[ti]); return; diff --git a/examples/ipsec-secgw/test/common_defs.sh b/examples/ipsec-secgw/test/common_defs.sh index f22eb3ab12..3ef06bc761 100644 --- a/examples/ipsec-secgw/test/common_defs.sh +++ b/examples/ipsec-secgw/test/common_defs.sh @@ -20,7 +20,7 @@ REMOTE_MAC=`ssh ${REMOTE_HOST} ip addr show dev ${REMOTE_IFACE}` st=$? REMOTE_MAC=`echo ${REMOTE_MAC} | sed -e 's/^.*ether //' -e 's/ brd.*$//'` if [[ $st -ne 0 || -z "${REMOTE_MAC}" ]]; then - echo "coouldn't retrieve ether addr from ${REMOTE_IFACE}" + echo "couldn't retrieve ether addr from ${REMOTE_IFACE}" exit 127 fi
@@ -40,7 +40,7 @@ DPDK_VARS=""
# by default ipsec-secgw can't deal with multi-segment packets # make sure our local/remote host wouldn't generate fragmented packets -# if reassmebly option is not enabled +# if reassembly option is not enabled DEF_MTU_LEN=1400 DEF_PING_LEN=1200
diff --git a/examples/kni/main.c b/examples/kni/main.c index d324ee2241..f5b20a7b62 100644 --- a/examples/kni/main.c +++ b/examples/kni/main.c @@ -1039,7 +1039,7 @@ main(int argc, char** argv) pthread_t kni_link_tid; int pid;
- /* Associate signal_hanlder function with USR signals */ + /* Associate signal_handler function with USR signals */ signal(SIGUSR1, signal_handler); signal(SIGUSR2, signal_handler); signal(SIGRTMIN, signal_handler); diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c index d9cf00c9df..6e16705e99 100644 --- a/examples/l2fwd-cat/l2fwd-cat.c +++ b/examples/l2fwd-cat/l2fwd-cat.c @@ -157,7 +157,7 @@ main(int argc, char *argv[]) int ret = rte_eal_init(argc, argv); if (ret < 0) rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); - /* >8 End of initializion the Environment Abstraction Layer (EAL). */ + /* >8 End of initialization the Environment Abstraction Layer (EAL). */
argc -= ret; argv += ret; diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c index f31569a744..1977e23261 100644 --- a/examples/l2fwd-event/l2fwd_event_generic.c +++ b/examples/l2fwd-event/l2fwd_event_generic.c @@ -42,7 +42,7 @@ l2fwd_event_device_setup_generic(struct l2fwd_resources *rsrc) ethdev_count++; }
- /* Event device configurtion */ + /* Event device configuration */ rte_event_dev_info_get(event_d_id, &dev_info);
/* Enable implicit release */ diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c index 86d772d817..717a7bceb8 100644 --- a/examples/l2fwd-event/l2fwd_event_internal_port.c +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c @@ -40,7 +40,7 @@ l2fwd_event_device_setup_internal_port(struct l2fwd_resources *rsrc) ethdev_count++; }
- /* Event device configurtion */ + /* Event device configuration */ rte_event_dev_info_get(event_d_id, &dev_info);
/* Enable implicit release */ diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c index d8eabe4c86..9e71ba2d4e 100644 --- a/examples/l2fwd-jobstats/main.c +++ b/examples/l2fwd-jobstats/main.c @@ -468,7 +468,7 @@ l2fwd_flush_job(__rte_unused struct rte_timer *timer, __rte_unused void *arg) qconf->next_flush_time[portid] = rte_get_timer_cycles() + drain_tsc; }
- /* Pass target to indicate that this job is happy of time interwal + /* Pass target to indicate that this job is happy of time interval * in which it was called. */ rte_jobstats_finish(&qconf->flush_job, qconf->flush_job.target); } diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c index 1fb1807235..2d2ecc7635 100644 --- a/examples/l3fwd-acl/main.c +++ b/examples/l3fwd-acl/main.c @@ -801,8 +801,8 @@ send_packets(struct rte_mbuf **m, uint32_t *res, int num) }
/* - * Parses IPV6 address, exepcts the following format: - * XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX (where X - is a hexedecimal digit). + * Parse IPv6 address, expects the following format: + * XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX (where X is a hexadecimal digit). */ static int parse_ipv6_addr(const char *in, const char **end, uint32_t v[IPV6_ADDR_U32], @@ -1959,7 +1959,7 @@ check_all_ports_link_status(uint32_t port_mask) }
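The corrected comment above documents the fixed-width IPv6 notation that l3fwd-acl's parser expects. As a rough illustration of parsing that format only (a standalone sketch with hypothetical names, not the actual `parse_ipv6_addr()` helper, which also handles abbreviated forms and masks):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper: parse a full-form IPv6 address
 * "XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX" into eight 16-bit words.
 * Returns 0 on success, -1 on malformed input. Unlike inet_pton(),
 * this sketch accepts only the non-abbreviated format shown in the
 * comment (no "::" compression). */
static int
parse_ipv6_full(const char *in, uint16_t v[8])
{
	char *end;
	int i;

	for (i = 0; i < 8; i++) {
		unsigned long w = strtoul(in, &end, 16);

		/* each group is 1-4 hex digits and must fit in 16 bits */
		if (end == in || w > UINT16_MAX)
			return -1;
		/* groups are ':'-separated; the last one ends the string */
		if (i < 7 ? *end != ':' : *end != '\0')
			return -1;
		v[i] = (uint16_t)w;
		in = end + 1;
	}
	return 0;
}
```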
/* - * build-up default vaues for dest MACs. + * build-up default values for dest MACs. */ static void set_default_dest_mac(void) diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c index b8b3be2b8a..20e5b59af9 100644 --- a/examples/l3fwd-power/main.c +++ b/examples/l3fwd-power/main.c @@ -433,7 +433,7 @@ signal_exit_now(int sigtype)
}
-/* Freqency scale down timer callback */ +/* Frequency scale down timer callback */ static void power_timer_cb(__rte_unused struct rte_timer *tim, __rte_unused void *arg) @@ -2358,7 +2358,7 @@ update_telemetry(__rte_unused struct rte_timer *tim, ret = rte_metrics_update_values(RTE_METRICS_GLOBAL, telstats_index, values, RTE_DIM(values)); if (ret < 0) - RTE_LOG(WARNING, POWER, "failed to update metrcis\n"); + RTE_LOG(WARNING, POWER, "failed to update metrics\n"); }
static int diff --git a/examples/l3fwd/l3fwd_common.h b/examples/l3fwd/l3fwd_common.h index 7d83ff641a..cbaab79f5b 100644 --- a/examples/l3fwd/l3fwd_common.h +++ b/examples/l3fwd/l3fwd_common.h @@ -51,7 +51,7 @@ rfc1812_process(struct rte_ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype) #endif /* DO_RFC_1812_CHECKS */
/* - * We group consecutive packets with the same destionation port into one burst. + * We group consecutive packets with the same destination port into one burst. * To avoid extra latency this is done together with some other packet * processing, but after we made a final decision about packet's destination. * To do this we maintain: @@ -76,7 +76,7 @@ rfc1812_process(struct rte_ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
static const struct { uint64_t pnum; /* prebuild 4 values for pnum[]. */ - int32_t idx; /* index for new last updated elemnet. */ + int32_t idx; /* index for new last updated element. */ uint16_t lpv; /* add value to the last updated element. */ } gptbl[GRPSZ] = { { diff --git a/examples/l3fwd/l3fwd_neon.h b/examples/l3fwd/l3fwd_neon.h index 86ac5971d7..e3d33a5229 100644 --- a/examples/l3fwd/l3fwd_neon.h +++ b/examples/l3fwd/l3fwd_neon.h @@ -64,7 +64,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
/* * Group consecutive packets with the same destination port in bursts of 4. - * Suppose we have array of destionation ports: + * Suppose we have array of destination ports: * dst_port[] = {a, b, c, d,, e, ... } * dp1 should contain: <a, b, c, d>, dp2: <b, c, d, e>. * We doing 4 comparisons at once and the result is 4 bit mask. diff --git a/examples/l3fwd/l3fwd_sse.h b/examples/l3fwd/l3fwd_sse.h index bb565ed546..d5a717e18c 100644 --- a/examples/l3fwd/l3fwd_sse.h +++ b/examples/l3fwd/l3fwd_sse.h @@ -64,7 +64,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
/* * Group consecutive packets with the same destination port in bursts of 4. - * Suppose we have array of destionation ports: + * Suppose we have array of destination ports: * dst_port[] = {a, b, c, d,, e, ... } * dp1 should contain: <a, b, c, d>, dp2: <b, c, d, e>. * We doing 4 comparisons at once and the result is 4 bit mask. diff --git a/examples/multi_process/hotplug_mp/commands.c b/examples/multi_process/hotplug_mp/commands.c index 48fd329583..41ea265e45 100644 --- a/examples/multi_process/hotplug_mp/commands.c +++ b/examples/multi_process/hotplug_mp/commands.c @@ -175,7 +175,7 @@ static void cmd_dev_detach_parsed(void *parsed_result, cmdline_printf(cl, "detached device %s\n", da.name); else - cmdline_printf(cl, "failed to dettach device %s\n", + cmdline_printf(cl, "failed to detach device %s\n", da.name); rte_devargs_reset(&da); } diff --git a/examples/multi_process/simple_mp/main.c b/examples/multi_process/simple_mp/main.c index 5df2a39000..9d5f1088b0 100644 --- a/examples/multi_process/simple_mp/main.c +++ b/examples/multi_process/simple_mp/main.c @@ -4,7 +4,7 @@
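The SSE and NEON comments fixed above both describe the same idea: runs of consecutive packets with equal destination ports are detected so each run can be sent in one Tx burst. Stripped of the SIMD compares and the precomputed `gptbl[]` lookup, the grouping can be sketched in scalar C (hypothetical names, illustration only):

```c
#include <stdint.h>

/* Hypothetical scalar version of the grouping described in the comments:
 * walk dst_port[] and record, at each run's first index in pnum[], the
 * length of that run of equal consecutive ports. Returns the number of
 * runs. The real code builds the same information four packets at a time
 * with SIMD compares whose 4-bit result mask indexes a prebuilt table. */
static uint16_t
group_ports_scalar(const uint16_t *dst_port, uint16_t *pnum, uint16_t nb)
{
	uint16_t i, start = 0, runs = 0;

	for (i = 1; i <= nb; i++) {
		if (i == nb || dst_port[i] != dst_port[start]) {
			pnum[start] = i - start; /* burst length of this run */
			start = i;
			runs++;
		}
	}
	return runs;
}
```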
/* * This sample application is a simple multi-process application which - * demostrates sharing of queues and memory pools between processes, and + * demonstrates sharing of queues and memory pools between processes, and * using those queues/pools for communication between the processes. * * Application is designed to run with two processes, a primary and a diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c index b35886a77b..050337765f 100644 --- a/examples/multi_process/symmetric_mp/main.c +++ b/examples/multi_process/symmetric_mp/main.c @@ -3,7 +3,7 @@ */
/* - * Sample application demostrating how to do packet I/O in a multi-process + * Sample application demonstrating how to do packet I/O in a multi-process * environment. The same code can be run as a primary process and as a * secondary process, just with a different proc-id parameter in each case * (apart from the EAL flag to indicate a secondary process). diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c index f110fc129f..81964d0308 100644 --- a/examples/ntb/ntb_fwd.c +++ b/examples/ntb/ntb_fwd.c @@ -696,7 +696,7 @@ assign_stream_to_lcores(void) break; }
- /* Print packet forwading config. */ + /* Print packet forwarding config. */ RTE_LCORE_FOREACH_WORKER(lcore_id) { conf = &fwd_lcore_conf[lcore_id];
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c index b01ac60fd1..99e67ef67b 100644 --- a/examples/packet_ordering/main.c +++ b/examples/packet_ordering/main.c @@ -686,7 +686,7 @@ main(int argc, char **argv) if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid packet_ordering arguments\n");
- /* Check if we have enought cores */ + /* Check if we have enough cores */ if (rte_lcore_count() < 3) rte_exit(EXIT_FAILURE, "Error, This application needs at " "least 3 logical cores to run:\n" diff --git a/examples/performance-thread/common/lthread.c b/examples/performance-thread/common/lthread.c index 009374a8c3..b02e0fc13a 100644 --- a/examples/performance-thread/common/lthread.c +++ b/examples/performance-thread/common/lthread.c @@ -178,7 +178,7 @@ lthread_create(struct lthread **new_lt, int lcore_id, bzero(lt, sizeof(struct lthread)); lt->root_sched = THIS_SCHED;
- /* set the function args and exit handlder */ + /* set the function args and exit handler */ _lthread_init(lt, fun, arg, _lthread_exit_handler);
/* put it in the ready queue */ @@ -384,7 +384,7 @@ void lthread_exit(void *ptr) }
- /* wait until the joinging thread has collected the exit value */ + /* wait until the joining thread has collected the exit value */ while (lt->join != LT_JOIN_EXIT_VAL_READ) _reschedule();
@@ -410,7 +410,7 @@ int lthread_join(struct lthread *lt, void **ptr) /* invalid to join a detached thread, or a thread that is joined */ if ((lt_state & BIT(ST_LT_DETACH)) || (lt->join == LT_JOIN_THREAD_SET)) return POSIX_ERRNO(EINVAL); - /* pointer to the joining thread and a poingter to return a value */ + /* pointer to the joining thread and a pointer to return a value */ lt->lt_join = current; current->lt_exit_ptr = ptr; /* There is a race between lthread_join() and lthread_exit() diff --git a/examples/performance-thread/common/lthread_diag.c b/examples/performance-thread/common/lthread_diag.c index 57760a1e23..b1bdf7a30c 100644 --- a/examples/performance-thread/common/lthread_diag.c +++ b/examples/performance-thread/common/lthread_diag.c @@ -232,7 +232,7 @@ lthread_sched_stats_display(void) }
/* - * Defafult diagnostic callback + * Default diagnostic callback */ static uint64_t _lthread_diag_default_cb(uint64_t time, struct lthread *lt, int diag_event, diff --git a/examples/performance-thread/common/lthread_int.h b/examples/performance-thread/common/lthread_int.h index d010126f16..ec018e34a1 100644 --- a/examples/performance-thread/common/lthread_int.h +++ b/examples/performance-thread/common/lthread_int.h @@ -107,7 +107,7 @@ enum join_st { LT_JOIN_EXIT_VAL_READ, /* joining thread has collected ret val */ };
-/* defnition of an lthread stack object */ +/* definition of an lthread stack object */ struct lthread_stack { uint8_t stack[LTHREAD_MAX_STACK_SIZE]; size_t stack_size; diff --git a/examples/performance-thread/common/lthread_tls.c b/examples/performance-thread/common/lthread_tls.c index 4ab2e3558b..bae45f2aa9 100644 --- a/examples/performance-thread/common/lthread_tls.c +++ b/examples/performance-thread/common/lthread_tls.c @@ -215,7 +215,7 @@ void _lthread_tls_alloc(struct lthread *lt) tls->root_sched = (THIS_SCHED); lt->tls = tls;
- /* allocate data for TLS varaiables using RTE_PER_LTHREAD macros */ + /* allocate data for TLS variables using RTE_PER_LTHREAD macros */ if (sizeof(void *) < (uint64_t)RTE_PER_LTHREAD_SECTION_SIZE) { lt->per_lthread_data = _lthread_objcache_alloc((THIS_SCHED)->per_lthread_cache); diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c index 8a35040597..1ddb2a9138 100644 --- a/examples/performance-thread/l3fwd-thread/main.c +++ b/examples/performance-thread/l3fwd-thread/main.c @@ -125,7 +125,7 @@ cb_parse_ptype(__rte_unused uint16_t port, __rte_unused uint16_t queue, }
/* - * When set to zero, simple forwaring path is eanbled. + * When set to zero, simple forwarding path is enabled. * When set to one, optimized forwarding path is enabled. * Note that LPM optimisation path uses SSE4.1 instructions. */ @@ -1529,7 +1529,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP]) }
/* - * We group consecutive packets with the same destionation port into one burst. + * We group consecutive packets with the same destination port into one burst. * To avoid extra latency this is done together with some other packet * processing, but after we made a final decision about packet's destination. * To do this we maintain: @@ -1554,7 +1554,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
/* * Group consecutive packets with the same destination port in bursts of 4. - * Suppose we have array of destionation ports: + * Suppose we have array of destination ports: * dst_port[] = {a, b, c, d,, e, ... } * dp1 should contain: <a, b, c, d>, dp2: <b, c, d, e>. * We doing 4 comparisons at once and the result is 4 bit mask. @@ -1565,7 +1565,7 @@ port_groupx4(uint16_t pn[FWDSTEP + 1], uint16_t *lp, __m128i dp1, __m128i dp2) { static const struct { uint64_t pnum; /* prebuild 4 values for pnum[]. */ - int32_t idx; /* index for new last updated elemnet. */ + int32_t idx; /* index for new last updated element. */ uint16_t lpv; /* add value to the last updated element. */ } gptbl[GRPSZ] = { { @@ -1834,7 +1834,7 @@ process_burst(struct rte_mbuf *pkts_burst[MAX_PKT_BURST], int nb_rx,
/* * Send packets out, through destination port. - * Consecuteve pacekts with the same destination port + * Consecutive packets with the same destination port * are already grouped together. * If destination port for the packet equals BAD_PORT, * then free the packet without sending it out. @@ -3514,7 +3514,7 @@ main(int argc, char **argv)
ret = rte_timer_subsystem_init(); if (ret < 0) - rte_exit(EXIT_FAILURE, "Failed to initialize timer subystem\n"); + rte_exit(EXIT_FAILURE, "Failed to initialize timer subsystem\n");
/* pre-init dst MACs for all ports to 02:00:00:00:00:xx */ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) { diff --git a/examples/performance-thread/pthread_shim/pthread_shim.h b/examples/performance-thread/pthread_shim/pthread_shim.h index e90fb15fc1..ce51627a5b 100644 --- a/examples/performance-thread/pthread_shim/pthread_shim.h +++ b/examples/performance-thread/pthread_shim/pthread_shim.h @@ -41,7 +41,7 @@ * * The decision whether to invoke the real library function or the lthread * function is controlled by a per pthread flag that can be switched - * on of off by the pthread_override_set() API described below. Typcially + * on of off by the pthread_override_set() API described below. Typically * this should be done as the first action of the initial lthread. * * N.B In general it would be poor practice to revert to invoke a real diff --git a/examples/pipeline/examples/registers.spec b/examples/pipeline/examples/registers.spec index 74a014ad06..59998fef03 100644 --- a/examples/pipeline/examples/registers.spec +++ b/examples/pipeline/examples/registers.spec @@ -4,7 +4,7 @@ ; This program is setting up two register arrays called "pkt_counters" and "byte_counters". ; On every input packet (Ethernet/IPv4), the "pkt_counters" register at location indexed by ; the IPv4 header "Source Address" field is incremented, while the same location in the -; "byte_counters" array accummulates the value of the IPv4 header "Total Length" field. +; "byte_counters" array accumulates the value of the IPv4 header "Total Length" field. ; ; The "regrd" and "regwr" CLI commands can be used to read and write the current value of ; any register array location. 
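The registers.spec comment fixed above describes two per-flow register arrays: a packet counter incremented per packet and a byte counter that accumulates the IPv4 Total Length. The bookkeeping it describes can be modeled in plain C (a toy model with hypothetical names and table size, not the pipeline register implementation):

```c
#include <stdint.h>

#define N_FLOWS 16 /* toy table size; the .spec indexes by source address */

/* Hypothetical model of the two register arrays described in the spec:
 * per-index packet count and accumulated byte count. */
static uint64_t pkt_counters[N_FLOWS];
static uint64_t byte_counters[N_FLOWS];

/* On each packet: derive an index from the IPv4 source address, bump the
 * packet counter, and accumulate the header's Total Length field. */
static void
account_packet(uint32_t ipv4_src, uint16_t total_length)
{
	uint32_t idx = ipv4_src % N_FLOWS;

	pkt_counters[idx] += 1;
	byte_counters[idx] += total_length;
}
```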
diff --git a/examples/qos_sched/cmdline.c b/examples/qos_sched/cmdline.c index 257b87a7cf..6691b02d89 100644 --- a/examples/qos_sched/cmdline.c +++ b/examples/qos_sched/cmdline.c @@ -41,7 +41,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result, " qavg port X subport Y pipe Z : Show average queue size per pipe.\n" " qavg port X subport Y pipe Z tc A : Show average queue size per pipe and TC.\n" " qavg port X subport Y pipe Z tc A q B : Show average queue size of a specific queue.\n" - " qavg [n|period] X : Set number of times and peiod (us).\n\n" + " qavg [n|period] X : Set number of times and period (us).\n\n" );
} diff --git a/examples/server_node_efd/node/node.c b/examples/server_node_efd/node/node.c index ba1c7e5153..fc2aa5ffef 100644 --- a/examples/server_node_efd/node/node.c +++ b/examples/server_node_efd/node/node.c @@ -296,7 +296,7 @@ handle_packets(struct rte_hash *h, struct rte_mbuf **bufs, uint16_t num_packets) } } } -/* >8 End of packets dequeueing. */ +/* >8 End of packets dequeuing. */
/* * Application main function - loops through diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c index 16435ee3cc..518cd72179 100644 --- a/examples/skeleton/basicfwd.c +++ b/examples/skeleton/basicfwd.c @@ -179,7 +179,7 @@ main(int argc, char *argv[]) int ret = rte_eal_init(argc, argv); if (ret < 0) rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); - /* >8 End of initializion the Environment Abstraction Layer (EAL). */ + /* >8 End of initialization the Environment Abstraction Layer (EAL). */
argc -= ret; argv += ret; diff --git a/examples/vhost/main.c b/examples/vhost/main.c index 5ebfff3ac4..d05a8f9193 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -107,7 +107,7 @@ static uint32_t burst_rx_retry_num = BURST_RX_RETRIES; static char *socket_files; static int nb_sockets;
-/* empty vmdq configuration structure. Filled in programatically */ +/* empty VMDq configuration structure. Filled in programmatically */ static struct rte_eth_conf vmdq_conf_default = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY, @@ -115,7 +115,7 @@ static struct rte_eth_conf vmdq_conf_default = { /* * VLAN strip is necessary for 1G NIC such as I350, * this fixes bug of ipv4 forwarding in guest can't - * forward pakets from one virtio dev to another virtio dev. + * forward packets from one virtio dev to another virtio dev. */ .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP, }, @@ -463,7 +463,7 @@ us_vhost_usage(const char *prgname) " --nb-devices ND\n" " -p PORTMASK: Set mask for ports to be used by application\n" " --vm2vm [0|1|2]: disable/software(default)/hardware vm2vm comms\n" - " --rx-retry [0|1]: disable/enable(default) retries on rx. Enable retry if destintation queue is full\n" + " --rx-retry [0|1]: disable/enable(default) retries on Rx. Enable retry if destination queue is full\n" " --rx-retry-delay [0-N]: timeout(in usecond) between retries on RX. This makes effect only if retries on rx enabled\n" " --rx-retry-num [0-N]: the number of retries on rx. This makes effect only if retries on rx enabled\n" " --mergeable [0|1]: disable(default)/enable RX mergeable buffers\n" @@ -1288,7 +1288,7 @@ switch_worker(void *arg __rte_unused) struct vhost_dev *vdev; struct mbuf_table *tx_q;
- RTE_LOG(INFO, VHOST_DATA, "Procesing on Core %u started\n", lcore_id); + RTE_LOG(INFO, VHOST_DATA, "Processing on Core %u started\n", lcore_id);
tx_q = &lcore_tx_queue[lcore_id]; for (i = 0; i < rte_lcore_count(); i++) { @@ -1332,7 +1332,7 @@ switch_worker(void *arg __rte_unused)
/* * Remove a device from the specific data core linked list and from the - * main linked list. Synchonization occurs through the use of the + * main linked list. Synchronization occurs through the use of the * lcore dev_removal_flag. Device is made volatile here to avoid re-ordering * of dev->remove=1 which can cause an infinite loop in the rte_pause loop. */ diff --git a/examples/vm_power_manager/channel_monitor.c b/examples/vm_power_manager/channel_monitor.c index d767423a40..97b8def7ca 100644 --- a/examples/vm_power_manager/channel_monitor.c +++ b/examples/vm_power_manager/channel_monitor.c @@ -404,7 +404,7 @@ get_pcpu_to_control(struct policy *pol)
/* * So now that we're handling virtual and physical cores, we need to - * differenciate between them when adding them to the branch monitor. + * differentiate between them when adding them to the branch monitor. * Virtual cores need to be converted to physical cores. */ if (pol->pkt.core_type == RTE_POWER_CORE_TYPE_VIRTUAL) { diff --git a/examples/vm_power_manager/power_manager.h b/examples/vm_power_manager/power_manager.h index d35f8cbe01..d51039e2c6 100644 --- a/examples/vm_power_manager/power_manager.h +++ b/examples/vm_power_manager/power_manager.h @@ -224,7 +224,7 @@ int power_manager_enable_turbo_core(unsigned int core_num); int power_manager_disable_turbo_core(unsigned int core_num);
/** - * Get the current freuency of the core specified by core_num + * Get the current frequency of the core specified by core_num * * @param core_num * The core number to get the current frequency diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c index 2c00a942f1..10410b8783 100644 --- a/examples/vmdq/main.c +++ b/examples/vmdq/main.c @@ -62,7 +62,7 @@ static uint8_t rss_enable;
/* Default structure for VMDq. 8< */
-/* empty vmdq configuration structure. Filled in programatically */ +/* empty VMDq configuration structure. Filled in programmatically */ static const struct rte_eth_conf vmdq_conf_default = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY, diff --git a/kernel/linux/kni/kni_fifo.h b/kernel/linux/kni/kni_fifo.h index 5c91b55379..1ba5172002 100644 --- a/kernel/linux/kni/kni_fifo.h +++ b/kernel/linux/kni/kni_fifo.h @@ -41,7 +41,7 @@ kni_fifo_put(struct rte_kni_fifo *fifo, void **data, uint32_t num) }
/** - * Get up to num elements from the fifo. Return the number actully read + * Get up to num elements from the FIFO. Return the number actually read */ static inline uint32_t kni_fifo_get(struct rte_kni_fifo *fifo, void **data, uint32_t num) diff --git a/lib/acl/acl_bld.c b/lib/acl/acl_bld.c index f316d3e875..7ea30f4186 100644 --- a/lib/acl/acl_bld.c +++ b/lib/acl/acl_bld.c @@ -885,7 +885,7 @@ acl_gen_range_trie(struct acl_build_context *context, return root; }
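The corrected kni_fifo comment documents a get operation that may return fewer elements than requested. The contract can be sketched with a simplified single-producer/single-consumer ring (hypothetical names, power-of-two size, and no memory barriers; the kernel fifo additionally needs smp barriers between the index loads and the data accesses):

```c
#include <stdint.h>

#define FIFO_SIZE 8 /* must be a power of two so (x & (SIZE - 1)) wraps */

/* Hypothetical simplified FIFO: 'write' is the free-running producer
 * index, 'read' the free-running consumer index. */
struct toy_fifo {
	uint32_t write;
	uint32_t read;
	void *buffer[FIFO_SIZE];
};

/* Get up to num elements from the FIFO; return the number actually
 * read, which may be less than num, or zero if the FIFO is empty. */
static uint32_t
toy_fifo_get(struct toy_fifo *fifo, void **data, uint32_t num)
{
	uint32_t i;
	uint32_t new_read = fifo->read;

	for (i = 0; i < num; i++) {
		if (new_read == fifo->write)
			break; /* empty: nothing more to read */
		data[i] = fifo->buffer[new_read & (FIFO_SIZE - 1)];
		new_read++;
	}
	fifo->read = new_read;
	return i;
}
```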
- /* gather information about divirgent paths */ + /* gather information about divergent paths */ lo_00 = 0; hi_ff = UINT8_MAX; for (k = n - 1; k >= 0; k--) { diff --git a/lib/acl/acl_run_altivec.h b/lib/acl/acl_run_altivec.h index 2de6f27b1f..24a41eec17 100644 --- a/lib/acl/acl_run_altivec.h +++ b/lib/acl/acl_run_altivec.h @@ -146,7 +146,7 @@ transition4(xmm_t next_input, const uint64_t *trans,
dfa_ofs = vec_sub(t, r);
- /* QUAD/SINGLE caluclations. */ + /* QUAD/SINGLE calculations. */ t = (xmm_t)vec_cmpgt((vector signed char)in, (vector signed char)tr_hi); t = (xmm_t)vec_sel( vec_sel( diff --git a/lib/acl/acl_run_avx512.c b/lib/acl/acl_run_avx512.c index 78fbe34f7c..3b8795561b 100644 --- a/lib/acl/acl_run_avx512.c +++ b/lib/acl/acl_run_avx512.c @@ -64,7 +64,7 @@ update_flow_mask(const struct acl_flow_avx512 *flow, uint32_t *fmsk, }
/* - * Resolve matches for multiple categories (LE 8, use 128b instuctions/regs) + * Resolve matches for multiple categories (LE 8, use 128b instructions/regs) */ static inline void resolve_mcle8_avx512x1(uint32_t result[], diff --git a/lib/acl/acl_run_avx512x16.h b/lib/acl/acl_run_avx512x16.h index 48bb6fed85..f87293eeb7 100644 --- a/lib/acl/acl_run_avx512x16.h +++ b/lib/acl/acl_run_avx512x16.h @@ -10,7 +10,7 @@ */
/* - * This implementation uses 512-bit registers(zmm) and instrincts. + * This implementation uses 512-bit registers(zmm) and intrinsics. * So our main SIMD type is 512-bit width and each such variable can * process sizeof(__m512i) / sizeof(uint32_t) == 16 entries in parallel. */ @@ -25,20 +25,20 @@ #define _F_(x) x##_avx512x16
/* - * Same instrincts have different syntaxis (depending on the bit-width), + * Same intrinsics have different syntaxes (depending on the bit-width), * so to overcome that few macros need to be defined. */
-/* Naming convention for generic epi(packed integers) type instrincts. */ +/* Naming convention for generic epi(packed integers) type intrinsics. */ #define _M_I_(x) _mm512_##x
-/* Naming convention for si(whole simd integer) type instrincts. */ +/* Naming convention for si(whole simd integer) type intrinsics. */ #define _M_SI_(x) _mm512_##x##_si512
-/* Naming convention for masked gather type instrincts. */ +/* Naming convention for masked gather type intrinsics. */ #define _M_MGI_(x) _mm512_##x
-/* Naming convention for gather type instrincts. */ +/* Naming convention for gather type intrinsics. */ #define _M_GI_(name, idx, base, scale) _mm512_##name(idx, base, scale)
/* num/mask of transitions per SIMD regs */ @@ -239,7 +239,7 @@ _F_(gather_bytes)(__m512i zero, const __m512i p[2], const uint32_t m[2], }
/* - * Resolve matches for multiple categories (GT 8, use 512b instuctions/regs) + * Resolve matches for multiple categories (GT 8, use 512b instructions/regs) */ static inline void resolve_mcgt8_avx512x1(uint32_t result[], diff --git a/lib/acl/acl_run_avx512x8.h b/lib/acl/acl_run_avx512x8.h index 61ac9d1b47..5da2bbfdeb 100644 --- a/lib/acl/acl_run_avx512x8.h +++ b/lib/acl/acl_run_avx512x8.h @@ -10,7 +10,7 @@ */
/* - * This implementation uses 256-bit registers(ymm) and instrincts. + * This implementation uses 256-bit registers(ymm) and intrinsics. * So our main SIMD type is 256-bit width and each such variable can * process sizeof(__m256i) / sizeof(uint32_t) == 8 entries in parallel. */ @@ -25,20 +25,20 @@ #define _F_(x) x##_avx512x8
/* - * Same instrincts have different syntaxis (depending on the bit-width), + * Same intrinsics have different syntaxes (depending on the bit-width), * so to overcome that few macros need to be defined. */
-/* Naming convention for generic epi(packed integers) type instrincts. */ +/* Naming convention for generic epi(packed integers) type intrinsics. */ #define _M_I_(x) _mm256_##x
-/* Naming convention for si(whole simd integer) type instrincts. */ +/* Naming convention for si(whole simd integer) type intrinsics. */ #define _M_SI_(x) _mm256_##x##_si256
-/* Naming convention for masked gather type instrincts. */ +/* Naming convention for masked gather type intrinsics. */ #define _M_MGI_(x) _mm256_m##x
-/* Naming convention for gather type instrincts. */ +/* Naming convention for gather type intrinsics. */ #define _M_GI_(name, idx, base, scale) _mm256_##name(base, idx, scale)
/* num/mask of transitions per SIMD regs */ diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c index db84add7dc..9563274c9c 100644 --- a/lib/bpf/bpf_convert.c +++ b/lib/bpf/bpf_convert.c @@ -412,7 +412,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, BPF_EMIT_JMP; break;
- /* ldxb 4 * ([14] & 0xf) is remaped into 6 insns. */ + /* ldxb 4 * ([14] & 0xf) is remapped into 6 insns. */ case BPF_LDX | BPF_MSH | BPF_B: /* tmp = A */ *insn++ = BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_A); @@ -428,7 +428,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, *insn = BPF_MOV64_REG(BPF_REG_A, BPF_REG_TMP); break;
- /* RET_K is remaped into 2 insns. RET_A case doesn't need an + /* RET_K is remapped into 2 insns. RET_A case doesn't need an * extra mov as EBPF_REG_0 is already mapped into BPF_REG_A. */ case BPF_RET | BPF_A: diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h index 9942c6ec21..4abe79c536 100644 --- a/lib/dmadev/rte_dmadev.h +++ b/lib/dmadev/rte_dmadev.h @@ -533,7 +533,7 @@ struct rte_dma_port_param { * @note If some fields can not be supported by the * hardware/driver, then the driver ignores those fields. * Please check driver-specific documentation for limitations - * and capablites. + * and capabilities. */ __extension__ struct { @@ -731,7 +731,7 @@ enum rte_dma_status_code { /** The operation completed successfully. */ RTE_DMA_STATUS_SUCCESSFUL, /** The operation failed to complete due abort by user. - * This is mainly used when processing dev_stop, user could modidy the + * This is mainly used when processing dev_stop, user could modify the * descriptors (e.g. change one bit to tell hardware abort this job), * it allows outstanding requests to be complete as much as possible, * so reduce the time to stop the device. diff --git a/lib/eal/arm/include/rte_cycles_32.h b/lib/eal/arm/include/rte_cycles_32.h index f79718ce8c..cec4d69e7a 100644 --- a/lib/eal/arm/include/rte_cycles_32.h +++ b/lib/eal/arm/include/rte_cycles_32.h @@ -30,7 +30,7 @@ extern "C" {
/** * This call is easily portable to any architecture, however, - * it may require a system call and inprecise for some tasks. + * it may require a system call and imprecise for some tasks. */ static inline uint64_t __rte_rdtsc_syscall(void) diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c index 10aa91cc09..9f720bdc8f 100644 --- a/lib/eal/freebsd/eal_interrupts.c +++ b/lib/eal/freebsd/eal_interrupts.c @@ -234,7 +234,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
rte_spinlock_lock(&intr_lock);
- /* check if the insterrupt source for the fd is existent */ + /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) break; @@ -288,7 +288,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
rte_spinlock_lock(&intr_lock);
- /* check if the insterrupt source for the fd is existent */ + /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) break; diff --git a/lib/eal/include/generic/rte_pflock.h b/lib/eal/include/generic/rte_pflock.h index b9de063c89..e7bb29b3c5 100644 --- a/lib/eal/include/generic/rte_pflock.h +++ b/lib/eal/include/generic/rte_pflock.h @@ -157,7 +157,7 @@ rte_pflock_write_lock(rte_pflock_t *pf) uint16_t ticket, w;
/* Acquire ownership of write-phase. - * This is same as rte_tickelock_lock(). + * This is same as rte_ticketlock_lock(). */ ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED); rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE); diff --git a/lib/eal/include/rte_malloc.h b/lib/eal/include/rte_malloc.h index ed02e15119..3892519fab 100644 --- a/lib/eal/include/rte_malloc.h +++ b/lib/eal/include/rte_malloc.h @@ -58,7 +58,7 @@ rte_malloc(const char *type, size_t size, unsigned align) __rte_alloc_size(2);
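The corrected comment compares the phase-fair lock's write-phase acquisition to `rte_ticketlock_lock()`. The ticket scheme it refers to can be sketched with C11 atomics (a simplified standalone version for illustration, not DPDK's implementation):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical minimal ticket lock: 'in' hands out tickets, 'out' is the
 * ticket currently being served; FIFO fairness follows from the order in
 * which tickets are taken. */
struct toy_ticketlock {
	_Atomic uint16_t in;  /* next ticket to hand out */
	_Atomic uint16_t out; /* ticket now allowed to proceed */
};

static void
toy_ticket_lock(struct toy_ticketlock *l)
{
	/* take a ticket; relaxed is enough, the acquire is on the wait */
	uint16_t ticket = atomic_fetch_add_explicit(&l->in, 1,
						    memory_order_relaxed);

	/* spin until our ticket is served (the role played by
	 * rte_wait_until_equal_16() in the code this comment refers to) */
	while (atomic_load_explicit(&l->out, memory_order_acquire) != ticket)
		;
}

static void
toy_ticket_unlock(struct toy_ticketlock *l)
{
	uint16_t served = atomic_load_explicit(&l->out,
					       memory_order_relaxed);

	/* serve the next ticket; release publishes the critical section */
	atomic_store_explicit(&l->out, (uint16_t)(served + 1),
			      memory_order_release);
}
```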
/** - * Allocate zero'ed memory from the heap. + * Allocate zeroed memory from the heap. * * Equivalent to rte_malloc() except that the memory zone is * initialised with zeros. In NUMA systems, the memory allocated resides on the @@ -189,7 +189,7 @@ rte_malloc_socket(const char *type, size_t size, unsigned align, int socket) __rte_alloc_size(2);
/** - * Allocate zero'ed memory from the heap. + * Allocate zeroed memory from the heap. * * Equivalent to rte_malloc() except that the memory zone is * initialised with zeros. diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index 621e43626e..c3e9a08822 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -589,7 +589,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
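The rte_zmalloc() contract documented above ("equivalent to rte_malloc() except that the memory zone is initialised with zeros") can be mimicked on the standard heap. A hedged sketch of just that contract, not the DPDK allocator:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the documented contract: allocate 'size'
 * bytes and return them zero-filled, or NULL on failure. The real
 * rte_zmalloc() additionally takes a type tag and an alignment, and
 * draws from hugepage-backed NUMA-aware heaps; none of that is
 * modeled here. */
static void *
toy_zmalloc(size_t size)
{
	void *p = malloc(size);

	if (p != NULL)
		memset(p, 0, size); /* the only difference from rte_malloc */
	return p;
}
```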
rte_spinlock_lock(&intr_lock);
- /* check if the insterrupt source for the fd is existent */ + /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) { if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) break; @@ -639,7 +639,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
rte_spinlock_lock(&intr_lock);
- /* check if the insterrupt source for the fd is existent */ + /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) break; diff --git a/lib/eal/linux/eal_vfio.h b/lib/eal/linux/eal_vfio.h index 6ebaca6a0c..c5d5f70548 100644 --- a/lib/eal/linux/eal_vfio.h +++ b/lib/eal/linux/eal_vfio.h @@ -103,7 +103,7 @@ struct vfio_group { typedef int (*vfio_dma_func_t)(int);
/* Custom memory region DMA mapping function prototype. - * Takes VFIO container fd, virtual address, phisical address, length and + * Takes VFIO container fd, virtual address, physical address, length and * operation type (0 to unmap 1 for map) as a parameters. * Returns 0 on success, -1 on error. **/ diff --git a/lib/eal/windows/eal_windows.h b/lib/eal/windows/eal_windows.h index 23ead6d30c..245aa60344 100644 --- a/lib/eal/windows/eal_windows.h +++ b/lib/eal/windows/eal_windows.h @@ -63,7 +63,7 @@ unsigned int eal_socket_numa_node(unsigned int socket_id); * @param arg * Argument to the called function. * @return - * 0 on success, netagive error code on failure. + * 0 on success, negative error code on failure. */ int eal_intr_thread_schedule(void (*func)(void *arg), void *arg);
diff --git a/lib/eal/windows/include/dirent.h b/lib/eal/windows/include/dirent.h index 869a598378..34eb077f8c 100644 --- a/lib/eal/windows/include/dirent.h +++ b/lib/eal/windows/include/dirent.h @@ -440,7 +440,7 @@ opendir(const char *dirname) * display correctly on console. The problem can be fixed in two ways: * (1) change the character set of console to 1252 using chcp utility * and use Lucida Console font, or (2) use _cprintf function when - * writing to console. The _cprinf() will re-encode ANSI strings to the + * writing to console. The _cprintf() will re-encode ANSI strings to the * console code page so many non-ASCII characters will display correctly. */ static struct dirent* @@ -579,7 +579,7 @@ dirent_mbstowcs_s( wcstr[n] = 0; }
- /* Length of resuting multi-byte string WITH zero + /* Length of resulting multi-byte string WITH zero *terminator */ if (pReturnValue) diff --git a/lib/eal/windows/include/fnmatch.h b/lib/eal/windows/include/fnmatch.h index c272f65ccd..c6b226bd5d 100644 --- a/lib/eal/windows/include/fnmatch.h +++ b/lib/eal/windows/include/fnmatch.h @@ -26,14 +26,14 @@ extern "C" { #define FNM_PREFIX_DIRS 0x20
/** - * This function is used for searhing a given string source + * This function is used for searching a given string source * with the given regular expression pattern. * * @param pattern * regular expression notation describing the pattern to match * * @param string - * source string to searcg for the pattern + * source string to search for the pattern * * @param flag * containing information about the pattern diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h index 915afd9d27..f2ee1a9ce9 100644 --- a/lib/eal/x86/include/rte_atomic.h +++ b/lib/eal/x86/include/rte_atomic.h @@ -60,7 +60,7 @@ extern "C" { * Basic idea is to use lock prefixed add with some dummy memory location * as the destination. From their experiments 128B(2 cache lines) below * current stack pointer looks like a good candidate. - * So below we use that techinque for rte_smp_mb() implementation. + * So below we use that technique for rte_smp_mb() implementation. */
static __rte_always_inline void diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 809416d9b7..3182b52c23 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -3334,7 +3334,7 @@ handle_rxa_get_queue_conf(const char *cmd __rte_unused, token = strtok(NULL, "\0"); if (token != NULL) RTE_EDEV_LOG_ERR("Extra parameters passed to eventdev" - " telemetry command, igrnoring"); + " telemetry command, ignoring");
if (rte_event_eth_rx_adapter_queue_conf_get(rx_adapter_id, eth_dev_id, rx_queue_id, &queue_conf)) { @@ -3398,7 +3398,7 @@ handle_rxa_get_queue_stats(const char *cmd __rte_unused, token = strtok(NULL, "\0"); if (token != NULL) RTE_EDEV_LOG_ERR("Extra parameters passed to eventdev" - " telemetry command, igrnoring"); + " telemetry command, ignoring");
if (rte_event_eth_rx_adapter_queue_stats_get(rx_adapter_id, eth_dev_id, rx_queue_id, &q_stats)) { @@ -3460,7 +3460,7 @@ handle_rxa_queue_stats_reset(const char *cmd __rte_unused, token = strtok(NULL, "\0"); if (token != NULL) RTE_EDEV_LOG_ERR("Extra parameters passed to eventdev" - " telemetry command, igrnoring"); + " telemetry command, ignoring");
if (rte_event_eth_rx_adapter_queue_stats_reset(rx_adapter_id, eth_dev_id, diff --git a/lib/fib/rte_fib.c b/lib/fib/rte_fib.c index 6ca180d7e7..0cced97a77 100644 --- a/lib/fib/rte_fib.c +++ b/lib/fib/rte_fib.c @@ -40,10 +40,10 @@ EAL_REGISTER_TAILQ(rte_fib_tailq) struct rte_fib { char name[RTE_FIB_NAMESIZE]; enum rte_fib_type type; /**< Type of FIB struct */ - struct rte_rib *rib; /**< RIB helper datastruct */ + struct rte_rib *rib; /**< RIB helper datastructure */ void *dp; /**< pointer to the dataplane struct*/ - rte_fib_lookup_fn_t lookup; /**< fib lookup function */ - rte_fib_modify_fn_t modify; /**< modify fib datastruct */ + rte_fib_lookup_fn_t lookup; /**< FIB lookup function */ + rte_fib_modify_fn_t modify; /**< modify FIB datastructure */ uint64_t def_nh; };
diff --git a/lib/fib/rte_fib.h b/lib/fib/rte_fib.h index b3c59dfaaa..e592d3251a 100644 --- a/lib/fib/rte_fib.h +++ b/lib/fib/rte_fib.h @@ -189,7 +189,7 @@ rte_fib_lookup_bulk(struct rte_fib *fib, uint32_t *ips, * FIB object handle * @return * Pointer on the dataplane struct on success - * NULL othervise + * NULL otherwise */ void * rte_fib_get_dp(struct rte_fib *fib); @@ -201,7 +201,7 @@ rte_fib_get_dp(struct rte_fib *fib); * FIB object handle * @return * Pointer on the RIB on success - * NULL othervise + * NULL otherwise */ struct rte_rib * rte_fib_get_rib(struct rte_fib *fib); diff --git a/lib/fib/rte_fib6.c b/lib/fib/rte_fib6.c index be79efe004..eebee297d6 100644 --- a/lib/fib/rte_fib6.c +++ b/lib/fib/rte_fib6.c @@ -40,10 +40,10 @@ EAL_REGISTER_TAILQ(rte_fib6_tailq) struct rte_fib6 { char name[FIB6_NAMESIZE]; enum rte_fib6_type type; /**< Type of FIB struct */ - struct rte_rib6 *rib; /**< RIB helper datastruct */ + struct rte_rib6 *rib; /**< RIB helper datastructure */ void *dp; /**< pointer to the dataplane struct*/ - rte_fib6_lookup_fn_t lookup; /**< fib lookup function */ - rte_fib6_modify_fn_t modify; /**< modify fib datastruct */ + rte_fib6_lookup_fn_t lookup; /**< FIB lookup function */ + rte_fib6_modify_fn_t modify; /**< modify FIB datastructure */ uint64_t def_nh; };
diff --git a/lib/fib/rte_fib6.h b/lib/fib/rte_fib6.h index 95879af96d..cb133719e1 100644 --- a/lib/fib/rte_fib6.h +++ b/lib/fib/rte_fib6.h @@ -184,7 +184,7 @@ rte_fib6_lookup_bulk(struct rte_fib6 *fib, * FIB6 object handle * @return * Pointer on the dataplane struct on success - * NULL othervise + * NULL otherwise */ void * rte_fib6_get_dp(struct rte_fib6 *fib); @@ -196,7 +196,7 @@ rte_fib6_get_dp(struct rte_fib6 *fib); * FIB object handle * @return * Pointer on the RIB6 on success - * NULL othervise + * NULL otherwise */ struct rte_rib6 * rte_fib6_get_rib(struct rte_fib6 *fib); diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c index b8b08404b6..9a91e47122 100644 --- a/lib/ipsec/ipsec_telemetry.c +++ b/lib/ipsec/ipsec_telemetry.c @@ -236,7 +236,7 @@ RTE_INIT(rte_ipsec_telemetry_init) "Return list of IPsec SAs with telemetry enabled."); rte_telemetry_register_cmd("/ipsec/sa/stats", handle_telemetry_cmd_ipsec_sa_stats, - "Returns IPsec SA stastistics. Parameters: int sa_spi"); + "Returns IPsec SA statistics. Parameters: int sa_spi"); rte_telemetry_register_cmd("/ipsec/sa/details", handle_telemetry_cmd_ipsec_sa_details, "Returns IPsec SA configuration. Parameters: int sa_spi"); diff --git a/lib/ipsec/rte_ipsec_sad.h b/lib/ipsec/rte_ipsec_sad.h index b65d295831..a3ae57df7e 100644 --- a/lib/ipsec/rte_ipsec_sad.h +++ b/lib/ipsec/rte_ipsec_sad.h @@ -153,7 +153,7 @@ rte_ipsec_sad_destroy(struct rte_ipsec_sad *sad); * @param keys * Array of keys to be looked up in the SAD * @param sa - * Pointer assocoated with the keys. + * Pointer associated with the keys. * If the lookup for the given key failed, then corresponding sa * will be NULL * @param n diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c index 1e51482c92..cdb70af0cb 100644 --- a/lib/ipsec/sa.c +++ b/lib/ipsec/sa.c @@ -362,7 +362,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
memcpy(sa->hdr, prm->tun.hdr, prm->tun.hdr_len);
- /* insert UDP header if UDP encapsulation is inabled */ + /* insert UDP header if UDP encapsulation is enabled */ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) { struct rte_udp_hdr *udph = (struct rte_udp_hdr *) &sa->hdr[prm->tun.hdr_len]; diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h index 321a419c71..3d6ddd6773 100644 --- a/lib/mbuf/rte_mbuf_core.h +++ b/lib/mbuf/rte_mbuf_core.h @@ -8,7 +8,7 @@
/** * @file - * This file contains definion of RTE mbuf structure itself, + * This file contains definition of RTE mbuf structure itself, * packet offload flags and some related macros. * For majority of DPDK entities, it is not recommended to include * this file directly, use include <rte_mbuf.h> instead. diff --git a/lib/meson.build b/lib/meson.build index 5363f0d184..963287b174 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -3,7 +3,7 @@
# process all libraries equally, as far as possible -# "core" libs first, then others alphebetically as far as possible +# "core" libs first, then others alphabetically as far as possible # NOTE: for speed of meson runs, the dependencies in the subdirectories # sometimes skip deps that would be implied by others, e.g. if mempool is # given as a dep, no need to mention ring. This is especially true for the diff --git a/lib/net/rte_l2tpv2.h b/lib/net/rte_l2tpv2.h index b90e36cf12..938a993b48 100644 --- a/lib/net/rte_l2tpv2.h +++ b/lib/net/rte_l2tpv2.h @@ -143,7 +143,7 @@ struct rte_l2tpv2_msg_without_length { /** * L2TPv2 message Header contains all options except ns_nr(length, * offset size, offset padding). - * Ns and Nr MUST be toghter. + * Ns and Nr MUST be together. */ struct rte_l2tpv2_msg_without_ns_nr { rte_be16_t length; /**< length(16) */ @@ -155,7 +155,7 @@ struct rte_l2tpv2_msg_without_ns_nr {
/** * L2TPv2 message Header contains all options except ns_nr(length, ns, nr). - * offset size and offset padding MUST be toghter. + * offset size and offset padding MUST be together. */ struct rte_l2tpv2_msg_without_offset { rte_be16_t length; /**< length(16) */ diff --git a/lib/pipeline/rte_swx_ctl.h b/lib/pipeline/rte_swx_ctl.h index 46d05823e1..82e62e70a7 100644 --- a/lib/pipeline/rte_swx_ctl.h +++ b/lib/pipeline/rte_swx_ctl.h @@ -369,7 +369,7 @@ struct rte_swx_table_stats { uint64_t n_pkts_miss;
/** Number of packets (with either lookup hit or miss) per pipeline - * action. Array of pipeline *n_actions* elements indedex by the + * action. Array of pipeline *n_actions* elements indexed by the * pipeline-level *action_id*, therefore this array has the same size * for all the tables within the same pipeline. */ @@ -629,7 +629,7 @@ struct rte_swx_learner_stats { uint64_t n_pkts_forget;
/** Number of packets (with either lookup hit or miss) per pipeline action. Array of - * pipeline *n_actions* elements indedex by the pipeline-level *action_id*, therefore this + * pipeline *n_actions* elements indexed by the pipeline-level *action_id*, therefore this * array has the same size for all the tables within the same pipeline. */ uint64_t *n_pkts_action; diff --git a/lib/pipeline/rte_swx_pipeline_internal.h b/lib/pipeline/rte_swx_pipeline_internal.h index 1921fdcd78..fa944c95f2 100644 --- a/lib/pipeline/rte_swx_pipeline_internal.h +++ b/lib/pipeline/rte_swx_pipeline_internal.h @@ -309,7 +309,7 @@ enum instruction_type { */ INSTR_ALU_CKADD_FIELD, /* src = H */ INSTR_ALU_CKADD_STRUCT20, /* src = h.header, with sizeof(header) = 20 */ - INSTR_ALU_CKADD_STRUCT, /* src = h.hdeader, with any sizeof(header) */ + INSTR_ALU_CKADD_STRUCT, /* src = h.header, with any sizeof(header) */
/* cksub dst src * dst = dst '- src @@ -1562,7 +1562,7 @@ emit_handler(struct thread *t) return; }
- /* Header encapsulation (optionally, with prior header decasulation). */ + /* Header encapsulation (optionally, with prior header decapsulation). */ if ((t->n_headers_out == 2) && (h1->ptr + h1->n_bytes == t->ptr) && (h0->ptr == h0->ptr0)) { diff --git a/lib/pipeline/rte_swx_pipeline_spec.c b/lib/pipeline/rte_swx_pipeline_spec.c index 8e9aa44e30..07a7580ac8 100644 --- a/lib/pipeline/rte_swx_pipeline_spec.c +++ b/lib/pipeline/rte_swx_pipeline_spec.c @@ -2011,7 +2011,7 @@ rte_swx_pipeline_build_from_spec(struct rte_swx_pipeline *p, if (err_line) *err_line = 0; if (err_msg) - *err_msg = "Null pipeline arument."; + *err_msg = "Null pipeline argument."; status = -EINVAL; goto error; } diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index 6afd310e4e..25185a791c 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -621,7 +621,7 @@ power_cppc_enable_turbo(unsigned int lcore_id) return -1; }
- /* TODO: must set to max once enbling Turbo? Considering add condition: + /* TODO: must set to max once enabling Turbo? Considering add condition: * if ((pi->turbo_available) && (pi->curr_idx <= 1)) */ /* Max may have changed, so call to max function */ diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index 86f0b231b0..0bac46cda9 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -298,14 +298,14 @@ rte_regexdev_get_dev_id(const char *name); * backtracking positions remembered by any tokens inside the group. * Example RegEx is `a(?>bc|b)c` if the given patterns are `abc` and `abcc` then * `a(bc|b)c` matches both where as `a(?>bc|b)c` matches only abcc because - * atomic groups don't allow backtracing back to `b`. + * atomic groups don't allow backtracking back to `b`. * * @see struct rte_regexdev_info::regexdev_capa */
#define RTE_REGEXDEV_SUPP_PCRE_BACKTRACKING_CTRL_F (1ULL << 3) /**< RegEx device support PCRE backtracking control verbs. - * Some examples of backtracing verbs are (*COMMIT), (*ACCEPT), (*FAIL), + * Some examples of backtracking verbs are (*COMMIT), (*ACCEPT), (*FAIL), * (*SKIP), (*PRUNE). * * @see struct rte_regexdev_info::regexdev_capa @@ -1015,7 +1015,7 @@ rte_regexdev_rule_db_update(uint8_t dev_id, * @b EXPERIMENTAL: this API may change without prior notice. * * Compile local rule set and burn the complied result to the - * RegEx deive. + * RegEx device. * * @param dev_id * RegEx device identifier. diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h index 46ad584f9c..1252ca9546 100644 --- a/lib/ring/rte_ring_core.h +++ b/lib/ring/rte_ring_core.h @@ -12,7 +12,7 @@
/** * @file - * This file contains definion of RTE ring structure itself, + * This file contains definition of RTE ring structure itself, * init flags and some related macros. * For majority of DPDK entities, it is not recommended to include * this file directly, use include <rte_ring.h> or <rte_ring_elem.h> diff --git a/lib/sched/rte_pie.h b/lib/sched/rte_pie.h index dfdf572311..02a987f54a 100644 --- a/lib/sched/rte_pie.h +++ b/lib/sched/rte_pie.h @@ -252,7 +252,7 @@ _rte_pie_drop(const struct rte_pie_config *pie_cfg, }
/** - * @brief Decides if new packet should be enqeued or dropped for non-empty queue + * @brief Decides if new packet should be enqueued or dropped for non-empty queue * * @param pie_cfg [in] config pointer to a PIE configuration parameter structure * @param pie [in,out] data pointer to PIE runtime data @@ -319,7 +319,7 @@ rte_pie_enqueue_nonempty(const struct rte_pie_config *pie_cfg, }
/** - * @brief Decides if new packet should be enqeued or dropped + * @brief Decides if new packet should be enqueued or dropped * Updates run time data and gives verdict whether to enqueue or drop the packet. * * @param pie_cfg [in] config pointer to a PIE configuration parameter structure @@ -330,7 +330,7 @@ rte_pie_enqueue_nonempty(const struct rte_pie_config *pie_cfg, * * @return Operation status * @retval 0 enqueue the packet - * @retval 1 drop the packet based on drop probility criteria + * @retval 1 drop the packet based on drop probability criteria */ static inline int __rte_experimental diff --git a/lib/sched/rte_red.h b/lib/sched/rte_red.h index 36273cac64..f5843dab1b 100644 --- a/lib/sched/rte_red.h +++ b/lib/sched/rte_red.h @@ -303,7 +303,7 @@ __rte_red_drop(const struct rte_red_config *red_cfg, struct rte_red *red) }
/** - * @brief Decides if new packet should be enqeued or dropped in queue non-empty case + * @brief Decides if new packet should be enqueued or dropped in queue non-empty case * * @param red_cfg [in] config pointer to a RED configuration parameter structure * @param red [in,out] data pointer to RED runtime data @@ -361,7 +361,7 @@ rte_red_enqueue_nonempty(const struct rte_red_config *red_cfg, }
/** - * @brief Decides if new packet should be enqeued or dropped + * @brief Decides if new packet should be enqueued or dropped * Updates run time data based on new queue size value. * Based on new queue average and RED configuration parameters * gives verdict whether to enqueue or drop the packet. diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c index ed44808f7b..62b3d2e315 100644 --- a/lib/sched/rte_sched.c +++ b/lib/sched/rte_sched.c @@ -239,7 +239,7 @@ struct rte_sched_port { int socket;
/* Timing */ - uint64_t time_cpu_cycles; /* Current CPU time measured in CPU cyles */ + uint64_t time_cpu_cycles; /* Current CPU time measured in CPU cycles */ uint64_t time_cpu_bytes; /* Current CPU time measured in bytes */ uint64_t time; /* Current NIC TX time measured in bytes */ struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */ diff --git a/lib/sched/rte_sched.h b/lib/sched/rte_sched.h index 484dbdcc3d..3c625ba169 100644 --- a/lib/sched/rte_sched.h +++ b/lib/sched/rte_sched.h @@ -360,7 +360,7 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, * * Hierarchical scheduler subport bandwidth profile add * Note that this function is safe to use in runtime for adding new - * subport bandwidth profile as it doesn't have any impact on hiearchical + * subport bandwidth profile as it doesn't have any impact on hierarchical * structure of the scheduler. * @param port * Handle to port scheduler instance diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h index f93e5f3f95..c1383c2e57 100644 --- a/lib/table/rte_swx_table.h +++ b/lib/table/rte_swx_table.h @@ -216,7 +216,7 @@ typedef int * operations into the same table. * * The typical reason an implementation may choose to split the table lookup - * operation into multiple steps is to hide the latency of the inherrent memory + * operation into multiple steps is to hide the latency of the inherent memory * read operations: before a read operation with the source data likely not in * the CPU cache, the source data prefetch is issued and the table lookup * operation is postponed in favor of some other unrelated work, which the CPU diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h index 62988d2856..05863cc90b 100644 --- a/lib/table/rte_swx_table_selector.h +++ b/lib/table/rte_swx_table_selector.h @@ -155,7 +155,7 @@ rte_swx_table_selector_group_set(void *table, * mechanism allows for multiple concurrent select operations into the same table. 
* * The typical reason an implementation may choose to split the operation into multiple steps is to - * hide the latency of the inherrent memory read operations: before a read operation with the + * hide the latency of the inherent memory read operations: before a read operation with the * source data likely not in the CPU cache, the source data prefetch is issued and the operation is * postponed in favor of some other unrelated work, which the CPU executes in parallel with the * source data being fetched into the CPU cache; later on, the operation is resumed, this time with diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index a7483167d4..e5ccfe47f7 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -534,7 +534,7 @@ telemetry_legacy_init(void) } rc = pthread_create(&t_old, NULL, socket_listener, &v1_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create legcay socket thread: %s\n", + TMTY_LOG(ERR, "Error with create legacy socket thread: %s\n", strerror(rc)); close(v1_socket.sock); v1_socket.sock = -1; diff --git a/lib/telemetry/telemetry_json.h b/lib/telemetry/telemetry_json.h index f02a12f5b0..db70690274 100644 --- a/lib/telemetry/telemetry_json.h +++ b/lib/telemetry/telemetry_json.h @@ -23,7 +23,7 @@ /** * @internal * Copies a value into a buffer if the buffer has enough available space. - * Nothing written to buffer if an overflow ocurs. + * Nothing written to buffer if an overflow occurs. * This function is not for use for values larger than given buffer length. */ __rte_format_printf(3, 4) diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 550b0ee8b5..9b690dc81e 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -1115,7 +1115,7 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, struct uffdio_register reg_struct;
/* - * Let's register all the mmap'ed area to ensure + * Let's register all the mmapped area to ensure * alignment on page boundary. */ reg_struct.range.start = (uint64_t)(uintptr_t)reg->mmap_addr; @@ -1177,7 +1177,7 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, msg->fd_num = 0; send_vhost_reply(main_fd, msg);
- /* Wait for qemu to acknolwedge it's got the addresses + /* Wait for qemu to acknowledge it got the addresses * we've got to wait before we're allowed to generate faults. */ if (read_vhost_message(main_fd, &ack_msg) <= 0) {
From: Dongdong Liu liudongdong3@huawei.com
Add VLAN filter query in dump file.
Signed-off-by: Dongdong Liu liudongdong3@huawei.com Signed-off-by: Min Hu (Connor) humin29@huawei.com --- drivers/net/hns3/hns3_dump.c | 80 +++++++++++++++++++++++++++++++----- 1 file changed, 69 insertions(+), 11 deletions(-)
diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c index 1007b09bd2..8268506f6f 100644 --- a/drivers/net/hns3/hns3_dump.c +++ b/drivers/net/hns3/hns3_dump.c @@ -4,11 +4,10 @@
#include <rte_malloc.h>
-#include "hns3_ethdev.h" #include "hns3_common.h" -#include "hns3_rxtx.h" -#include "hns3_regs.h" #include "hns3_logs.h" +#include "hns3_regs.h" +#include "hns3_rxtx.h" #include "hns3_dump.h"
#define HNS3_BD_DW_NUM 8 @@ -394,11 +393,6 @@ hns3_get_rxtx_queue_enable_state(FILE *file, struct rte_eth_dev *dev) uint32_t nb_tx_queues; uint32_t bitmap_size;
- bitmap_size = (hw->tqps_num * sizeof(uint32_t) + HNS3_UINT32_BIT) / - HNS3_UINT32_BIT; - rx_queue_state = (uint32_t *)rte_zmalloc(NULL, bitmap_size, 0); - tx_queue_state = (uint32_t *)rte_zmalloc(NULL, bitmap_size, 0); - nb_rx_queues = dev->data->nb_rx_queues; nb_tx_queues = dev->data->nb_tx_queues; if (nb_rx_queues == 0) { @@ -410,6 +404,21 @@ hns3_get_rxtx_queue_enable_state(FILE *file, struct rte_eth_dev *dev) return; }
+ bitmap_size = (hw->tqps_num * sizeof(uint32_t) + HNS3_UINT32_BIT) / + HNS3_UINT32_BIT; + rx_queue_state = (uint32_t *)rte_zmalloc(NULL, bitmap_size, 0); + if (rx_queue_state == NULL) { + hns3_err(hw, "Failed to allocate memory for rx queue state!"); + return; + } + + tx_queue_state = (uint32_t *)rte_zmalloc(NULL, bitmap_size, 0); + if (tx_queue_state == NULL) { + hns3_err(hw, "Failed to allocate memory for tx queue state!"); + rte_free(rx_queue_state); + return; + } + fprintf(file, "\t -- enable state:\n"); hns3_get_queue_enable_state(hw, rx_queue_state, nb_rx_queues, true); hns3_display_queue_enable_state(file, rx_queue_state, nb_rx_queues, @@ -448,6 +457,51 @@ hns3_get_rxtx_queue_info(FILE *file, struct rte_eth_dev *dev) hns3_get_rxtx_queue_enable_state(file, dev); }
+static int +hns3_get_vlan_filter_cfg(FILE *file, struct hns3_hw *hw) +{ +#define HNS3_FILTER_TYPE_VF 0 +#define HNS3_FILTER_TYPE_PORT 1 +#define HNS3_FILTER_FE_NIC_INGRESS_B BIT(0) +#define HNS3_FILTER_FE_NIC_EGRESS_B BIT(1) + struct hns3_vlan_filter_ctrl_cmd *req; + struct hns3_cmd_desc desc; + uint8_t i; + int ret; + + static const uint32_t vlan_filter_type[] = { + HNS3_FILTER_TYPE_PORT, + HNS3_FILTER_TYPE_VF + }; + + for (i = 0; i < RTE_DIM(vlan_filter_type); i++) { + hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_VLAN_FILTER_CTRL, + true); + req = (struct hns3_vlan_filter_ctrl_cmd *)desc.data; + req->vlan_type = vlan_filter_type[i]; + req->vf_id = HNS3_PF_FUNC_ID; + ret = hns3_cmd_send(hw, &desc, 1); + if (ret != 0) { + hns3_err(hw, + "NIC IMP exec ret=%d desc_num=%d optcode=0x%x!", + ret, 1, rte_le_to_cpu_16(desc.opcode)); + return ret; + } + fprintf(file, + "\t -- %s VLAN filter configuration\n" + "\t nic_ingress :%s\n" + "\t nic_egress :%s\n", + req->vlan_type == HNS3_FILTER_TYPE_PORT ? + "Port" : "VF", + req->vlan_fe & HNS3_FILTER_FE_NIC_INGRESS_B ? + "Enable" : "Disable", + req->vlan_fe & HNS3_FILTER_FE_NIC_EGRESS_B ? + "Enable" : "Disable"); + } + + return 0; +} + static int hns3_get_vlan_rx_offload_cfg(FILE *file, struct hns3_hw *hw) { @@ -583,6 +637,10 @@ hns3_get_vlan_config_info(FILE *file, struct hns3_hw *hw) int ret;
fprintf(file, " - VLAN Config Info:\n"); + ret = hns3_get_vlan_filter_cfg(file, hw); + if (ret < 0) + return; + ret = hns3_get_vlan_rx_offload_cfg(file, hw); if (ret < 0) return; @@ -619,7 +677,7 @@ hns3_get_tm_conf_port_node_info(FILE *file, struct hns3_tm_conf *conf) return;
fprintf(file, - " port_node: \n" + " port_node:\n" " node_id=%u reference_count=%u shaper_profile_id=%d\n", conf->root->id, conf->root->reference_count, conf->root->shaper_profile ? @@ -637,7 +695,7 @@ hns3_get_tm_conf_tc_node_info(FILE *file, struct hns3_tm_conf *conf) if (conf->nb_tc_node == 0) return;
- fprintf(file, " tc_node: \n"); + fprintf(file, " tc_node:\n"); memset(tc_node, 0, sizeof(tc_node)); TAILQ_FOREACH(tm_node, tc_list, node) { tidx = hns3_tm_calc_node_tc_no(conf, tm_node->id); @@ -705,7 +763,7 @@ hns3_get_tm_conf_queue_node_info(FILE *file, struct hns3_tm_conf *conf, return;
fprintf(file, - " queue_node: \n" + " queue_node:\n" " tx queue id | mapped tc (8 mean node not exist)\n");
memset(queue_node, 0, sizeof(queue_node));
From: Yunjian Wang wangyunjian@huawei.com
In the bond_ethdev_rx_burst() function, the validity of 'active_slave' is checked as follows: if (++active_slave == slave_count) active_slave = 0; However, 'active_slave' may already equal 'slave_count' when a slave is down. In that case the increment skips the wrap-around and indexes past the end of the active slaves array, causing a buffer overflow. This patch fixes the issue by using '>=' instead of '=='.
Fixes: e1110e977648 ("net/bonding: fix Rx slave fairness") Cc: stable@dpdk.org
Signed-off-by: Lei Ji jilei8@huawei.com Signed-off-by: Yunjian Wang wangyunjian@huawei.com Acked-by: Min Hu (Connor) humin29@huawei.com --- drivers/net/bonding/rte_eth_bond_pmd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c index 09636321cd..828fb5f96d 100644 --- a/drivers/net/bonding/rte_eth_bond_pmd.c +++ b/drivers/net/bonding/rte_eth_bond_pmd.c @@ -82,7 +82,7 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) bufs + num_rx_total, nb_pkts); num_rx_total += num_rx_slave; nb_pkts -= num_rx_slave; - if (++active_slave == slave_count) + if (++active_slave >= slave_count) active_slave = 0; }
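The off-by-one fixed above can be reduced to a standalone sketch. Note that next_active_slave() is a hypothetical helper mirroring only the loop's index update, not an actual bonding PMD function:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper modelling the Rx-burst cursor update: advance the
 * active-slave index and wrap it back to 0 at the end of the array. */
uint16_t
next_active_slave(uint16_t active_slave, uint16_t slave_count)
{
	/* '>=' instead of '==': if a slave went down, the stale cursor may
	 * already equal slave_count; '==' would miss the wrap and let the
	 * caller index one past the end of the slaves array. */
	if (++active_slave >= slave_count)
		active_slave = 0;
	return active_slave;
}
```

With '==', a cursor left at slave_count would increment to slave_count + 1 and never wrap again; '>=' makes every out-of-range value fold back to 0.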
From: Yunjian Wang wangyunjian@huawei.com
When link status polling mode is used, the slave link status is queried twice and the two queries may return inconsistent results. To fix this, record only the most recently queried link state, and do so after the event has been handled.
Fixes: a45b288ef21a ("bond: support link status polling") Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang wangyunjian@huawei.com Acked-by: Min Hu (Connor) humin29@huawei.com --- drivers/net/bonding/rte_eth_bond_pmd.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c index 828fb5f96d..3be2b08128 100644 --- a/drivers/net/bonding/rte_eth_bond_pmd.c +++ b/drivers/net/bonding/rte_eth_bond_pmd.c @@ -2400,9 +2400,6 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg) * event callback */ if (slave_ethdev->data->dev_link.link_status != internals->slaves[i].last_link_status) { - internals->slaves[i].last_link_status = - slave_ethdev->data->dev_link.link_status; - bond_ethdev_lsc_event_callback(internals->slaves[i].port_id, RTE_ETH_EVENT_INTR_LSC, &bonded_ethdev->data->port_id, @@ -2901,7 +2898,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
uint8_t lsc_flag = 0; int valid_slave = 0; - uint16_t active_pos; + uint16_t active_pos, slave_idx; uint16_t i;
if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL) @@ -2922,6 +2919,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type, for (i = 0; i < internals->slave_count; i++) { if (internals->slaves[i].port_id == port_id) { valid_slave = 1; + slave_idx = i; break; } } @@ -3010,6 +3008,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type, * slaves */ bond_ethdev_link_update(bonded_eth_dev, 0); + internals->slaves[slave_idx].last_link_status = link.link_status;
if (lsc_flag) { /* Cancel any possible outstanding interrupts if delays are enabled */
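The ordering issue this patch addresses can be modelled with a minimal sketch. The toy_slave structure and toy_handle_poll() below are illustrative stand-ins (not the bonding PMD's real types): the key point is that last_link_status is recorded only after the change has been acted on, so the handler's own comparison still sees the transition:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: latest polled state vs. the state seen by the last
 * processed link-status-change event. */
struct toy_slave {
	uint8_t link_status;
	uint8_t last_link_status;
};

/* Process one poll result; returns 1 if a change event was handled. */
int
toy_handle_poll(struct toy_slave *s, uint8_t polled)
{
	s->link_status = polled;
	if (s->link_status == s->last_link_status)
		return 0;			/* no change: no event */
	/* ... the LSC callback would run here ... */
	s->last_link_status = s->link_status;	/* record only AFTER acting */
	return 1;
}

/* One up-transition produces exactly one event; a repeat poll is quiet. */
int
toy_demo(void)
{
	struct toy_slave s = { 0, 0 };
	int first = toy_handle_poll(&s, 1);	/* link came up: event */
	int second = toy_handle_poll(&s, 1);	/* unchanged: no event */
	return first == 1 && second == 0;
}
```

Recording last_link_status before invoking the callback (as the removed code did) would make the callback compare two equal values and treat the change as already handled.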
The rte_eth_dev_info.flow_type_rss_offloads field is populated in terms of RTE_ETH_RSS_* bits. If a PMD sets RTE_ETH_RSS_L3_SRC_ONLY in dev_info->flow_type_rss_offloads, testpmd displays "user defined 63" when running 'show port info 0', because testpmd uses flowtype_to_str() to display the PMD's supported RSS offloads. In fact, that function is meant to display flow types in the FDIR commands for i40e and ixgbe. This patch uses the RTE_ETH_RSS_* bits to display the RSS offloads supported by a PMD.
Fixes: b12964f621dc ("ethdev: unification of RSS offload types") Cc: stable@dpdk.org
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Ferruh Yigit ferruh.yigit@xilinx.com --- app/test-pmd/config.c | 40 ++++++++++++++++++++++++++-------------- app/test-pmd/testpmd.h | 2 ++ 2 files changed, 28 insertions(+), 14 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index a7fffc3d1d..2849ee7e7c 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -66,8 +66,6 @@
#define NS_PER_SEC 1E9
-static char *flowtype_to_str(uint16_t flow_type); - static const struct { enum tx_pkt_split split; const char *name; @@ -674,6 +672,19 @@ print_dev_capabilities(uint64_t capabilities) } }
+const char * +rsstypes_to_str(uint64_t rss_type) +{ + uint16_t i; + + for (i = 0; rss_type_table[i].str != NULL; i++) { + if (rss_type_table[i].rss_type == rss_type) + return rss_type_table[i].str; + } + + return NULL; +} + void port_infos_display(portid_t port_id) { @@ -778,19 +789,20 @@ port_infos_display(portid_t port_id) if (!dev_info.flow_type_rss_offloads) printf("No RSS offload flow type is supported.\n"); else { + uint64_t rss_offload_types = dev_info.flow_type_rss_offloads; uint16_t i; - char *p;
printf("Supported RSS offload flow types:\n"); - for (i = RTE_ETH_FLOW_UNKNOWN + 1; - i < sizeof(dev_info.flow_type_rss_offloads) * CHAR_BIT; i++) { - if (!(dev_info.flow_type_rss_offloads & (1ULL << i))) - continue; - p = flowtype_to_str(i); - if (p) - printf(" %s\n", p); - else - printf(" user defined %d\n", i); + for (i = 0; i < sizeof(rss_offload_types) * CHAR_BIT; i++) { + uint64_t rss_offload = RTE_BIT64(i); + if ((rss_offload_types & rss_offload) != 0) { + const char *p = rsstypes_to_str(rss_offload); + if (p) + printf(" %s\n", p); + else + printf(" user defined %u\n", + i); + } } }
@@ -4811,6 +4823,8 @@ set_record_burst_stats(uint8_t on_off) record_burst_stats = on_off; }
+#if defined(RTE_NET_I40E) || defined(RTE_NET_IXGBE) + static char* flowtype_to_str(uint16_t flow_type) { @@ -4854,8 +4868,6 @@ flowtype_to_str(uint16_t flow_type) return NULL; }
-#if defined(RTE_NET_I40E) || defined(RTE_NET_IXGBE) - static inline void print_fdir_mask(struct rte_eth_fdir_masks *mask) { diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 569b4300cf..d6a775c485 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -1105,6 +1105,8 @@ extern int flow_parse(const char *src, void *result, unsigned int size, struct rte_flow_item **pattern, struct rte_flow_action **actions);
+const char *rsstypes_to_str(uint64_t rss_type); + /* * Work-around of a compilation error with ICC on invocations of the * rte_be_to_cpu_16() function.
Currently, the "port config all rss xx" command uses the 'ether' name to match and set the 'RTE_ETH_RSS_L2_PAYLOAD' offload. However, other RSS commands, such as "port config <port_id> rss-hash-key" and "show port <port_id> rss-hash key", use 'l2-payload' to represent this offload. So this patch unifies the name of the 'RTE_ETH_RSS_L2_PAYLOAD' offload.
Signed-off-by: Huisong Li lihuisong@huawei.com Acked-by: Ferruh Yigit ferruh.yigit@xilinx.com --- app/test-pmd/cmdline.c | 12 ++++++------ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 +- 2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 26d95e64e0..c5e4c30c9f 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -793,8 +793,8 @@ static void cmd_help_long_parsed(void *parsed_result, "receive buffers available.\n\n"
"port config all rss (all|default|ip|tcp|udp|sctp|" - "ether|port|vxlan|geneve|nvgre|vxlan-gpe|ecpri|mpls|none|level-default|" - "level-outer|level-inner|<flowtype_id>)\n" + "l2-payload|port|vxlan|geneve|nvgre|vxlan-gpe|ecpri|mpls|ipv4-chksum|" + "none|level-default|level-outer|level-inner|<flowtype_id>)\n" " Set the RSS mode.\n\n"
"port config port-id rss reta (hash,queue)[,(hash,queue)]\n" @@ -2187,7 +2187,7 @@ cmd_config_rss_parsed(void *parsed_result, rss_conf.rss_hf = RTE_ETH_RSS_TCP; else if (!strcmp(res->value, "sctp")) rss_conf.rss_hf = RTE_ETH_RSS_SCTP; - else if (!strcmp(res->value, "ether")) + else if (!strcmp(res->value, "l2_payload")) rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD; else if (!strcmp(res->value, "port")) rss_conf.rss_hf = RTE_ETH_RSS_PORT; @@ -2308,9 +2308,9 @@ cmdline_parse_inst_t cmd_config_rss = { .f = cmd_config_rss_parsed, .data = NULL, .help_str = "port config all rss " - "all|default|eth|vlan|ip|tcp|udp|sctp|ether|port|vxlan|geneve|" - "nvgre|vxlan-gpe|l2tpv3|esp|ah|pfcp|ecpri|mpls|none|level-default|" - "level-outer|level-inner|ipv4-chksum|<flowtype_id>", + "all|default|eth|vlan|ip|tcp|udp|sctp|l2-payload|port|vxlan|geneve|" + "nvgre|vxlan-gpe|l2tpv3|esp|ah|pfcp|ecpri|mpls|ipv4-chksum|" + "none|level-default|level-outer|level-inner|<flowtype_id>", .tokens = { (void *)&cmd_config_rss_port, (void *)&cmd_config_rss_keyword, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 94792d88cc..b75adcce55 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -2285,7 +2285,7 @@ port config - RSS
Set the RSS (Receive Side Scaling) mode on or off::
- testpmd> port config all rss (all|default|eth|vlan|ip|tcp|udp|sctp|ether|port|vxlan|geneve|nvgre|vxlan-gpe|l2tpv3|esp|ah|pfcp|ecpri|mpls|none) + testpmd> port config all rss (all|default|eth|vlan|ip|tcp|udp|sctp|l2-payload|port|vxlan|geneve|nvgre|vxlan-gpe|l2tpv3|esp|ah|pfcp|ecpri|mpls|none)
RSS is on by default.
The "port config <port_id> rss-hash-key" and "show port <port_id> rss-hash key" commands both use 'rss_type_table[]' to look up the 'rss_types' value or the RSS type name. So this patch uses 'rss_type_table[]' to get the RSS types here as well. In this way, the command naturally supports more individual types.
Suggested-by: Ferruh Yigit ferruh.yigit@xilinx.com Signed-off-by: Huisong Li lihuisong@huawei.com Acked-by: Ferruh Yigit ferruh.yigit@xilinx.com --- app/test-pmd/cmdline.c | 120 ++++++-------------- app/test-pmd/config.c | 20 +++- app/test-pmd/testpmd.h | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 10 +- 4 files changed, 58 insertions(+), 93 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index c5e4c30c9f..6cb095f965 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -792,9 +792,14 @@ static void cmd_help_long_parsed(void *parsed_result, " Enable or disable packet drop on all RX queues of all ports when no " "receive buffers available.\n\n"
- "port config all rss (all|default|ip|tcp|udp|sctp|" - "l2-payload|port|vxlan|geneve|nvgre|vxlan-gpe|ecpri|mpls|ipv4-chksum|" - "none|level-default|level-outer|level-inner|<flowtype_id>)\n" + "port config all rss (all|default|level-default|level-outer|level-inner|" + "ip|tcp|udp|sctp|tunnel|vlan|none|" + "ipv4|ipv4-frag|ipv4-tcp|ipv4-udp|ipv4-sctp|ipv4-other|" + "ipv6|ipv6-frag|ipv6-tcp|ipv6-udp|ipv6-sctp|ipv6-other|ipv6-ex|ipv6-tcp-ex|ipv6-udp-ex|" + "l2-payload|port|vxlan|geneve|nvgre|gtpu|eth|s-vlan|c-vlan|" + "esp|ah|l2tpv3|pfcp|pppoe|ecpri|mpls|ipv4-chksum|l4-chksum|" + "l3-pre96|l3-pre64|l3-pre56|l3-pre48|l3-pre40|l3-pre32|" + "l2-dst-only|l2-src-only|l4-dst-only|l4-src-only|l3-dst-only|l3-src-only|<rsstype_id>)\n" " Set the RSS mode.\n\n"
"port config port-id rss reta (hash,queue)[,(hash,queue)]\n" @@ -2169,79 +2174,7 @@ cmd_config_rss_parsed(void *parsed_result, uint16_t i; int ret;
- if (!strcmp(res->value, "all")) - rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | - RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | - RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | - RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU | - RTE_ETH_RSS_ECPRI; - else if (!strcmp(res->value, "eth")) - rss_conf.rss_hf = RTE_ETH_RSS_ETH; - else if (!strcmp(res->value, "vlan")) - rss_conf.rss_hf = RTE_ETH_RSS_VLAN; - else if (!strcmp(res->value, "ip")) - rss_conf.rss_hf = RTE_ETH_RSS_IP; - else if (!strcmp(res->value, "udp")) - rss_conf.rss_hf = RTE_ETH_RSS_UDP; - else if (!strcmp(res->value, "tcp")) - rss_conf.rss_hf = RTE_ETH_RSS_TCP; - else if (!strcmp(res->value, "sctp")) - rss_conf.rss_hf = RTE_ETH_RSS_SCTP; - else if (!strcmp(res->value, "l2_payload")) - rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD; - else if (!strcmp(res->value, "port")) - rss_conf.rss_hf = RTE_ETH_RSS_PORT; - else if (!strcmp(res->value, "vxlan")) - rss_conf.rss_hf = RTE_ETH_RSS_VXLAN; - else if (!strcmp(res->value, "geneve")) - rss_conf.rss_hf = RTE_ETH_RSS_GENEVE; - else if (!strcmp(res->value, "nvgre")) - rss_conf.rss_hf = RTE_ETH_RSS_NVGRE; - else if (!strcmp(res->value, "l3-pre32")) - rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32; - else if (!strcmp(res->value, "l3-pre40")) - rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE40; - else if (!strcmp(res->value, "l3-pre48")) - rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE48; - else if (!strcmp(res->value, "l3-pre56")) - rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE56; - else if (!strcmp(res->value, "l3-pre64")) - rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE64; - else if (!strcmp(res->value, "l3-pre96")) - rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96; - else if (!strcmp(res->value, "l3-src-only")) - rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY; - else if (!strcmp(res->value, "l3-dst-only")) - rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY; - else if (!strcmp(res->value, "l4-src-only")) - rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY; - else if (!strcmp(res->value, 
"l4-dst-only")) - rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY; - else if (!strcmp(res->value, "l2-src-only")) - rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY; - else if (!strcmp(res->value, "l2-dst-only")) - rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY; - else if (!strcmp(res->value, "l2tpv3")) - rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3; - else if (!strcmp(res->value, "esp")) - rss_conf.rss_hf = RTE_ETH_RSS_ESP; - else if (!strcmp(res->value, "ah")) - rss_conf.rss_hf = RTE_ETH_RSS_AH; - else if (!strcmp(res->value, "pfcp")) - rss_conf.rss_hf = RTE_ETH_RSS_PFCP; - else if (!strcmp(res->value, "pppoe")) - rss_conf.rss_hf = RTE_ETH_RSS_PPPOE; - else if (!strcmp(res->value, "gtpu")) - rss_conf.rss_hf = RTE_ETH_RSS_GTPU; - else if (!strcmp(res->value, "ecpri")) - rss_conf.rss_hf = RTE_ETH_RSS_ECPRI; - else if (!strcmp(res->value, "mpls")) - rss_conf.rss_hf = RTE_ETH_RSS_MPLS; - else if (!strcmp(res->value, "ipv4-chksum")) - rss_conf.rss_hf = RTE_ETH_RSS_IPV4_CHKSUM; - else if (!strcmp(res->value, "none")) - rss_conf.rss_hf = 0; - else if (!strcmp(res->value, "level-default")) { + if (!strcmp(res->value, "level-default")) { rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK); rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT); } else if (!strcmp(res->value, "level-outer")) { @@ -2250,14 +2183,24 @@ cmd_config_rss_parsed(void *parsed_result, } else if (!strcmp(res->value, "level-inner")) { rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK); rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST); - } else if (!strcmp(res->value, "default")) + } else if (!strcmp(res->value, "default")) { use_default = 1; - else if (isdigit(res->value[0]) && atoi(res->value) > 0 && - atoi(res->value) < 64) - rss_conf.rss_hf = 1ULL << atoi(res->value); - else { - fprintf(stderr, "Unknown parameter\n"); - return; + } else if (isdigit(res->value[0])) { + int value = atoi(res->value); + if (value > 0 && value < 64) + rss_conf.rss_hf = 1ULL << (uint8_t)value; + else { + fprintf(stderr, "flowtype_id should be greater than 0 and 
less than 64.\n"); + return; + } + } else if (!strcmp(res->value, "none")) { + rss_conf.rss_hf = 0; + } else { + rss_conf.rss_hf = str_to_rsstypes(res->value); + if (rss_conf.rss_hf == 0) { + fprintf(stderr, "Unknown parameter\n"); + return; + } } rss_conf.rss_key = NULL; /* Update global configuration for RSS types. */ @@ -2308,9 +2251,14 @@ cmdline_parse_inst_t cmd_config_rss = { .f = cmd_config_rss_parsed, .data = NULL, .help_str = "port config all rss " - "all|default|eth|vlan|ip|tcp|udp|sctp|l2-payload|port|vxlan|geneve|" - "nvgre|vxlan-gpe|l2tpv3|esp|ah|pfcp|ecpri|mpls|ipv4-chksum|" - "none|level-default|level-outer|level-inner|<flowtype_id>", + "all|default|level-default|level-outer|level-inner|" + "ip|tcp|udp|sctp|tunnel|vlan|none|" + "ipv4|ipv4-frag|ipv4-tcp|ipv4-udp|ipv4-sctp|ipv4-other|" + "ipv6|ipv6-frag|ipv6-tcp|ipv6-udp|ipv6-sctp|ipv6-other|ipv6-ex|ipv6-tcp-ex|ipv6-udp-ex|" + "l2-payload|port|vxlan|geneve|nvgre|gtpu|eth|s-vlan|c-vlan|" + "esp|ah|l2tpv3|pfcp|pppoe|ecpri|mpls|ipv4-chksum|l4-chksum|" + "l3-pre96|l3-pre64|l3-pre56|l3-pre48|l3-pre40|l3-pre32|" + "l2-dst-only|l2-src-only|l4-dst-only|l4-src-only|l3-dst-only|l3-src-only|<rsstype_id>", .tokens = { (void *)&cmd_config_rss_port, (void *)&cmd_config_rss_keyword, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 2849ee7e7c..b08face76d 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -672,6 +672,19 @@ print_dev_capabilities(uint64_t capabilities) } }
+uint64_t +str_to_rsstypes(const char *str) +{ + uint16_t i; + + for (i = 0; rss_type_table[i].str != NULL; i++) { + if (strcmp(rss_type_table[i].str, str) == 0) + return rss_type_table[i].rss_type; + } + + return 0; +} + const char * rsstypes_to_str(uint64_t rss_type) { @@ -3063,15 +3076,10 @@ port_rss_hash_key_update(portid_t port_id, char rss_type[], uint8_t *hash_key, { struct rte_eth_rss_conf rss_conf; int diag; - unsigned int i;
rss_conf.rss_key = NULL; rss_conf.rss_key_len = 0; - rss_conf.rss_hf = 0; - for (i = 0; rss_type_table[i].str; i++) { - if (!strcmp(rss_type_table[i].str, rss_type)) - rss_conf.rss_hf = rss_type_table[i].rss_type; - } + rss_conf.rss_hf = str_to_rsstypes(rss_type); diag = rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf); if (diag == 0) { rss_conf.rss_key = hash_key; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index d6a775c485..e50188778b 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -1105,6 +1105,7 @@ extern int flow_parse(const char *src, void *result, unsigned int size, struct rte_flow_item **pattern, struct rte_flow_action **actions);
+uint64_t str_to_rsstypes(const char *str); const char *rsstypes_to_str(uint64_t rss_type);
/* diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index b75adcce55..e15dc0c4c4 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -2285,7 +2285,15 @@ port config - RSS
Set the RSS (Receive Side Scaling) mode on or off::
- testpmd> port config all rss (all|default|eth|vlan|ip|tcp|udp|sctp|l2-payload|port|vxlan|geneve|nvgre|vxlan-gpe|l2tpv3|esp|ah|pfcp|ecpri|mpls|none) + testpmd> port config all rss (all|default|level-default|level-outer|level-inner| \ + ip|tcp|udp|sctp|tunnel|vlan|none| \ + ipv4|ipv4-frag|ipv4-tcp|ipv4-udp|ipv4-sctp|ipv4-other| \ + ipv6|ipv6-frag|ipv6-tcp|ipv6-udp|ipv6-sctp| \ + ipv6-other|ipv6-ex|ipv6-tcp-ex|ipv6-udp-ex| \ + l2-payload|port|vxlan|geneve|nvgre|gtpu|eth|s-vlan|c-vlan| \ + esp|ah|l2tpv3|pfcp|pppoe|ecpri|mpls|ipv4-chksum|l4-chksum| \ + l3-pre96|l3-pre64|l3-pre56|l3-pre48|l3-pre40|l3-pre32| \ + l2-dst-only|l2-src-only|l4-dst-only|l4-src-only|l3-dst-only|l3-src-only|<rsstype_id>)
RSS is on by default.
The 'rss_type_table[]' maintains the names and values of the RSS types. This patch introduces a common interface for displaying RSS types.
Signed-off-by: Huisong Li lihuisong@huawei.com Signed-off-by: Ferruh Yigit ferruh.yigit@xilinx.com --- app/test-pmd/config.c | 34 ++++++++++++++++++++-------------- 1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index b08face76d..7b725fc7a1 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1554,6 +1554,23 @@ port_flow_complain(struct rte_flow_error *error) return -err; }
+static void +rss_types_display(uint64_t rss_types) +{ + uint16_t i; + + if (rss_types == 0) + return; + + for (i = 0; rss_type_table[i].str; i++) { + if (rss_type_table[i].rss_type == 0) + continue; + if ((rss_types & rss_type_table[i].rss_type) == + rss_type_table[i].rss_type) + printf(" %s", rss_type_table[i].str); + } +} + static void rss_config_display(struct rte_flow_action_rss *rss_conf) { @@ -1596,13 +1613,7 @@ rss_config_display(struct rte_flow_action_rss *rss_conf) printf(" none\n"); return; } - for (i = 0; rss_type_table[i].str; i++) { - if ((rss_conf->types & - rss_type_table[i].rss_type) == - rss_type_table[i].rss_type && - rss_type_table[i].rss_type != 0) - printf(" %s\n", rss_type_table[i].str); - } + rss_types_display(rss_conf->types); }
static struct port_indirect_action * @@ -3054,13 +3065,8 @@ port_rss_hash_conf_show(portid_t port_id, int show_rss_key) printf("RSS disabled\n"); return; } - printf("RSS functions:\n "); - for (i = 0; rss_type_table[i].str; i++) { - if (rss_type_table[i].rss_type == 0) - continue; - if ((rss_hf & rss_type_table[i].rss_type) == rss_type_table[i].rss_type) - printf("%s ", rss_type_table[i].str); - } + printf("RSS functions:\n"); + rss_types_display(rss_hf); printf("\n"); if (!show_rss_key) return;
From: Ferruh Yigit ferruh.yigit@xilinx.com
In the port info command output, 'show port info all', supported RSS offload types are printed one type per line; although this information is not the most important part of the command, it takes up a big part of the output.
In the port RSS hash and flow RSS command outputs, 'show port 0 rss-hash' and 'flow query 0 0 rss', all enabled RSS types are printed on one line. If there are many types, the printed line becomes very long.
Compact the RSS offload and RSS types output by limiting the number of characters printed per line, instead of printing one type per line or all types on a single line. The output becomes the following:
Supported RSS offload flow types: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l4-dst-only l4-src-only l3-dst-only l3-src-only
Signed-off-by: Ferruh Yigit ferruh.yigit@xilinx.com Signed-off-by: Huisong Li lihuisong@huawei.com --- app/test-pmd/config.c | 68 +++++++++++++++++++++++++++++++----------- app/test-pmd/testpmd.h | 2 ++ 2 files changed, 52 insertions(+), 18 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 7b725fc7a1..873cca6f3e 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -698,6 +698,38 @@ rsstypes_to_str(uint64_t rss_type) return NULL; }
+static void +rss_offload_types_display(uint64_t offload_types, uint16_t char_num_per_line) +{ + uint16_t user_defined_str_len; + uint16_t total_len = 0; + uint16_t str_len = 0; + uint64_t rss_offload; + uint16_t i; + + for (i = 0; i < sizeof(offload_types) * CHAR_BIT; i++) { + rss_offload = RTE_BIT64(i); + if ((offload_types & rss_offload) != 0) { + const char *p = rsstypes_to_str(rss_offload); + + user_defined_str_len = + strlen("user-defined-") + (i / 10 + 1); + str_len = p ? strlen(p) : user_defined_str_len; + str_len += 2; /* add two spaces */ + if (total_len + str_len >= char_num_per_line) { + total_len = 0; + printf("\n"); + } + + if (p) + printf(" %s", p); + else + printf(" user-defined-%u", i); + total_len += str_len; + } + } +} + void port_infos_display(portid_t port_id) { @@ -802,21 +834,10 @@ port_infos_display(portid_t port_id) if (!dev_info.flow_type_rss_offloads) printf("No RSS offload flow type is supported.\n"); else { - uint64_t rss_offload_types = dev_info.flow_type_rss_offloads; - uint16_t i; - printf("Supported RSS offload flow types:\n"); - for (i = 0; i < sizeof(rss_offload_types) * CHAR_BIT; i++) { - uint64_t rss_offload = RTE_BIT64(i); - if ((rss_offload_types & rss_offload) != 0) { - const char *p = rsstypes_to_str(rss_offload); - if (p) - printf(" %s\n", p); - else - printf(" user defined %u\n", - i); - } - } + rss_offload_types_display(dev_info.flow_type_rss_offloads, + TESTPMD_RSS_TYPES_CHAR_NUM_PER_LINE); + printf("\n"); }
printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize); @@ -1555,8 +1576,10 @@ port_flow_complain(struct rte_flow_error *error) }
static void -rss_types_display(uint64_t rss_types) +rss_types_display(uint64_t rss_types, uint16_t char_num_per_line) { + uint16_t total_len = 0; + uint16_t str_len; uint16_t i;
if (rss_types == 0) @@ -1565,9 +1588,18 @@ rss_types_display(uint64_t rss_types) for (i = 0; rss_type_table[i].str; i++) { if (rss_type_table[i].rss_type == 0) continue; + if ((rss_types & rss_type_table[i].rss_type) == - rss_type_table[i].rss_type) + rss_type_table[i].rss_type) { + /* Contain two spaces */ + str_len = strlen(rss_type_table[i].str) + 2; + if (total_len + str_len > char_num_per_line) { + printf("\n"); + total_len = 0; + } printf(" %s", rss_type_table[i].str); + total_len += str_len; + } } }
@@ -1613,7 +1645,7 @@ rss_config_display(struct rte_flow_action_rss *rss_conf) printf(" none\n"); return; } - rss_types_display(rss_conf->types); + rss_types_display(rss_conf->types, TESTPMD_RSS_TYPES_CHAR_NUM_PER_LINE); }
static struct port_indirect_action * @@ -3066,7 +3098,7 @@ port_rss_hash_conf_show(portid_t port_id, int show_rss_key) return; } printf("RSS functions:\n"); - rss_types_display(rss_hf); + rss_types_display(rss_hf, TESTPMD_RSS_TYPES_CHAR_NUM_PER_LINE); printf("\n"); if (!show_rss_key) return; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index e50188778b..9c3a5d9bc5 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -105,6 +105,8 @@ struct pkt_burst_stats { unsigned int pkt_burst_spread[MAX_PKT_BURST + 1]; };
+ +#define TESTPMD_RSS_TYPES_CHAR_NUM_PER_LINE 64 /** Information for a given RSS type. */ struct rss_type_info { const char *str; /**< Type name. */
There are group and individual types in 'rss_type_table[]'. However, the group types are scattered, and the individual types are not arranged in the bit order of 'RTE_ETH_RSS_xxx'. For a clear separation of the two kinds of types and better maintainability, this patch reorders the table.
Signed-off-by: Huisong Li lihuisong@huawei.com Acked-by: Ferruh Yigit ferruh.yigit@xilinx.com --- app/test-pmd/config.c | 47 +++++++++++++++++++++++-------------------- 1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 873cca6f3e..f8cd135970 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -85,17 +85,20 @@ static const struct { };
const struct rss_type_info rss_type_table[] = { + /* Group types */ { "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS}, { "none", 0 }, - { "eth", RTE_ETH_RSS_ETH }, - { "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY }, - { "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY }, + { "ip", RTE_ETH_RSS_IP }, + { "udp", RTE_ETH_RSS_UDP }, + { "tcp", RTE_ETH_RSS_TCP }, + { "sctp", RTE_ETH_RSS_SCTP }, + { "tunnel", RTE_ETH_RSS_TUNNEL }, { "vlan", RTE_ETH_RSS_VLAN }, - { "s-vlan", RTE_ETH_RSS_S_VLAN }, - { "c-vlan", RTE_ETH_RSS_C_VLAN }, + + /* Individual type */ { "ipv4", RTE_ETH_RSS_IPV4 }, { "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 }, { "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP }, @@ -116,32 +119,32 @@ const struct rss_type_info rss_type_table[] = { { "vxlan", RTE_ETH_RSS_VXLAN }, { "geneve", RTE_ETH_RSS_GENEVE }, { "nvgre", RTE_ETH_RSS_NVGRE }, - { "ip", RTE_ETH_RSS_IP }, - { "udp", RTE_ETH_RSS_UDP }, - { "tcp", RTE_ETH_RSS_TCP }, - { "sctp", RTE_ETH_RSS_SCTP }, - { "tunnel", RTE_ETH_RSS_TUNNEL }, - { "l3-pre32", RTE_ETH_RSS_L3_PRE32 }, - { "l3-pre40", RTE_ETH_RSS_L3_PRE40 }, - { "l3-pre48", RTE_ETH_RSS_L3_PRE48 }, - { "l3-pre56", RTE_ETH_RSS_L3_PRE56 }, - { "l3-pre64", RTE_ETH_RSS_L3_PRE64 }, - { "l3-pre96", RTE_ETH_RSS_L3_PRE96 }, - { "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY }, - { "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY }, - { "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY }, - { "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY }, + { "gtpu", RTE_ETH_RSS_GTPU }, + { "eth", RTE_ETH_RSS_ETH }, + { "s-vlan", RTE_ETH_RSS_S_VLAN }, + { "c-vlan", RTE_ETH_RSS_C_VLAN }, { "esp", RTE_ETH_RSS_ESP }, { "ah", RTE_ETH_RSS_AH }, { "l2tpv3", RTE_ETH_RSS_L2TPV3 }, { "pfcp", RTE_ETH_RSS_PFCP }, { "pppoe", RTE_ETH_RSS_PPPOE }, - { "gtpu", RTE_ETH_RSS_GTPU }, { "ecpri", RTE_ETH_RSS_ECPRI }, { "mpls", 
RTE_ETH_RSS_MPLS }, { "ipv4-chksum", RTE_ETH_RSS_IPV4_CHKSUM }, { "l4-chksum", RTE_ETH_RSS_L4_CHKSUM }, - { NULL, 0 }, + { "l3-pre96", RTE_ETH_RSS_L3_PRE96 }, + { "l3-pre64", RTE_ETH_RSS_L3_PRE64 }, + { "l3-pre56", RTE_ETH_RSS_L3_PRE56 }, + { "l3-pre48", RTE_ETH_RSS_L3_PRE48 }, + { "l3-pre40", RTE_ETH_RSS_L3_PRE40 }, + { "l3-pre32", RTE_ETH_RSS_L3_PRE32 }, + { "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY }, + { "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY }, + { "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY }, + { "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY }, + { "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY }, + { "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY }, + { NULL, 0}, };
static const struct {
Currently, testpmd fails to display the types when querying an RSS rule. The failure occurs because the '\n' character is missing at the end of 'rss_config_display()'. Actually, all callers of the 'xxx_types_display()' helpers need the trailing '\n', so this patch moves the '\n' inside these functions.
Bugzilla ID: 1048 Fixes: 534988c490f1 ("app/testpmd: unify RSS types display") Fixes: 44a37f3cffe0 ("app/testpmd: compact RSS types output")
Signed-off-by: Huisong Li lihuisong@huawei.com Tested-by: Weiyuan Li weiyuanx.li@intel.com Reviewed-by: Ferruh Yigit ferruh.yigit@xilinx.com --- app/test-pmd/config.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index f8cd135970..12386c4d82 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -731,6 +731,7 @@ rss_offload_types_display(uint64_t offload_types, uint16_t char_num_per_line) total_len += str_len; } } + printf("\n"); }
void @@ -840,7 +841,6 @@ port_infos_display(portid_t port_id) printf("Supported RSS offload flow types:\n"); rss_offload_types_display(dev_info.flow_type_rss_offloads, TESTPMD_RSS_TYPES_CHAR_NUM_PER_LINE); - printf("\n"); }
printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize); @@ -1604,6 +1604,7 @@ rss_types_display(uint64_t rss_types, uint16_t char_num_per_line) total_len += str_len; } } + printf("\n"); }
static void @@ -3102,7 +3103,6 @@ port_rss_hash_conf_show(portid_t port_id, int show_rss_key) } printf("RSS functions:\n"); rss_types_display(rss_hf, TESTPMD_RSS_TYPES_CHAR_NUM_PER_LINE); - printf("\n"); if (!show_rss_key) return; printf("RSS key:\n");
From: Chengwen Feng fengchengwen@huawei.com
This patch adds a telemetry command to dump the private information of an ethdev port.
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Acked-by: Morten Brørup mb@smartsharesystems.com --- lib/ethdev/rte_ethdev.c | 47 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 47 insertions(+)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index b95f501b51..df5a627cbe 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -7,6 +7,7 @@ #include <inttypes.h> #include <stdbool.h> #include <stdint.h> +#include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/queue.h> @@ -6272,6 +6273,48 @@ eth_dev_handle_port_xstats(const char *cmd __rte_unused, return 0; }
+#ifndef RTE_EXEC_ENV_WINDOWS +static int +eth_dev_handle_port_dump_priv(const char *cmd __rte_unused, + const char *params, + struct rte_tel_data *d) +{ + char *buf, *end_param; + int port_id, ret; + FILE *f; + + if (params == NULL || strlen(params) == 0 || !isdigit(*params)) + return -EINVAL; + + port_id = strtoul(params, &end_param, 0); + if (*end_param != '\0') + RTE_ETHDEV_LOG(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); + if (!rte_eth_dev_is_valid_port(port_id)) + return -EINVAL; + + buf = calloc(sizeof(char), RTE_TEL_MAX_SINGLE_STRING_LEN); + if (buf == NULL) + return -ENOMEM; + + f = fmemopen(buf, RTE_TEL_MAX_SINGLE_STRING_LEN - 1, "w+"); + if (f == NULL) { + free(buf); + return -EINVAL; + } + + ret = rte_eth_dev_priv_dump(port_id, f); + fclose(f); + if (ret == 0) { + rte_tel_data_start_dict(d); + rte_tel_data_string(d, buf); + } + + free(buf); + return 0; +} +#endif /* !RTE_EXEC_ENV_WINDOWS */ + static int eth_dev_handle_port_link_status(const char *cmd __rte_unused, const char *params, @@ -6571,6 +6614,10 @@ RTE_INIT(ethdev_init_telemetry) "Returns the common stats for a port. Parameters: int port_id"); rte_telemetry_register_cmd("/ethdev/xstats", eth_dev_handle_port_xstats, "Returns the extended stats for a port. Parameters: int port_id"); +#ifndef RTE_EXEC_ENV_WINDOWS + rte_telemetry_register_cmd("/ethdev/dump_priv", eth_dev_handle_port_dump_priv, + "Returns dump private information for a port. Parameters: int port_id"); +#endif rte_telemetry_register_cmd("/ethdev/link_status", eth_dev_handle_port_link_status, "Returns the link status for a port. Parameters: int port_id");
From: Sean Morrissey sean.morrissey@intel.com
Telemetry commands are now registered through the dmadev library for gathering DSA stats. The corresponding callback functions for listing dmadevs and for providing info and stats for a particular dmadev are implemented in the library.
An example usage can be seen below:

Connecting to /var/run/dpdk/rte/dpdk_telemetry.v2
{"version": "DPDK 22.03.0-rc2", "pid": 2956551, "max_output_len": 16384}
Connected to application: "dpdk-dma"
--> /
{"/": ["/", "/dmadev/info", "/dmadev/list", "/dmadev/stats", ...]}
--> /dmadev/list
{"/dmadev/list": [0, 1]}
--> /dmadev/info,0
{"/dmadev/info": {"name": "0000:00:01.0", "nb_vchans": 1, "numa_node": 0, "max_vchans": 1, "max_desc": 4096, "min_desc": 32, "max_sges": 0, "capabilities": {"mem2mem": 1, "mem2dev": 0, "dev2mem": 0, ...}}}
--> /dmadev/stats,0,0
{"/dmadev/stats": {"submitted": 0, "completed": 0, "errors": 0}}
Signed-off-by: Sean Morrissey sean.morrissey@intel.com Reviewed-by: Bruce Richardson bruce.richardson@intel.com Reviewed-by: Conor Walsh conor.walsh@intel.com Tested-by: Sunil Pai G sunil.pai.g@intel.com Tested-by: Kevin Laatz kevin.laatz@intel.com Acked-by: Chengwen Feng fengchengwen@huawei.com --- doc/guides/prog_guide/dmadev.rst | 24 ++++++ lib/dmadev/meson.build | 2 + lib/dmadev/rte_dmadev.c | 130 +++++++++++++++++++++++++++++++ 3 files changed, 156 insertions(+)
diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst index 77863f8028..2aa26d33b8 100644 --- a/doc/guides/prog_guide/dmadev.rst +++ b/doc/guides/prog_guide/dmadev.rst @@ -118,3 +118,27 @@ i.e. ``rte_dma_stats_get()``. The statistics returned for each device instance a * ``submitted``: The number of operations submitted to the device. * ``completed``: The number of operations which have completed (successful and failed). * ``errors``: The number of operations that completed with error. + +The dmadev library has support for displaying DMA device information +through the Telemetry interface. Telemetry commands that can be used +are shown below. + +#. Get the list of available DMA devices by ID:: + + --> /dmadev/list + {"/dmadev/list": [0, 1]} + +#. Get general information from a DMA device by passing the device id as a parameter:: + + --> /dmadev/info,0 + {"/dmadev/info": {"name": "0000:00:01.0", "nb_vchans": 1, "numa_node": 0, "max_vchans": 1, "max_desc": 4096, + "min_desc": 32, "max_sges": 0, "capabilities": {"mem2mem": 1, "mem2dev": 0, "dev2mem": 0, ...}}} + +#. Get the statistics for a particular DMA device and virtual DMA channel by passing the device id and vchan id as parameters + (if a DMA device only has one virtual DMA channel you only need to pass the device id):: + + --> /dmadev/stats,0,0 + {"/dmadev/stats": {"submitted": 0, "completed": 0, "errors": 0}} + +For more information on how to use the Telemetry interface, see +the :doc:`../howto/telemetry`. 
diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build index d2fc85e8c7..2f17587b75 100644 --- a/lib/dmadev/meson.build +++ b/lib/dmadev/meson.build @@ -5,3 +5,5 @@ sources = files('rte_dmadev.c') headers = files('rte_dmadev.h') indirect_headers += files('rte_dmadev_core.h') driver_sdk_headers += files('rte_dmadev_pmd.h') + +deps += ['telemetry'] diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index d4b32b2971..174d4c40ae 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -11,6 +11,7 @@ #include <rte_malloc.h> #include <rte_memzone.h> #include <rte_string_fns.h> +#include <rte_telemetry.h>
#include "rte_dmadev.h" #include "rte_dmadev_pmd.h" @@ -864,3 +865,132 @@ dma_fp_object_dummy(struct rte_dma_fp_object *obj) obj->completed_status = dummy_completed_status; obj->burst_capacity = dummy_burst_capacity; } + +static int +dmadev_handle_dev_list(const char *cmd __rte_unused, + const char *params __rte_unused, + struct rte_tel_data *d) +{ + int dev_id; + + rte_tel_data_start_array(d, RTE_TEL_INT_VAL); + for (dev_id = 0; dev_id < dma_devices_max; dev_id++) + if (rte_dma_is_valid(dev_id)) + rte_tel_data_add_array_int(d, dev_id); + + return 0; +} + +#define ADD_CAPA(td, dc, c) rte_tel_data_add_dict_int(td, dma_capability_name(c), !!(dc & c)) + +static int +dmadev_handle_dev_info(const char *cmd __rte_unused, + const char *params, struct rte_tel_data *d) +{ + struct rte_dma_info dma_info; + struct rte_tel_data *dma_caps; + int dev_id, ret; + uint64_t dev_capa; + char *end_param; + + if (params == NULL || strlen(params) == 0 || !isdigit(*params)) + return -EINVAL; + + dev_id = strtoul(params, &end_param, 0); + if (*end_param != '\0') + RTE_DMA_LOG(WARNING, "Extra parameters passed to dmadev telemetry command, ignoring"); + + /* Function info_get validates dev_id so we don't need to. 
*/ + ret = rte_dma_info_get(dev_id, &dma_info); + if (ret < 0) + return -EINVAL; + dev_capa = dma_info.dev_capa; + + rte_tel_data_start_dict(d); + rte_tel_data_add_dict_string(d, "name", dma_info.dev_name); + rte_tel_data_add_dict_int(d, "nb_vchans", dma_info.nb_vchans); + rte_tel_data_add_dict_int(d, "numa_node", dma_info.numa_node); + rte_tel_data_add_dict_int(d, "max_vchans", dma_info.max_vchans); + rte_tel_data_add_dict_int(d, "max_desc", dma_info.max_desc); + rte_tel_data_add_dict_int(d, "min_desc", dma_info.min_desc); + rte_tel_data_add_dict_int(d, "max_sges", dma_info.max_sges); + + dma_caps = rte_tel_data_alloc(); + if (!dma_caps) + return -ENOMEM; + + rte_tel_data_start_dict(dma_caps); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_MEM_TO_MEM); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_MEM_TO_DEV); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_DEV_TO_MEM); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_DEV_TO_DEV); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_SVA); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_SILENT); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_HANDLES_ERRORS); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_OPS_COPY); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_OPS_COPY_SG); + ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_OPS_FILL); + rte_tel_data_add_dict_container(d, "capabilities", dma_caps, 0); + + return 0; +} + +#define ADD_DICT_STAT(s) rte_tel_data_add_dict_u64(d, #s, dma_stats.s) + +static int +dmadev_handle_dev_stats(const char *cmd __rte_unused, + const char *params, + struct rte_tel_data *d) +{ + struct rte_dma_info dma_info; + struct rte_dma_stats dma_stats; + int dev_id, ret, vchan_id; + char *end_param; + const char *vchan_param; + + if (params == NULL || strlen(params) == 0 || !isdigit(*params)) + return -EINVAL; + + dev_id = strtoul(params, &end_param, 0); + + /* Function info_get validates dev_id so we don't need to. 
*/ + ret = rte_dma_info_get(dev_id, &dma_info); + if (ret < 0) + return -EINVAL; + + /* If the device has one vchan the user does not need to supply the + * vchan id and only the device id is needed, no extra parameters. + */ + if (dma_info.nb_vchans == 1 && *end_param == '\0') + vchan_id = 0; + else { + vchan_param = strtok(end_param, ","); + if (!vchan_param || strlen(vchan_param) == 0 || !isdigit(*vchan_param)) + return -EINVAL; + + vchan_id = strtoul(vchan_param, &end_param, 0); + } + if (*end_param != '\0') + RTE_DMA_LOG(WARNING, "Extra parameters passed to dmadev telemetry command, ignoring"); + + ret = rte_dma_stats_get(dev_id, vchan_id, &dma_stats); + if (ret < 0) + return -EINVAL; + + rte_tel_data_start_dict(d); + ADD_DICT_STAT(submitted); + ADD_DICT_STAT(completed); + ADD_DICT_STAT(errors); + + return 0; +} + +RTE_INIT(dmadev_init_telemetry) +{ + rte_telemetry_register_cmd("/dmadev/list", dmadev_handle_dev_list, + "Returns list of available dmadev devices by IDs. No parameters."); + rte_telemetry_register_cmd("/dmadev/info", dmadev_handle_dev_info, + "Returns information for a dmadev. Parameters: int dev_id"); + rte_telemetry_register_cmd("/dmadev/stats", dmadev_handle_dev_stats, + "Returns the stats for a dmadev vchannel. Parameters: int dev_id, vchan_id (Optional if only one vchannel)"); +}
From: Chengwen Feng fengchengwen@huawei.com
This patch adds support for dumping dmadev information via telemetry.
Signed-off-by: Chengwen Feng fengchengwen@huawei.com Acked-by: Morten Brørup mb@smartsharesystems.com --- lib/dmadev/rte_dmadev.c | 43 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 43 insertions(+)
diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 174d4c40ae..ea1cb815b4 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -985,6 +985,45 @@ dmadev_handle_dev_stats(const char *cmd __rte_unused, return 0; }
+#ifndef RTE_EXEC_ENV_WINDOWS +static int +dmadev_handle_dev_dump(const char *cmd __rte_unused, + const char *params, + struct rte_tel_data *d) +{ + char *buf, *end_param; + int dev_id, ret; + FILE *f; + + if (params == NULL || strlen(params) == 0 || !isdigit(*params)) + return -EINVAL; + + dev_id = strtoul(params, &end_param, 0); + if (*end_param != '\0') + RTE_DMA_LOG(WARNING, "Extra parameters passed to dmadev telemetry command, ignoring"); + + buf = calloc(sizeof(char), RTE_TEL_MAX_SINGLE_STRING_LEN); + if (buf == NULL) + return -ENOMEM; + + f = fmemopen(buf, RTE_TEL_MAX_SINGLE_STRING_LEN - 1, "w+"); + if (f == NULL) { + free(buf); + return -EINVAL; + } + + ret = rte_dma_dump(dev_id, f); + fclose(f); + if (ret == 0) { + rte_tel_data_start_dict(d); + rte_tel_data_string(d, buf); + } + + free(buf); + return ret; +} +#endif /* !RTE_EXEC_ENV_WINDOWS */ + RTE_INIT(dmadev_init_telemetry) { rte_telemetry_register_cmd("/dmadev/list", dmadev_handle_dev_list, @@ -993,4 +1032,8 @@ RTE_INIT(dmadev_init_telemetry) "Returns information for a dmadev. Parameters: int dev_id"); rte_telemetry_register_cmd("/dmadev/stats", dmadev_handle_dev_stats, "Returns the stats for a dmadev vchannel. Parameters: int dev_id, vchan_id (Optional if only one vchannel)"); +#ifndef RTE_EXEC_ENV_WINDOWS + rte_telemetry_register_cmd("/dmadev/dump", dmadev_handle_dev_dump, + "Returns dump information for a dmadev. Parameters: int dev_id"); +#endif }
From: Brian Dooley brian.dooley@intel.com
Some public header files were missing 'extern "C"' C++ guards, and couldn't be used by C++ applications. Add the missing guards.
Fixes: 8877ac688b52 ("telemetry: introduce infrastructure") Cc: stable@dpdk.org
Signed-off-by: Brian Dooley brian.dooley@intel.com Acked-by: Bruce Richardson bruce.richardson@intel.com Acked-by: Tyler Retzlaff roretzla@linux.microsoft.com --- lib/telemetry/rte_telemetry.h | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/lib/telemetry/rte_telemetry.h b/lib/telemetry/rte_telemetry.h index 7bca8a9a49..3372b32f38 100644 --- a/lib/telemetry/rte_telemetry.h +++ b/lib/telemetry/rte_telemetry.h @@ -9,6 +9,10 @@ #ifndef _RTE_TELEMETRY_H_ #define _RTE_TELEMETRY_H_
+#ifdef __cplusplus +extern "C" { +#endif + /** Maximum length for string used in object. */ #define RTE_TEL_MAX_STRING_LEN 128 /** Maximum length of string. */ @@ -294,4 +298,8 @@ rte_tel_data_alloc(void); void rte_tel_data_free(struct rte_tel_data *data);
+#ifdef __cplusplus +} +#endif + #endif
From: Brian Dooley brian.dooley@intel.com
Some public header files were missing 'extern "C"' C++ guards, and couldn't be used by C++ applications. Add the missing guards.
Fixes: 8877ac688b52 ("telemetry: introduce infrastructure") Cc: stable@dpdk.org
Signed-off-by: Brian Dooley brian.dooley@intel.com Acked-by: Bruce Richardson bruce.richardson@intel.com Acked-by: Tyler Retzlaff roretzla@linux.microsoft.com --- lib/telemetry/rte_telemetry.h | 8 ++++++++ lib/telemetry/telemetry_data.c | 31 +++++++++++++++++++++++++++++++ 2 files changed, 39 insertions(+)
diff --git a/lib/telemetry/rte_telemetry.h b/lib/telemetry/rte_telemetry.h index 3372b32f38..fadea48cb9 100644 --- a/lib/telemetry/rte_telemetry.h +++ b/lib/telemetry/rte_telemetry.h @@ -64,6 +64,10 @@ rte_tel_data_start_array(struct rte_tel_data *d, enum rte_tel_value_type type); /** * Start a dictionary of values for returning from a callback * + * Dictionaries consist of key-values pairs to be returned, where the keys, + * or names, are strings and the values can be any of the types supported by telemetry. + * Name strings may only contain alphanumeric characters as well as '_' or '/' + * * @param d * The data structure passed to the callback * @return @@ -159,6 +163,7 @@ rte_tel_data_add_array_container(struct rte_tel_data *d, * The data structure passed to the callback * @param name * The name the value is to be stored under in the dict + * Must contain only alphanumeric characters or the symbols: '_' or '/' * @param val * The string to be stored in the dict * @return @@ -177,6 +182,7 @@ rte_tel_data_add_dict_string(struct rte_tel_data *d, const char *name, * The data structure passed to the callback * @param name * The name the value is to be stored under in the dict + * Must contain only alphanumeric characters or the symbols: '_' or '/' * @param val * The number to be stored in the dict * @return @@ -193,6 +199,7 @@ rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int val); * The data structure passed to the callback * @param name * The name the value is to be stored under in the dict + * Must contain only alphanumeric characters or the symbols: '_' or '/' * @param val * The number to be stored in the dict * @return @@ -212,6 +219,7 @@ rte_tel_data_add_dict_u64(struct rte_tel_data *d, * The data structure passed to the callback * @param name * The name the value is to be stored under in the dict. + * Must contain only alphanumeric characters or the symbols: '_' or '/' * @param val * The pointer to the container to be stored in the dict. 
* @param keep diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c index e14ae3c4d4..be46054c29 100644 --- a/lib/telemetry/telemetry_data.c +++ b/lib/telemetry/telemetry_data.c @@ -3,6 +3,7 @@ */
#undef RTE_USE_LIBBSD +#include <stdbool.h> #include <rte_string_fns.h>
#include "telemetry_data.h" @@ -92,6 +93,24 @@ rte_tel_data_add_array_container(struct rte_tel_data *d, return 0; }
+static bool +valid_name(const char *name) +{ + char allowed[128] = { + ['0' ... '9'] = 1, + ['A' ... 'Z'] = 1, + ['a' ... 'z'] = 1, + ['_'] = 1, + ['/'] = 1, + }; + while (*name != '\0') { + if ((size_t)*name >= RTE_DIM(allowed) || allowed[(int)*name] == 0) + return false; + name++; + } + return true; +} + int rte_tel_data_add_dict_string(struct rte_tel_data *d, const char *name, const char *val) @@ -104,6 +123,9 @@ rte_tel_data_add_dict_string(struct rte_tel_data *d, const char *name, if (d->data_len >= RTE_TEL_MAX_DICT_ENTRIES) return -ENOSPC;
+ if (!valid_name(name)) + return -EINVAL; + d->data_len++; e->type = RTE_TEL_STRING_VAL; vbytes = strlcpy(e->value.sval, val, RTE_TEL_MAX_STRING_LEN); @@ -123,6 +145,9 @@ rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int val) if (d->data_len >= RTE_TEL_MAX_DICT_ENTRIES) return -ENOSPC;
+ if (!valid_name(name)) + return -EINVAL; + d->data_len++; e->type = RTE_TEL_INT_VAL; e->value.ival = val; @@ -140,6 +165,9 @@ rte_tel_data_add_dict_u64(struct rte_tel_data *d, if (d->data_len >= RTE_TEL_MAX_DICT_ENTRIES) return -ENOSPC;
+ if (!valid_name(name)) + return -EINVAL; + d->data_len++; e->type = RTE_TEL_U64_VAL; e->value.u64val = val; @@ -161,6 +189,9 @@ rte_tel_data_add_dict_container(struct rte_tel_data *d, const char *name, if (d->data_len >= RTE_TEL_MAX_DICT_ENTRIES) return -ENOSPC;
+ if (!valid_name(name)) + return -EINVAL; + d->data_len++; e->type = RTE_TEL_CONTAINER; e->value.container.data = val;
From: Bruce Richardson bruce.richardson@intel.com
For string values returned from telemetry, escape any values that cannot normally appear in a json string. According to the json spec[1], the characters that need to be handled are control chars (char value < 0x20) and the '"' and '\' characters.
To handle this, we replace the snprintf call with a separate string copying and encapsulation routine which checks each character as it copies it to the final array.
[1] https://www.rfc-editor.org/rfc/rfc8259.txt
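The escaping rules described above can be sketched as a standalone routine. This is a simplified illustration (hypothetical function name, not the patch's __json_format_str), quoting the input, escaping '"' and '\', mapping \n, \r, \t to two-character escapes, and dropping other control characters:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static const char esc_map[0x20] = { ['\n'] = 'n', ['\r'] = 'r', ['\t'] = 't' };

/* Copy src into dst as a quoted JSON string, escaping as per RFC 8259.
 * Returns the number of characters written, or 0 if dst is too small.
 */
static size_t
json_quote(char *dst, size_t len, const char *src)
{
	size_t i = 0;

	if (len < 3)
		return 0;
	dst[i++] = '"';
	for (; *src != '\0'; src++) {
		unsigned char c = (unsigned char)*src;
		if (c < 0x20) {
			if (esc_map[c] == 0)
				continue; /* unsupported control char: drop it */
			dst[i++] = '\\';
			dst[i++] = esc_map[c];
		} else if (c == '"' || c == '\\') {
			dst[i++] = '\\';
			dst[i++] = c;
		} else {
			dst[i++] = c;
		}
		/* keep room for one more escape pair, closing quote and NUL */
		if (i > len - 3)
			return 0;
	}
	dst[i++] = '"';
	dst[i] = '\0';
	return i;
}
```

The actual patch writes into a temporary VLA and copies out only on success, so a truncated result never reaches the caller; the bounds check per character serves the same purpose here.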
Bugzilla ID: 1037 Fixes: 6dd571fd07c3 ("telemetry: introduce new functionality")
Signed-off-by: Bruce Richardson bruce.richardson@intel.com Acked-by: Ciara Power ciara.power@intel.com Acked-by: Morten Brørup mb@smartsharesystems.com Acked-by: Chengwen Feng fengchengwen@huawei.com --- lib/telemetry/telemetry.c | 11 +++++--- lib/telemetry/telemetry_json.h | 48 +++++++++++++++++++++++++++++++++- 2 files changed, 55 insertions(+), 4 deletions(-)
diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index e5ccfe47f7..d4a7838ded 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -233,9 +233,14 @@ output_json(const char *cmd, const struct rte_tel_data *d, int s) MAX_CMD_LEN, cmd ? cmd : "none"); break; case RTE_TEL_STRING: - used = snprintf(out_buf, sizeof(out_buf), "{\"%.*s\":\"%.*s\"}", - MAX_CMD_LEN, cmd, - RTE_TEL_MAX_SINGLE_STRING_LEN, d->data.str); + prefix_used = snprintf(out_buf, sizeof(out_buf), "{\"%.*s\":", + MAX_CMD_LEN, cmd); + cb_data_buf = &out_buf[prefix_used]; + buf_len = sizeof(out_buf) - prefix_used - 1; /* space for '}' */ + + used = rte_tel_json_str(cb_data_buf, buf_len, 0, d->data.str); + used += prefix_used; + used += strlcat(out_buf + used, "}", sizeof(out_buf) - used); break; case RTE_TEL_DICT: prefix_used = snprintf(out_buf, sizeof(out_buf), "{\"%.*s\":", diff --git a/lib/telemetry/telemetry_json.h b/lib/telemetry/telemetry_json.h index db70690274..13df5d07e3 100644 --- a/lib/telemetry/telemetry_json.h +++ b/lib/telemetry/telemetry_json.h @@ -44,6 +44,52 @@ __json_snprintf(char *buf, const int len, const char *format, ...) return 0; /* nothing written or modified */ }
+static const char control_chars[0x20] = { + ['\n'] = 'n', + ['\r'] = 'r', + ['\t'] = 't', +}; + +/** + * @internal + * Does the same as __json_snprintf(buf, len, "\"%s\"", str) + * except that it does proper escaping as necessary. + * Drops any invalid characters we don't support + */ +static inline int +__json_format_str(char *buf, const int len, const char *str) +{ + char tmp[len]; + int tmpidx = 0; + + tmp[tmpidx++] = '"'; + while (*str != '\0') { + if (*str < (int)RTE_DIM(control_chars)) { + int idx = *str; /* compilers don't like char type as index */ + if (control_chars[idx] != 0) { + tmp[tmpidx++] = '\\'; + tmp[tmpidx++] = control_chars[idx]; + } + } else if (*str == '"' || *str == '\\') { + tmp[tmpidx++] = '\\'; + tmp[tmpidx++] = *str; + } else + tmp[tmpidx++] = *str; + /* we always need space for closing quote and null character. + * Ensuring at least two free characters also means we can always take an + * escaped character like "\n" without overflowing + */ + if (tmpidx > len - 2) + return 0; + str++; + } + tmp[tmpidx++] = '"'; + tmp[tmpidx] = '\0'; + + strcpy(buf, tmp); + return tmpidx; +} + /* Copies an empty array into the provided buffer. */ static inline int rte_tel_json_empty_array(char *buf, const int len, const int used) @@ -62,7 +108,7 @@ rte_tel_json_empty_obj(char *buf, const int len, const int used) static inline int rte_tel_json_str(char *buf, const int len, const int used, const char *str) { - return used + __json_snprintf(buf + used, len - used, "\"%s\"", str); + return used + __json_format_str(buf + used, len - used, str); }
/* Appends a string into the JSON array in the provided buffer. */
From: Bruce Richardson bruce.richardson@intel.com
When strings are added to an array variable, we need to properly escape the invalid json characters in the strings.
Signed-off-by: Bruce Richardson bruce.richardson@intel.com Acked-by: Ciara Power ciara.power@intel.com Acked-by: Morten Brørup mb@smartsharesystems.com Acked-by: Chengwen Feng fengchengwen@huawei.com --- lib/telemetry/telemetry_json.h | 28 +++++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/lib/telemetry/telemetry_json.h b/lib/telemetry/telemetry_json.h index 13df5d07e3..c4442a0bf0 100644 --- a/lib/telemetry/telemetry_json.h +++ b/lib/telemetry/telemetry_json.h @@ -52,17 +52,22 @@ static const char control_chars[0x20] = {
/** * @internal - * Does the same as __json_snprintf(buf, len, "\"%s\"", str) - * except that it does proper escaping as necessary. + * This function acts the same as __json_snprintf(buf, len, "%s%s%s", prefix, str, suffix) + * except that it does proper escaping of "str" as necessary. Prefix and suffix should be compile- + * time constants not needing escaping. * Drops any invalid characters we don't support */ static inline int -__json_format_str(char *buf, const int len, const char *str) +__json_format_str(char *buf, const int len, const char *prefix, const char *str, const char *suffix) { char tmp[len]; int tmpidx = 0;
- tmp[tmpidx++] = '"'; + while (*prefix != '\0' && tmpidx < len) + tmp[tmpidx++] = *prefix++; + if (tmpidx >= len) + return 0; + while (*str != '\0') { if (*str < (int)RTE_DIM(control_chars)) { int idx = *str; /* compilers don't like char type as index */ @@ -75,7 +80,7 @@ __json_format_str(char *buf, const int len, const char *str) tmp[tmpidx++] = *str; } else tmp[tmpidx++] = *str; - /* we always need space for closing quote and null character. + /* we always need space for (at minimum) closing quote and null character. * Ensuring at least two free characters also means we can always take an * escaped character like "\n" without overflowing */ @@ -83,7 +88,12 @@ __json_format_str(char *buf, const int len, const char *str) return 0; str++; } - tmp[tmpidx++] = '"'; + + while (*suffix != '\0' && tmpidx < len) + tmp[tmpidx++] = *suffix++; + if (tmpidx >= len) + return 0; + tmp[tmpidx] = '\0';
strcpy(buf, tmp); @@ -108,7 +118,7 @@ rte_tel_json_empty_obj(char *buf, const int len, const int used) static inline int rte_tel_json_str(char *buf, const int len, const int used, const char *str) { - return used + __json_format_str(buf + used, len - used, str); + return used + __json_format_str(buf + used, len - used, """, str, """); }
/* Appends a string into the JSON array in the provided buffer. */ @@ -118,9 +128,9 @@ rte_tel_json_add_array_string(char *buf, const int len, const int used, { int ret, end = used - 1; /* strip off final delimiter */ if (used <= 2) /* assume empty, since minimum is '[]' */ - return __json_snprintf(buf, len, "[\"%s\"]", str); + return __json_format_str(buf, len, "[\"", str, "\"]");
- ret = __json_snprintf(buf + end, len - end, ",\"%s\"]", str); + ret = __json_format_str(buf + end, len - end, ",\"", str, "\"]"); return ret == 0 ? used : end + ret; }
From: Bruce Richardson bruce.richardson@intel.com
When strings are added to a dict variable, we need to properly escape the invalid json characters in the strings.
Signed-off-by: Bruce Richardson bruce.richardson@intel.com Acked-by: Ciara Power ciara.power@intel.com Acked-by: Morten Brørup mb@smartsharesystems.com Acked-by: Chengwen Feng fengchengwen@huawei.com --- lib/telemetry/telemetry_json.h | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/lib/telemetry/telemetry_json.h b/lib/telemetry/telemetry_json.h index c4442a0bf0..e3fae7c30d 100644 --- a/lib/telemetry/telemetry_json.h +++ b/lib/telemetry/telemetry_json.h @@ -54,7 +54,7 @@ static const char control_chars[0x20] = { * @internal * This function acts the same as __json_snprintf(buf, len, "%s%s%s", prefix, str, suffix) * except that it does proper escaping of "str" as necessary. Prefix and suffix should be compile- - * time constants not needing escaping. + * time constants, or values not needing escaping. * Drops any invalid characters we don't support */ static inline int @@ -219,12 +219,16 @@ static inline int rte_tel_json_add_obj_str(char *buf, const int len, const int used, const char *name, const char *val) { + char tmp_name[RTE_TEL_MAX_STRING_LEN + 5]; int ret, end = used - 1; + + /* names are limited to certain characters so need no escaping */ + snprintf(tmp_name, sizeof(tmp_name), "{\"%s\":\"", name); if (used <= 2) /* assume empty, since minimum is '{}' */ - return __json_snprintf(buf, len, "{\"%s\":\"%s\"}", name, val); + return __json_format_str(buf, len, tmp_name, val, "\"}");
- ret = __json_snprintf(buf + end, len - end, ",\"%s\":\"%s\"}", - name, val); + tmp_name[0] = ','; /* replace '{' with ',' at start */ + ret = __json_format_str(buf + end, len - end, tmp_name, val, "\"}"); return ret == 0 ? used : end + ret; }
From: Bruce Richardson bruce.richardson@intel.com
Limit the telemetry command characters to the minimum set needed for current implementations. This prevents issues with invalid json characters needing to be escaped on replies.
Signed-off-by: Bruce Richardson bruce.richardson@intel.com Acked-by: Ciara Power ciara.power@intel.com Acked-by: Morten Brørup mb@smartsharesystems.com Acked-by: Chengwen Feng fengchengwen@huawei.com --- lib/telemetry/telemetry.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index d4a7838ded..f0be50b2bf 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -3,6 +3,7 @@ */
#ifndef RTE_EXEC_ENV_WINDOWS +#include <ctype.h> #include <unistd.h> #include <pthread.h> #include <sys/socket.h> @@ -71,12 +72,19 @@ int rte_telemetry_register_cmd(const char *cmd, telemetry_cb fn, const char *help) { struct cmd_callback *new_callbacks; + const char *cmdp = cmd; int i = 0;
if (strlen(cmd) >= MAX_CMD_LEN || fn == NULL || cmd[0] != '/' || strlen(help) >= RTE_TEL_MAX_STRING_LEN) return -EINVAL;
+ while (*cmdp != '\0') { + if (!isalnum(*cmdp) && *cmdp != '_' && *cmdp != '/') + return -EINVAL; + cmdp++; + } + rte_spinlock_lock(&callback_sl); new_callbacks = realloc(callbacks, sizeof(callbacks[0]) * (num_callbacks + 1)); if (new_callbacks == NULL) {
From: Bruce Richardson bruce.richardson@intel.com
When preparing the json response to a telemetry socket query, the code for prefixing the command name and appending the final "}" to the response was duplicated for multiple reply types. Taking this code out of the switch statement reduces the duplication and makes the code more maintainable.
For completeness of testing, add in a test case to validate the "null" response type - the only leg of the switch statement not already covered by an existing test case in the telemetry_data tests.
Signed-off-by: Bruce Richardson bruce.richardson@intel.com Acked-by: Ciara Power ciara.power@intel.com Acked-by: Morten Brørup mb@smartsharesystems.com Acked-by: Chengwen Feng fengchengwen@huawei.com --- lib/telemetry/telemetry.c | 35 ++++++++++++----------------------- 1 file changed, 12 insertions(+), 23 deletions(-)
diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index f0be50b2bf..25ab6ed877 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -235,27 +235,22 @@ output_json(const char *cmd, const struct rte_tel_data *d, int s)
RTE_BUILD_BUG_ON(sizeof(out_buf) < MAX_CMD_LEN + RTE_TEL_MAX_SINGLE_STRING_LEN + 10); + + prefix_used = snprintf(out_buf, sizeof(out_buf), "{\"%.*s\":", + MAX_CMD_LEN, cmd); + cb_data_buf = &out_buf[prefix_used]; + buf_len = sizeof(out_buf) - prefix_used - 1; /* space for '}' */ + switch (d->type) { case RTE_TEL_NULL: - used = snprintf(out_buf, sizeof(out_buf), "{\"%.*s\":null}", - MAX_CMD_LEN, cmd ? cmd : "none"); + used = strlcpy(cb_data_buf, "null", buf_len); break; - case RTE_TEL_STRING: - prefix_used = snprintf(out_buf, sizeof(out_buf), "{\"%.*s\":", - MAX_CMD_LEN, cmd); - cb_data_buf = &out_buf[prefix_used]; - buf_len = sizeof(out_buf) - prefix_used - 1; /* space for '}' */
+ case RTE_TEL_STRING: used = rte_tel_json_str(cb_data_buf, buf_len, 0, d->data.str); - used += prefix_used; - used += strlcat(out_buf + used, "}", sizeof(out_buf) - used); break; - case RTE_TEL_DICT: - prefix_used = snprintf(out_buf, sizeof(out_buf), "{\"%.*s\":", - MAX_CMD_LEN, cmd); - cb_data_buf = &out_buf[prefix_used]; - buf_len = sizeof(out_buf) - prefix_used - 1; /* space for '}' */
+ case RTE_TEL_DICT: used = rte_tel_json_empty_obj(cb_data_buf, buf_len, 0); for (i = 0; i < d->data_len; i++) { const struct tel_dict_entry *v = &d->data.dict[i]; @@ -291,18 +286,12 @@ output_json(const char *cmd, const struct rte_tel_data *d, int s) } } } - used += prefix_used; - used += strlcat(out_buf + used, "}", sizeof(out_buf) - used); break; + case RTE_TEL_ARRAY_STRING: case RTE_TEL_ARRAY_INT: case RTE_TEL_ARRAY_U64: case RTE_TEL_ARRAY_CONTAINER: - prefix_used = snprintf(out_buf, sizeof(out_buf), "{\"%.*s\":", - MAX_CMD_LEN, cmd); - cb_data_buf = &out_buf[prefix_used]; - buf_len = sizeof(out_buf) - prefix_used - 1; /* space for '}' */ - used = rte_tel_json_empty_array(cb_data_buf, buf_len, 0); for (i = 0; i < d->data_len; i++) if (d->type == RTE_TEL_ARRAY_STRING) @@ -330,10 +319,10 @@ output_json(const char *cmd, const struct rte_tel_data *d, int s) if (!rec_data->keep) rte_tel_data_free(rec_data->data); } - used += prefix_used; - used += strlcat(out_buf + used, "}", sizeof(out_buf) - used); break; } + used += prefix_used; + used += strlcat(out_buf + used, "}", sizeof(out_buf) - used); if (write(s, out_buf, used) < 0) perror("Error writing to socket"); }
From: Bruce Richardson bruce.richardson@intel.com
The /help telemetry command prints out the help text for the given command passed in as parameter. However, entering /help without any parameters does not give any useful information as to the fact that you need to pass in a command to get help on. Update the command so it prints its own help text when called without any parameters.
Signed-off-by: Bruce Richardson bruce.richardson@intel.com Acked-by: Ciara Power ciara.power@intel.com Acked-by: Morten Brørup mb@smartsharesystems.com Acked-by: Chengwen Feng fengchengwen@huawei.com --- lib/telemetry/telemetry.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index 25ab6ed877..52048de55c 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -141,15 +141,17 @@ command_help(const char *cmd __rte_unused, const char *params, struct rte_tel_data *d) { int i; + /* if no parameters return our own help text */ + const char *to_lookup = (params == NULL ? cmd : params);
- if (!params) - return -1; rte_tel_data_start_dict(d); rte_spinlock_lock(&callback_sl); for (i = 0; i < num_callbacks; i++) - if (strcmp(params, callbacks[i].cmd) == 0) { - rte_tel_data_add_dict_string(d, params, - callbacks[i].help); + if (strcmp(to_lookup, callbacks[i].cmd) == 0) { + if (params == NULL) + rte_tel_data_string(d, callbacks[i].help); + else + rte_tel_data_add_dict_string(d, params, callbacks[i].help); break; } rte_spinlock_unlock(&callback_sl);