[PATCH v2] PCI/DPC: Check host->native_dpc before enabling DPC service
by Yicong Yang
Per the Downstream Port Containment Related Enhancements ECN [1],
Table 4-6 (Interpretation of _OSC Control Field Returned Value),
bit 7 of the _OSC control return value is defined as follows:
"Firmware sets this bit to 1 to grant the OS control over PCI Express
Downstream Port Containment configuration."
"If control of this feature was requested and denied,
or was not requested, the firmware returns this bit set to 0."
We store bit 7 of the _OSC control return value in host->native_dpc,
so check it before enabling the DPC service, as the firmware may not
have granted control.
[1] Downstream Port Containment Related Enhancements ECN,
Jan 28, 2019, affecting PCI Firmware Specification, Rev. 3.2
https://members.pcisig.com/wg/PCI-SIG/document/12888
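As a rough illustration of the check described above (the macro name and helper below are hypothetical, not the kernel's actual defines), interpreting bit 7 of the _OSC control return value might look like:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mask for bit 7 of the _OSC control return value; the
 * real kernel uses its own OSC_* defines for this. */
#define OSC_PCI_EXPRESS_DPC_CONTROL (1u << 7)

/* Returns true when firmware granted the OS control over DPC. */
static bool os_owns_dpc(uint32_t osc_control_ret)
{
	return (osc_control_ret & OSC_PCI_EXPRESS_DPC_CONTROL) != 0;
}
```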
Signed-off-by: Yicong Yang <yangyicong(a)hisilicon.com>
---
Changes since v1:
- use correct reference for _OSC control return value
drivers/pci/pcie/portdrv_core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
index e1fed664..7445d03 100644
--- a/drivers/pci/pcie/portdrv_core.c
+++ b/drivers/pci/pcie/portdrv_core.c
@@ -253,7 +253,8 @@ static int get_port_device_capability(struct pci_dev *dev)
*/
if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC) &&
pci_aer_available() &&
- (pcie_ports_dpc_native || (services & PCIE_PORT_SERVICE_AER)))
+ (pcie_ports_dpc_native ||
+ ((services & PCIE_PORT_SERVICE_AER) && host->native_dpc)))
services |= PCIE_PORT_SERVICE_DPC;
if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
--
2.8.1
1 year, 6 months
[RFC PATCH 0/5] KVM/ARM64 Add support for pinned VMIDs
by Shameer Kolothum
On an ARM64 system with an SMMUv3 implementation that fully supports
the Broadcast TLB Maintenance (BTM) feature as part of the Distributed
Virtual Memory (DVM) protocol, the CPU TLB invalidate instructions are
received by the SMMUv3. This is very useful when the SMMUv3 shares
page tables with the CPU (e.g. the guest SVA use case). For this to work,
the SMMU must use the same VMID that is allocated by KVM to configure
the stage 2 translations. At present, KVM VMID allocations are recycled
on rollover and may change as a result. This creates issues if we
have to share the KVM VMID with the SMMU.
Please see the discussion here,
https://lore.kernel.org/linux-iommu/20200522101755.GA3453945@myrica/
This series proposes a way to share the VMID between KVM and IOMMU
driver by,
1. Splitting the KVM VMID space into two equal halves based on the
command line option "kvm-arm.pinned_vmid_enable".
2. First half of the VMID space follows the normal recycle on rollover
policy.
3. Second half of the VMID space doesn't roll over and is used to
allocate pinned VMIDs.
4. Provides helper function to retrieve the KVM instance associated
with a device(if it is part of a vfio group).
5. Introduces generic interfaces to get/put pinned KVM VMIDs.
Open Items:
1. I couldn't figure out a way to determine whether a platform actually
fully supports DVM/BTM or not. Not sure we can take a call based on
SMMUv3 BTM feature bit alone. Probably we can get it from firmware
via IORT?
2. The current splitting of VMID space is only one way to do this and
probably not the best. Maybe we can follow the pinned ASID method used
in SVA code. Suggestions welcome here.
3. The detach_pasid_table() interface is not very clear to me as the current
Qemu prototype is not using that. This requires fixing from my side.
This is based on Jean-Philippe's SVA series[1] and Eric's SMMUv3 dual-stage
support series[2].
The branch with the whole vSVA + BTM solution is here,
https://github.com/hisilicon/kernel-dev/tree/5.10-rc4-2stage-v13-vsva-btm...
This is lightly tested on a HiSilicon D06 platform with uacce/zip dev test tool,
./zip_sva_per -k tlb
Thanks,
Shameer
1. https://github.com/Linaro/linux-kernel-uadk/commits/uacce-devel-5.10
2. https://lore.kernel.org/linux-iommu/20201118112151.25412-1-eric.auger@red...
Shameer Kolothum (5):
vfio: Add a helper to retrieve kvm instance from a dev
KVM: Add generic infrastructure to support pinned VMIDs
KVM: ARM64: Add support for pinned VMIDs
iommu/arm-smmu-v3: Use pinned VMID for NESTED stage with BTM
KVM: arm64: Make sure pinned vmid is released on VM exit
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/arm.c | 116 +++++++++++++++++++-
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++-
drivers/vfio/vfio.c | 12 ++
include/linux/kvm_host.h | 17 +++
include/linux/vfio.h | 1 +
virt/kvm/Kconfig | 2 +
virt/kvm/kvm_main.c | 25 +++++
9 files changed, 220 insertions(+), 5 deletions(-)
--
2.17.1
1 year, 6 months
[PATCH net-next RFC 0/2] add elevated refcnt support for page pool
by Yunsheng Lin
This patchset adds elevated refcnt support for the page pool
and enables skb frag page recycling based on the page pool
in the hns3 driver.
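The core idea can be sketched in plain C, independent of the actual page pool API (the struct and helpers below are a toy model, not the kernel code): the pool elevates the buffer's refcount once per fragment handed out, and the buffer is recycled back into the pool only when the last fragment reference drops.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of elevated-refcount recycling. */
struct buf {
	int refcnt;    /* one reference per outstanding fragment */
	bool in_pool;  /* true when the buffer is back in the pool */
};

/* Hand out a fragment of the buffer, elevating its refcount. */
static void pool_give_frag(struct buf *b)
{
	b->refcnt++;
	b->in_pool = false;
}

/* Drop a fragment reference; recycle instead of freeing on the last put. */
static void frag_put(struct buf *b)
{
	if (--b->refcnt == 0)
		b->in_pool = true;
}
```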
Yunsheng Lin (2):
page_pool: add page recycling support based on elevated refcnt
net: hns3: support skb's frag page recycling based on page pool
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 79 +++++++-
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h | 3 +
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c | 1 +
drivers/net/ethernet/marvell/mvneta.c | 6 +-
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 +-
include/linux/mm_types.h | 2 +-
include/linux/skbuff.h | 4 +-
include/net/page_pool.h | 30 ++-
net/core/page_pool.c | 215 +++++++++++++++++----
9 files changed, 285 insertions(+), 57 deletions(-)
--
2.7.4
1 year, 6 months
[PATCH net-next RFC 2/2] net: hns3: support skb's frag page recycling based on page pool
by Yunsheng Lin
This patch adds skb frag page recycling support based on
the elevated refcnt support in the page pool.
Performance improves by about 10~20% with the IOMMU disabled,
and roughly doubles when the IOMMU is enabled and the iperf
server shares the same CPU with the irq/NAPI handling.
Signed-off-by: Yunsheng Lin <linyunsheng(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 79 ++++++++++++++++++++--
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h | 3 +
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c | 1 +
3 files changed, 78 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index cdb5f14..a76e0f7 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -3205,6 +3205,20 @@ static int hns3_alloc_buffer(struct hns3_enet_ring *ring,
unsigned int order = hns3_page_order(ring);
struct page *p;
+ if (ring->page_pool) {
+ p = page_pool_dev_alloc_frag(ring->page_pool,
+ &cb->page_offset);
+ if (unlikely(!p))
+ return -ENOMEM;
+
+ cb->priv = p;
+ cb->buf = page_address(p);
+ cb->dma = page_pool_get_dma_addr(p);
+ cb->type = DESC_TYPE_FRAG;
+ cb->reuse_flag = 0;
+ return 0;
+ }
+
p = dev_alloc_pages(order);
if (!p)
return -ENOMEM;
@@ -3227,8 +3241,12 @@ static void hns3_free_buffer(struct hns3_enet_ring *ring,
if (cb->type & (DESC_TYPE_SKB | DESC_TYPE_BOUNCE_HEAD |
DESC_TYPE_BOUNCE_ALL | DESC_TYPE_SGL_SKB))
napi_consume_skb(cb->priv, budget);
- else if (!HNAE3_IS_TX_RING(ring) && cb->pagecnt_bias)
- __page_frag_cache_drain(cb->priv, cb->pagecnt_bias);
+ else if (!HNAE3_IS_TX_RING(ring)) {
+ if (cb->type & DESC_TYPE_PAGE && cb->pagecnt_bias)
+ __page_frag_cache_drain(cb->priv, cb->pagecnt_bias);
+ else if (cb->type & DESC_TYPE_FRAG)
+ page_pool_put_full_page(ring->page_pool, cb->priv, false);
+ }
memset(cb, 0, sizeof(*cb));
}
@@ -3315,7 +3333,7 @@ static int hns3_alloc_and_map_buffer(struct hns3_enet_ring *ring,
int ret;
ret = hns3_alloc_buffer(ring, cb);
- if (ret)
+ if (ret || ring->page_pool)
goto out;
ret = hns3_map_buffer(ring, cb);
@@ -3337,7 +3355,8 @@ static int hns3_alloc_and_attach_buffer(struct hns3_enet_ring *ring, int i)
if (ret)
return ret;
- ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma);
+ ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
+ ring->desc_cb[i].page_offset);
return 0;
}
@@ -3367,7 +3386,8 @@ static void hns3_replace_buffer(struct hns3_enet_ring *ring, int i,
{
hns3_unmap_buffer(ring, &ring->desc_cb[i]);
ring->desc_cb[i] = *res_cb;
- ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma);
+ ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
+ ring->desc_cb[i].page_offset);
ring->desc[i].rx.bd_base_info = 0;
}
@@ -3539,6 +3559,12 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
u32 frag_size = size - pull_len;
bool reused;
+ if (ring->page_pool) {
+ skb_add_rx_frag(skb, i, desc_cb->priv, frag_offset,
+ frag_size, truesize);
+ return;
+ }
+
/* Avoid re-using remote or pfmem page */
if (unlikely(!dev_page_is_reusable(desc_cb->priv)))
goto out;
@@ -3856,6 +3882,9 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length,
/* We can reuse buffer as-is, just make sure it is reusable */
if (dev_page_is_reusable(desc_cb->priv))
desc_cb->reuse_flag = 1;
+ else if (desc_cb->type & DESC_TYPE_FRAG)
+ page_pool_put_full_page(ring->page_pool, desc_cb->priv,
+ false);
else /* This page cannot be reused so discard it */
__page_frag_cache_drain(desc_cb->priv,
desc_cb->pagecnt_bias);
@@ -3863,6 +3892,10 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length,
hns3_rx_ring_move_fw(ring);
return 0;
}
+
+ if (ring->page_pool)
+ skb_mark_for_recycle(skb);
+
u64_stats_update_begin(&ring->syncp);
ring->stats.seg_pkt_cnt++;
u64_stats_update_end(&ring->syncp);
@@ -3901,6 +3934,10 @@ static int hns3_add_frag(struct hns3_enet_ring *ring)
"alloc rx fraglist skb fail\n");
return -ENXIO;
}
+
+ if (ring->page_pool)
+ skb_mark_for_recycle(new_skb);
+
ring->frag_num = 0;
if (ring->tail_skb) {
@@ -4705,6 +4742,30 @@ static void hns3_put_ring_config(struct hns3_nic_priv *priv)
priv->ring = NULL;
}
+static void hns3_alloc_page_pool(struct hns3_enet_ring *ring)
+{
+ struct page_pool_params pp_params = {
+ .flags = PP_FLAG_DMA_MAP | PP_FLAG_PAGECNT_BIAS,
+ .order = hns3_page_order(ring),
+ .pool_size = ring->desc_num * hns3_buf_size(ring) / PAGE_SIZE,
+ .nid = dev_to_node(ring_to_dev(ring)),
+ .dev = ring_to_dev(ring),
+ .dma_dir = DMA_FROM_DEVICE,
+ .offset = 0,
+ .max_len = PAGE_SIZE,
+ .frag_size = hns3_buf_size(ring),
+ };
+
+ ring->page_pool = page_pool_create(&pp_params);
+ if (IS_ERR(ring->page_pool)) {
+ dev_warn(ring_to_dev(ring), "page pool creation failed: %ld\n",
+ PTR_ERR(ring->page_pool));
+ ring->page_pool = NULL;
+ } else {
+ dev_info(ring_to_dev(ring), "page pool creation succeeded\n");
+ }
+}
+
static int hns3_alloc_ring_memory(struct hns3_enet_ring *ring)
{
int ret;
@@ -4724,6 +4785,8 @@ static int hns3_alloc_ring_memory(struct hns3_enet_ring *ring)
goto out_with_desc_cb;
if (!HNAE3_IS_TX_RING(ring)) {
+ hns3_alloc_page_pool(ring);
+
ret = hns3_alloc_ring_buffers(ring);
if (ret)
goto out_with_desc;
@@ -4764,6 +4827,12 @@ void hns3_fini_ring(struct hns3_enet_ring *ring)
devm_kfree(ring_to_dev(ring), tx_spare);
ring->tx_spare = NULL;
}
+
+ if (!HNAE3_IS_TX_RING(ring) && ring->page_pool) {
+ page_pool_destroy(ring->page_pool);
+ ring->page_pool = NULL;
+ dev_info(ring_to_dev(ring), "page pool destroyed\n");
+ }
}
static int hns3_buf_size2type(u32 buf_size)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index 15af3d9..115c0ce 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -6,6 +6,7 @@
#include <linux/dim.h>
#include <linux/if_vlan.h>
+#include <net/page_pool.h>
#include "hnae3.h"
@@ -307,6 +308,7 @@ enum hns3_desc_type {
DESC_TYPE_BOUNCE_ALL = 1 << 3,
DESC_TYPE_BOUNCE_HEAD = 1 << 4,
DESC_TYPE_SGL_SKB = 1 << 5,
+ DESC_TYPE_FRAG = 1 << 6,
};
struct hns3_desc_cb {
@@ -451,6 +453,7 @@ struct hns3_enet_ring {
struct hnae3_queue *tqp;
int queue_index;
struct device *dev; /* will be used for DMA mapping of descriptors */
+ struct page_pool *page_pool;
/* statistic */
struct ring_stats stats;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
index 82061ab..6794d88 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
@@ -983,6 +983,7 @@ static struct hns3_enet_ring *hns3_backup_ringparam(struct hns3_nic_priv *priv)
memcpy(&tmp_rings[i], &priv->ring[i],
sizeof(struct hns3_enet_ring));
tmp_rings[i].skb = NULL;
+ priv->ring[i].page_pool = NULL;
}
return tmp_rings;
--
2.7.4
1 year, 7 months
[PATCH net-next v2 0/2] add benchmark selftest and optimization for ptr_ring
by Yunsheng Lin
Patch 1: add a selftest app to benchmark the performance
of ptr_ring.
Patch 2: make __ptr_ring_empty() checking more reliable
and use the just added selftest to benchmark the
performance impact.
V2: add patch 1 and add performance data for patch 2.
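ptr_ring detects emptiness by checking whether the consumer slot holds a NULL pointer, so the empty check touches only one array entry. A simplified single-threaded model of that convention (not the kernel implementation, which adds batching and memory ordering):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 4

/* Simplified model: as in ptr_ring, a NULL slot means "nothing here",
 * so the empty check only has to read the consumer slot. */
struct ring {
	void *queue[RING_SIZE];
	int producer;
	int consumer;
};

static bool ring_produce(struct ring *r, void *ptr)
{
	if (r->queue[r->producer])
		return false;                      /* ring is full */
	r->queue[r->producer] = ptr;
	r->producer = (r->producer + 1) % RING_SIZE;
	return true;
}

static void *ring_consume(struct ring *r)
{
	void *ptr = r->queue[r->consumer];
	if (ptr) {
		r->queue[r->consumer] = NULL;      /* mark slot empty */
		r->consumer = (r->consumer + 1) % RING_SIZE;
	}
	return ptr;
}

static bool ring_empty(struct ring *r)
{
	return r->queue[r->consumer] == NULL;
}
```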
Yunsheng Lin (2):
selftests/ptr_ring: add benchmark application for ptr_ring
ptr_ring: make __ptr_ring_empty() checking more reliable
MAINTAINERS | 5 +
include/linux/ptr_ring.h | 25 ++-
tools/testing/selftests/ptr_ring/Makefile | 6 +
tools/testing/selftests/ptr_ring/ptr_ring_test.c | 249 +++++++++++++++++++++++
tools/testing/selftests/ptr_ring/ptr_ring_test.h | 150 ++++++++++++++
5 files changed, 426 insertions(+), 9 deletions(-)
create mode 100644 tools/testing/selftests/ptr_ring/Makefile
create mode 100644 tools/testing/selftests/ptr_ring/ptr_ring_test.c
create mode 100644 tools/testing/selftests/ptr_ring/ptr_ring_test.h
--
2.7.4
1 year, 7 months
[PATCH net] net: hns3: Fixes+Refactors the broken set channel error fallback logic
by Salil Mehta
The fallback logic of set channels, when set_channels() fails to
configure the TX Sched/RSS H/W configuration or when the code that
brings down/restores the client beforehand/thereafter fails, is not
handled properly. Fix and refactor the code to handle the errors
properly and to improve readability.
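The fallback pattern this refactor aims for — try the new configuration, and on failure retry once with the old one — can be sketched generically (the helpers below are illustrative stand-ins, not the driver's functions; the pretend hardware limit of 8 queues is invented for the example):

```c
#include <assert.h>

/* Stand-in for the real set_channels() hardware call; pretend the
 * hardware rejects more than 8 queues. */
static int set_channels_hw(int tqp_num)
{
	return tqp_num > 8 ? -1 : 0;
}

/* Apply the new queue count; on failure, revert once to the old one.
 * Returns the result of the first attempt. */
static int change_channels(int old_num, int new_num, int *active)
{
	if (set_channels_hw(new_num) == 0) {
		*active = new_num;
		return 0;
	}
	if (set_channels_hw(old_num) == 0)
		*active = old_num;   /* revert succeeded */
	return -1;
}
```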
Fixes: 3a5a5f06d4d2 ("net: hns3: revert to old channel when setting new channel num fail")
Signed-off-by: Salil Mehta <salil.mehta(a)huawei.com>
Signed-off-by: Peng Li <lipeng321(a)huawei.com>
---
.../net/ethernet/hisilicon/hns3/hns3_enet.c | 77 +++++++++++--------
1 file changed, 47 insertions(+), 30 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index bf4302a5cf95..fbb0f4c9b98e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -4692,24 +4692,60 @@ static int hns3_reset_notify(struct hnae3_handle *handle,
static int hns3_change_channels(struct hnae3_handle *handle, u32 new_tqp_num,
bool rxfh_configured)
{
- int ret;
+ const struct hnae3_ae_ops *ops = handle->ae_algo->ops;
+ u32 org_tqp_num = handle->kinfo.num_tqps;
+ struct device *dev = &handle->pdev->dev;
+ u32 req_tqp_num = new_tqp_num;
+ bool revert_old_config = false;
+ int ret, retval = 0;
+
+ /* bring down the client */
+ ret = hns3_reset_notify(handle, HNAE3_DOWN_CLIENT);
+ if (ret) {
+ dev_err(dev, "client down fail, this shouldn't have happened!\n");
+ return ret;
+ }
- ret = handle->ae_algo->ops->set_channels(handle, new_tqp_num,
- rxfh_configured);
+ ret = hns3_reset_notify(handle, HNAE3_UNINIT_CLIENT);
if (ret) {
- dev_err(&handle->pdev->dev,
- "Change tqp num(%u) fail.\n", new_tqp_num);
+ dev_err(dev, "client uninit fail, this shouldn't have happened!\n");
return ret;
}
+revert_old_tpqs_config:
+ /* update the TX Sched and RSS config in the H/W */
+ ret = ops->set_channels(handle, req_tqp_num, rxfh_configured);
+ if (ret) {
+ dev_err(dev, "TX Sched/RSS H/W cfg fail(=%d) for %s TPQs\n",
+ ret, revert_old_config ? "old" : "new");
+ goto err_set_channel;
+ }
+
+ /* restore the client */
ret = hns3_reset_notify(handle, HNAE3_INIT_CLIENT);
- if (ret)
- return ret;
+ if (ret) {
+ dev_err(dev, "failed to initialize the client again\n");
+ goto err_set_channel;
+ }
ret = hns3_reset_notify(handle, HNAE3_UP_CLIENT);
- if (ret)
- hns3_reset_notify(handle, HNAE3_UNINIT_CLIENT);
+ if (ret) {
+ dev_err(dev, "Client up fail, this shouldn't have happened!\n");
+ return ret;
+ }
+
+ return retval;
+err_set_channel:
+ if (!revert_old_config) {
+ dev_warn(dev, "Revert TX Sched/RSS H/W config with old TPQs\n");
+ req_tqp_num = org_tqp_num;
+ revert_old_config = true;
+ retval = ret;
+ goto revert_old_tpqs_config;
+ }
+ dev_err(dev, "Bad, we couldn't revert to old TPQ H/W config\n");
+ dev_warn(dev, "Device may be in a bad state. Reload driver/Reset required!\n");
return ret;
}
@@ -4720,7 +4756,6 @@ int hns3_set_channels(struct net_device *netdev,
struct hnae3_knic_private_info *kinfo = &h->kinfo;
bool rxfh_configured = netif_is_rxfh_configured(netdev);
u32 new_tqp_num = ch->combined_count;
- u16 org_tqp_num;
int ret;
if (hns3_nic_resetting(netdev))
@@ -4750,28 +4785,10 @@ int hns3_set_channels(struct net_device *netdev,
"set channels: tqp_num=%u, rxfh=%d\n",
new_tqp_num, rxfh_configured);
- ret = hns3_reset_notify(h, HNAE3_DOWN_CLIENT);
- if (ret)
- return ret;
-
- ret = hns3_reset_notify(h, HNAE3_UNINIT_CLIENT);
- if (ret)
- return ret;
-
- org_tqp_num = h->kinfo.num_tqps;
ret = hns3_change_channels(h, new_tqp_num, rxfh_configured);
if (ret) {
- int ret1;
-
- netdev_warn(netdev,
- "Change channels fail, revert to old value\n");
- ret1 = hns3_change_channels(h, org_tqp_num, rxfh_configured);
- if (ret1) {
- netdev_err(netdev,
- "revert to old channel fail\n");
- return ret1;
- }
-
+ netdev_err(netdev, "fail(=%d) to set number of channels to %u\n", ret,
+ new_tqp_num);
return ret;
}
--
2.17.1
1 year, 7 months
[PATCH net-next v3 0/3] Some optimization for lockless qdisc
by Yunsheng Lin
Patch 1: remove unnecessary seqcount operation.
Patch 2: implement TCQ_F_CAN_BYPASS.
Patch 3: remove qdisc->empty.
Performance data for pktgen in queue_xmit mode + dummy netdev
with pfifo_fast:
threads unpatched patched delta
1 2.60Mpps 3.21Mpps +23%
2 3.84Mpps 5.56Mpps +44%
4 5.52Mpps 5.58Mpps +1%
8 2.77Mpps 2.76Mpps -0.3%
16 2.24Mpps 2.23Mpps -0.4%
Performance for IP forward testing: 1.05Mpps increases to
1.16Mpps, about 10% improvement.
V3: Add 'Acked-by' from Jakub and 'Tested-by' from Vladimir,
and resend based on latest net-next.
V2: Adjust the comment and commit log according to discussion
in V1.
V1: Drop RFC tag, add nolock_qdisc_is_empty() and do the qdisc
empty checking without the protection of qdisc->seqlock to
aviod doing unnecessary spin_trylock() for contention case.
RFC v4: Use STATE_MISSED and STATE_DRAINING to indicate non-empty
qdisc, and add patch 1 and 3.
Yunsheng Lin (3):
net: sched: avoid unnecessary seqcount operation for lockless qdisc
net: sched: implement TCQ_F_CAN_BYPASS for lockless qdisc
net: sched: remove qdisc->empty for lockless qdisc
include/net/sch_generic.h | 31 ++++++++++++++++++-------------
net/core/dev.c | 27 +++++++++++++++++++++++++--
net/sched/sch_generic.c | 23 ++++++++++++++++-------
3 files changed, 59 insertions(+), 22 deletions(-)
--
2.7.4
1 year, 7 months
[PATCH V2 net-next 0/3] net: hns3: add support for TX push
by Yufeng Mo
This series adds TX push support for the HNS3 ethernet driver.
V1 -> V2:
1. fix compile issue on non-arm64 system in patch #2
Huazhong Tan (2):
net: hns3: add support for TX push mode
net: hns3: add ethtool priv-flag for TX push
Xiongfeng Wang (1):
arm64: barrier: add DGH macros to control memory accesses merging
arch/arm64/include/asm/assembler.h | 7 ++
arch/arm64/include/asm/barrier.h | 1 +
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 2 +
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 86 +++++++++++++++++++++-
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h | 6 ++
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c | 21 +++++-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c | 2 +
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 11 ++-
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 8 ++
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c | 2 +
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c | 11 ++-
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h | 8 ++
12 files changed, 156 insertions(+), 9 deletions(-)
--
2.8.1
1 year, 7 months
[PATCH net v2] net: sched: add barrier to ensure correct ordering for lockless qdisc
by Yunsheng Lin
spin_trylock() was assumed to contain the implicit
barrier needed to ensure the correct ordering between
STATE_MISSED setting/clearing and STATE_MISSED checking
in commit a90c57f2cedd ("net: sched: fix packet stuck
problem for lockless qdisc").
But it turns out that spin_trylock() only has load-acquire
semantics. For strongly-ordered systems (like x86), the compiler
barrier implicitly contained in spin_trylock() seems enough
to ensure the correct ordering. But for weakly-ordered systems
(like arm64), store-release semantics are needed to ensure
the correct ordering, as clear_bit() and test_bit() are store
operations; see queued_spin_lock().
So add the explicit barrier to ensure the correct ordering
for the above case.
Fixes: a90c57f2cedd ("net: sched: fix packet stuck problem for lockless qdisc")
Signed-off-by: Yunsheng Lin <linyunsheng(a)huawei.com>
---
V2: add the missing Fixes tag.
The above ordering issue can easily cause an out-of-order packet
problem when testing the lockless qdisc bypass patchset [1] with
two iperf threads and one netdev queue on an arm64 system.
1. https://lkml.org/lkml/2021/6/2/1417
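The required ordering — make the STATE_MISSED store visible before the second trylock's load — corresponds to a full fence. A single-file model in C11 atomics terms (this is a model of the idea, not the kernel code, and the names are invented for the sketch):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool missed;                       /* models STATE_MISSED */
static atomic_flag seqlock = ATOMIC_FLAG_INIT;   /* models qdisc->seqlock */

/* Model of the qdisc_run_begin() path: set MISSED, then a full fence,
 * then retry the lock, so a CPU releasing the lock is guaranteed to
 * observe the MISSED bit. */
static bool run_begin(void)
{
	if (!atomic_flag_test_and_set(&seqlock))
		return true;                        /* got the lock */

	atomic_store_explicit(&missed, true, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);  /* the added barrier */

	if (!atomic_flag_test_and_set(&seqlock))
		return true;                        /* lock freed meanwhile */
	return false;                               /* owner will see MISSED */
}
```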
---
include/net/sch_generic.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 1e62551..5771030 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -163,6 +163,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
if (spin_trylock(&qdisc->seqlock))
goto nolock_empty;
+ /* Paired with smp_mb__after_atomic() to make sure
+ * STATE_MISSED checking is synchronized with clearing
+ * in pfifo_fast_dequeue().
+ */
+ smp_mb__before_atomic();
+
/* If the MISSED flag is set, it means other thread has
* set the MISSED flag before second spin_trylock(), so
* we can return false here to avoid multi cpus doing
@@ -180,6 +186,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
*/
set_bit(__QDISC_STATE_MISSED, &qdisc->state);
+ /* spin_trylock() only has load-acquire semantic, so use
+ * smp_mb__after_atomic() to ensure STATE_MISSED is set
+ * before doing the second spin_trylock().
+ */
+ smp_mb__after_atomic();
+
/* Retry again in case other CPU may not see the new flag
* after it releases the lock at the end of qdisc_run_end().
*/
--
2.7.4
1 year, 7 months